WorldWideScience

Sample records for normalised eigenvectors vnorm

  1. Matrix with Prescribed Eigenvectors

    Science.gov (United States)

    Ahmad, Faiz

    2011-01-01

    It is a routine matter for undergraduates to find eigenvalues and eigenvectors of a given matrix. But the converse problem of finding a matrix with prescribed eigenvalues and eigenvectors is rarely discussed in elementary texts on linear algebra. This problem is related to the "spectral" decomposition of a matrix and has important technical…

  2. Covariance expressions for eigenvalue and eigenvector problems

    Science.gov (United States)

    Liounis, Andrew J.

    There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue--eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite, differencing and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
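    A minimal numerical check of the kind described above can be sketched with the textbook first-order perturbation formula for a simple eigenvalue, dλ/dA_ij = conj(w_i) v_j / (wᴴv), where v and w are the right and left eigenvectors; this is an illustrative stand-in, not necessarily the exact expressions developed in the thesis.

```python
# Hedged sketch: closed-form derivative of a *simple* eigenvalue with respect to the
# entries of its parent matrix, validated against forward finite differences.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))                    # generic matrix -> simple eigenvalues

# Right and left eigenvectors of the eigenvalue with largest magnitude
evals, V = np.linalg.eig(A)
k = np.argmax(np.abs(evals))
lam, v = evals[k], V[:, k]
evals_l, W = np.linalg.eig(A.conj().T)
w = W[:, np.argmin(np.abs(evals_l.conj() - lam))]   # left eigenvector: w^H A = lam w^H

# Closed form: d lam / d A_ij = conj(w_i) v_j / (w^H v)
J_analytic = np.outer(w.conj(), v) / (w.conj() @ v)

# Forward finite differences, tracking the same eigenvalue after each perturbation
h = 1e-7
J_fd = np.zeros((n, n), dtype=complex)
for i in range(n):
    for j in range(n):
        Ap = A.copy()
        Ap[i, j] += h
        evals_p = np.linalg.eigvals(Ap)
        J_fd[i, j] = (evals_p[np.argmin(np.abs(evals_p - lam))] - lam) / h

print("max abs difference:", np.max(np.abs(J_analytic - J_fd)))   # ~1e-6 or smaller
```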

  3. Eigenvector space model to capture features of documents

    Directory of Open Access Journals (Sweden)

    Choi DONGJIN

    2011-09-01

    Full Text Available Eigenvectors are a special set of vectors associated with a linear system of equations. Because of their special properties, eigenvectors have been used widely in the computer vision area. When eigenvectors are applied to the information retrieval field, it is possible to obtain properties of a document corpus. To capture the properties of given documents, this paper conducts simple experiments to show that eigenvectors can also be used in document analysis. For the experiment, we use the short abstract documents of Wikipedia provided by DBpedia as a document corpus. To build the original square matrix, the most popular weighting method, tf-idf, is used. After calculating the eigenvectors of the original matrix, each vector is plotted in a 3D graph to find what the eigenvector means in document processing.
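    The abstract does not spell out how the square matrix is built from tf-idf, so the sketch below assumes one common choice, a document-document cosine-similarity matrix, and uses its three dominant eigenvectors as 3D coordinates; the tiny corpus is a stand-in for the DBpedia abstracts.

```python
# Hedged sketch: tf-idf -> square document-document similarity matrix -> eigenvectors.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "eigenvectors of a linear system of equations",
    "information retrieval over wikipedia abstracts",
    "computer vision uses eigenvectors of image data",
    "dbpedia provides short abstracts of wikipedia articles",
]

X = TfidfVectorizer().fit_transform(docs)      # documents x terms, tf-idf weighted
S = (X @ X.T).toarray()                        # square document-document similarity

# Leading eigenvectors of the symmetric similarity matrix -> coordinates for a 3D plot
evals, evecs = np.linalg.eigh(S)
coords = evecs[:, -3:]                         # three dominant eigenvectors
for d, c in zip(docs, coords):
    print(f"{c.round(3)}  {d[:40]}")
```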

  4. Image denoising via adaptive eigenvectors of graph Laplacian

    Science.gov (United States)

    Chen, Ying; Tang, Yibin; Xu, Ning; Zhou, Lin; Zhao, Li

    2016-07-01

    An image denoising method via adaptive eigenvectors of graph Laplacian (EGL) is proposed. Unlike the trivial parameter setting of the used eigenvectors in the traditional EGL method, in our method, the eigenvectors are adaptively selected in the whole denoising procedure. In detail, a rough image is first built with the eigenvectors from the noisy image, where the eigenvectors are selected by using the deviation estimation of the clean image. Subsequently, a guided image is effectively restored with a weighted average of the noisy and rough images. In this operation, the average coefficient is adaptively obtained to set the deviation of the guided image to approximately that of the clean image. Finally, the denoised image is achieved by a group-sparse model with the pattern from the guided image, where the eigenvectors are chosen in the error control of the noise deviation. Moreover, a modified group orthogonal matching pursuit algorithm is developed to efficiently solve the above group sparse model. The experiments show that our method not only improves the practicality of the EGL methods with the dependence reduction of the parameter setting, but also can outperform some well-developed denoising methods, especially for noise with large deviations.
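    As a bare-bones illustration of denoising with eigenvectors of a graph Laplacian (without the adaptive eigenvector selection, guided image, or group-sparse model of the method above), the sketch below projects a noisy signal onto the smoothest Laplacian eigenvectors of a simple path graph.

```python
# Hedged sketch: keep only the small-eigenvalue (smooth) eigenvectors of L = D - W.
import numpy as np

n = 200
t = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.3 * np.random.default_rng(1).normal(size=n)

# Path-graph Laplacian: each sample connected to its neighbour
W = np.zeros((n, n))
idx = np.arange(n - 1)
W[idx, idx + 1] = W[idx + 1, idx] = 1.0
L = np.diag(W.sum(axis=1)) - W

# Project the noisy signal onto the k eigenvectors with smallest eigenvalues
k = 20
evals, U = np.linalg.eigh(L)
Uk = U[:, :k]
denoised = Uk @ (Uk.T @ noisy)

print("noisy RMSE   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("denoised RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))
```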

  5. Localized eigenvectors of the non-backtracking matrix

    International Nuclear Information System (INIS)

    Kawamoto, Tatsuro

    2016-01-01

    In the case of graph partitioning, the emergence of localized eigenvectors can cause the standard spectral method to fail. To overcome this problem, the spectral method using a non-backtracking matrix was proposed. Based on numerical experiments on several examples of real networks, it has been claimed that the non-backtracking matrix does not exhibit localization of eigenvectors. However, we show that localized eigenvectors of the non-backtracking matrix can exist outside the spectral band, which may lead to deterioration in the performance of graph partitioning. (paper: interdisciplinary statistical mechanics)

  6. An adaptive left–right eigenvector evolution algorithm for vibration isolation control

    International Nuclear Information System (INIS)

    Wu, T Y

    2009-01-01

    The purpose of this research is to investigate the feasibility of utilizing an adaptive left and right eigenvector evolution (ALREE) algorithm for active vibration isolation. As depicted in the previous paper presented by Wu and Wang (2008 Smart Mater. Struct. 17 015048), the structural vibration behavior depends on both the disturbance rejection capability and mode shape distributions, which correspond to the left and right eigenvector distributions of the system, respectively. In this paper, a novel adaptive evolution algorithm is developed for finding the optimal combination of left–right eigenvectors of the vibration isolator, which is an improvement over the simultaneous left–right eigenvector assignment (SLREA) method proposed by Wu and Wang (2008 Smart Mater. Struct. 17 015048). The isolation performance index used in the proposed algorithm is defined by combining the orthogonality index of left eigenvectors and the modal energy ratio index of right eigenvectors. Through the proposed ALREE algorithm, both the left and right eigenvectors evolve such that the isolation performance index decreases, and therefore one can find the optimal combination of left–right eigenvectors of the closed-loop system for vibration isolation purposes. The optimal combination of left–right eigenvectors is then synthesized to determine the feedback gain matrix of the closed-loop system. The result of the active isolation control shows that the proposed method can be utilized to improve the vibration isolation performance compared with the previous approaches

  7. Infinitary Combinatory Reduction Systems: Normalising Reduction Strategies

    NARCIS (Netherlands)

    Ketema, J.; Simonsen, Jakob Grue

    2010-01-01

    We study normalising reduction strategies for infinitary Combinatory Reduction Systems (iCRSs). We prove that all fair, outermost-fair, and needed-fair strategies are normalising for orthogonal, fully-extended iCRSs. These facts properly generalise a number of results on normalising strategies in

  8. Semi-supervised eigenvectors for large-scale locally-biased learning

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Mahoney, Michael W.

    2014-01-01

    In many applications, one has side information, e.g., labels that are provided in a semi-supervised manner, about a specific target region of a large data set, and one wants to perform machine learning and data analysis tasks nearby that prespecified target region. For example, one might ... -based machine learning and data analysis tools. At root, the reason is that eigenvectors are inherently global quantities, thus limiting the applicability of eigenvector-based methods in situations where one is interested in very local properties of the data. In this paper, we address this issue by providing ... improved scaling properties. We provide several empirical examples demonstrating how these semi-supervised eigenvectors can be used to perform locally-biased learning; and we discuss the relationship between our results and recent machine learning algorithms that use global eigenvectors of the graph ...

  9. EIGENVECTOR-BASED CENTRALITY MEASURES FOR TEMPORAL NETWORKS*

    Science.gov (United States)

    TAYLOR, DANE; MYERS, SEAN A.; CLAUSET, AARON; PORTER, MASON A.; MUCHA, PETER J.

    2017-01-01

    Numerous centrality measures have been developed to quantify the importances of nodes in time-independent networks, and many of them can be expressed as the leading eigenvector of some matrix. With the increasing availability of network data that changes in time, it is important to extend such eigenvector-based centrality measures to time-dependent networks. In this paper, we introduce a principled generalization of network centrality measures that is valid for any eigenvector-based centrality. We consider a temporal network with N nodes as a sequence of T layers that describe the network during different time windows, and we couple centrality matrices for the layers into a supra-centrality matrix of size NT × NT whose dominant eigenvector gives the centrality of each node i at each time t. We refer to this eigenvector and its components as a joint centrality, as it reflects the importances of both the node i and the time layer t. We also introduce the concepts of marginal and conditional centralities, which facilitate the study of centrality trajectories over time. We find that the strength of coupling between layers is important for determining multiscale properties of centrality, such as localization phenomena and the time scale of centrality changes. In the strong-coupling regime, we derive expressions for time-averaged centralities, which are given by the zeroth-order terms of a singular perturbation expansion. We also study first-order terms to obtain first-order-mover scores, which concisely describe the magnitude of nodes’ centrality changes over time. As examples, we apply our method to three empirical temporal networks: the United States Ph.D. exchange in mathematics, costarring relationships among top-billed actors during the Golden Age of Hollywood, and citations of decisions from the United States Supreme Court. PMID:29046619
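    A toy version of the supra-centrality construction can be sketched as follows; the diagonal blocks here are plain layer adjacency matrices and the inter-layer coupling is an identity block of strength omega, so the exact scaling and the singular-perturbation analysis of the paper are not reproduced.

```python
# Hedged sketch: NT x NT supra-centrality matrix, dominant eigenvector -> joint centrality.
import numpy as np

rng = np.random.default_rng(2)
N, T, omega = 4, 3, 1.0

# One symmetric 0/1 adjacency matrix per time layer (invented toy temporal network)
layers = []
for _ in range(T):
    A = np.triu(rng.integers(0, 2, size=(N, N)), 1)
    layers.append((A + A.T).astype(float))

# Layer matrices on the diagonal, identity coupling between consecutive time layers
M = np.zeros((N * T, N * T))
for t, A in enumerate(layers):
    M[t * N:(t + 1) * N, t * N:(t + 1) * N] = A
for t in range(T - 1):
    M[t * N:(t + 1) * N, (t + 1) * N:(t + 2) * N] = omega * np.eye(N)
    M[(t + 1) * N:(t + 2) * N, t * N:(t + 1) * N] = omega * np.eye(N)

# Dominant eigenvector -> joint centrality of (node i, time layer t)
evals, evecs = np.linalg.eigh(M)
joint = np.abs(evecs[:, -1]).reshape(T, N)                 # rows: layers, columns: nodes
marginal_node = joint.sum(axis=0)                          # marginal node centralities
marginal_layer = joint.sum(axis=1)                         # marginal layer centralities
conditional = joint / joint.sum(axis=1, keepdims=True)     # node centrality given the layer
print(np.round(joint, 3))
print("marginal node centralities:", np.round(marginal_node, 3))
```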

  10. Eigenvectors of Open Bazhanov-Stroganov Quantum Chain

    Directory of Open Access Journals (Sweden)

    Nikolai Iorgov

    2006-02-01

    Full Text Available In this contribution we give an explicit formula for the eigenvectors of Hamiltonians of the open Bazhanov-Stroganov quantum chain. The Hamiltonians of this quantum chain are defined by the generating polynomial $A_n(\lambda)$, which is the upper-left matrix element of the monodromy matrix built from the Bazhanov-Stroganov $L$-operators. The formulas for the eigenvectors are derived using an iterative procedure by Kharchev and Lebedev and are given in terms of the $w_p(s)$-function, which is a root-of-unity analogue of the $\Gamma_q$-function.

  11. Laplacian eigenvectors of graphs Perron-Frobenius and Faber-Krahn type theorems

    CERN Document Server

    Biyikoğu, Türker; Stadler, Peter F

    2007-01-01

    Eigenvectors of graph Laplacians have not, to date, been the subject of expository articles and thus they may seem a surprising topic for a book. The authors propose two motivations for this new LNM volume: (1) There are fascinating subtle differences between the properties of solutions of Schrödinger equations on manifolds on the one hand, and their discrete analogs on graphs. (2) "Geometric" properties of (cost) functions defined on the vertex sets of graphs are of practical interest for heuristic optimization algorithms. The observation that the cost functions of quite a few of the well-studied combinatorial optimization problems are eigenvectors of associated graph Laplacians has prompted the investigation of such eigenvectors. The volume investigates the structure of eigenvectors and looks at the number of their sign graphs ("nodal domains"), Perron components, graphs with extremal properties with respect to eigenvectors. The Rayleigh quotient and rearrangement of graphs form the main methodology.

  12. Use of eigenvectors in understanding and correcting storage ring orbits

    International Nuclear Information System (INIS)

    Friedman, A.; Bozoki, E.

    1994-01-01

    The response matrix A is defined by the equation X=AΘ, where Θ is the kick vector and X is the resulting orbit vector. Since A is not necessarily a symmetric or even a square matrix, we symmetrize it by using A^T A. Then we find the eigenvalues and eigenvectors of this A^T A matrix. The physical interpretation of the eigenvectors for circular machines is discussed. The task of the orbit correction is to find the kick vector Θ for a given measured orbit vector X. We are presenting a method in which the kick vector is expressed as a linear combination of the eigenvectors. An additional advantage of this method is that it yields the smallest possible kick vector to correct the orbit. We will illustrate the application of the method to the NSLS X-ray and UV storage rings and the resulting measurements. It will be evident that the accuracy of this method allows the combination of the global orbit correction and local optimization of the orbit for beam lines and insertion devices. The eigenvector decomposition can also be used for optimizing kick vectors, taking advantage of the fact that eigenvectors with correspondingly small eigenvalues generate negligible orbit changes. Thus, one can reduce a kick vector calculated by any other correction method and still stay within the tolerance for orbit correction. The use of eigenvectors in accurately measuring the response matrix and the use of the eigenvalue decomposition orbit correction algorithm in digital feedback is discussed. (orig.)
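    The decomposition described above amounts to solving the least-squares problem in the eigenbasis of A^T A and discarding modes with negligible eigenvalues; a sketch with an invented response matrix (more correctors than monitors, so the truncation actually matters) might look like this.

```python
# Hedged sketch: orbit correction via eigen-decomposition of A^T A with truncation of
# small-eigenvalue modes, which yields the smallest kick vector reproducing the orbit.
import numpy as np

rng = np.random.default_rng(3)
n_bpm, n_kick = 8, 12                      # invented sizes: more correctors than monitors
A = rng.normal(size=(n_bpm, n_kick))       # response matrix: X = A @ theta
X = A @ rng.normal(size=n_kick) + 0.01 * rng.normal(size=n_bpm)   # "measured" orbit

# Eigen-decomposition of the symmetrised matrix A^T A
evals, V = np.linalg.eigh(A.T @ A)         # columns of V are eigenvectors

# Expand the kick vector in the eigenvector basis; eigenvectors with negligible
# eigenvalues generate negligible orbit change, so their coefficients are dropped.
tol = 1e-8 * evals.max()
rhs = V.T @ (A.T @ X)
coeffs = np.where(evals > tol, rhs / np.where(evals > tol, evals, 1.0), 0.0)
theta = V @ coeffs                         # smallest kick vector reproducing the orbit

print("orbit residual rms:", np.linalg.norm(X - A @ theta) / np.sqrt(n_bpm))
print("kept modes:", int(np.sum(evals > tol)), "of", n_kick)
```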

  13. Eigenvectors phase correction in inverse modal problem

    Science.gov (United States)

    Qiao, Guandong; Rahmatalla, Salam

    2017-12-01

    The solution of the inverse modal problem for the spatial parameters of mechanical and structural systems is heavily dependent on the quality of the modal parameters obtained from the experiments. Since experimental and environmental noise will always exist during modal testing, the resulting modal parameters are expected to be corrupted with different levels of noise. A novel methodology is presented in this work to mitigate the errors in the eigenvectors when solving the inverse modal problem for the spatial parameters. The phases of the eigenvector components were utilized as design variables within an optimization problem that minimizes the difference between the calculated and experimental transfer functions. The equation of motion in terms of the modal and spatial parameters was used as a constraint in the optimization problem. Constraints that preserve the positive and semi-positive definiteness and the inter-connectivity of the spatial matrices were implemented using semi-definite programming. Numerical examples utilizing noisy eigenvectors with augmented Gaussian white noise of 1%, 5%, and 10% were used to demonstrate the efficacy of the proposed method. The results showed that the proposed method is superior when compared with a known method in the literature.

  14. Normalised flood losses in Europe: 1970-2006

    Science.gov (United States)

    Barredo, J. I.

    2009-02-01

    This paper presents an assessment of normalised flood losses in Europe for the period 1970-2006. Normalisation provides an estimate of the losses that would occur if the floods from the past take place under current societal conditions. Economic losses from floods are the result of both societal and climatological factors. Failing to adjust for time-variant socio-economic factors produces loss amounts that are not directly comparable over time, but rather show an ever-growing trend for purely socio-economic reasons. This study has used available information on flood losses from the Emergency Events Database (EM-DAT) and the Natural Hazards Assessment Network (NATHAN). Following the conceptual approach of previous studies, we normalised flood losses by considering the effects of changes in population, wealth, and inflation at the country level. Furthermore, we removed inter-country price differences by adjusting the losses for purchasing power parities (PPP). We assessed normalised flood losses in 31 European countries. These include the member states of the European Union, Norway, Switzerland, Croatia, and the Former Yugoslav Republic of Macedonia. Results show no detectable sign of human-induced climate change in normalised flood losses in Europe. The observed increase in the original flood losses is mostly driven by societal factors.
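    The normalisation arithmetic follows the standard approach of scaling a historical loss by changes in population, per-capita wealth and the price level; the sketch below uses invented numbers purely to illustrate the calculation, not values from the study.

```python
# Hedged sketch of the loss-normalisation arithmetic described above (illustrative only).
def normalise_loss(loss_y, pop_ratio, wealth_ratio, price_ratio):
    """Loss a past flood would cause under present-day societal conditions.

    pop_ratio    = population_now / population_year_y
    wealth_ratio = real per-capita wealth_now / wealth_year_y
    price_ratio  = price level_now / price level_year_y (inflation correction)
    """
    return loss_y * pop_ratio * wealth_ratio * price_ratio

# Example: a 100 M EUR loss in 1975 with population x1.2, real wealth x2.5, prices x4
print(normalise_loss(100e6, 1.2, 2.5, 4.0) / 1e6, "M EUR under current conditions")
```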

  15. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    Science.gov (United States)

    Kusumoto, Shigekazu

    2017-12-01

    The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation as the fault dip affects estimations of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvector of the observed or calculated gravity gradient tensor on a profile and investigating its properties through numerical simulations. From numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body, and the dip of the maximum eigenvector closely follows the dip of the normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of the reverse fault. It was shown that the eigenvector of the gravity gradient tensor for estimating fault dips is determined by fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result that corresponded to conventional fault dip estimations by geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
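    The dip extraction itself reduces to an eigen-decomposition of a symmetric gradient tensor on the profile; the sketch below uses an invented 2x2 tensor and reports the orientation of the maximum eigenvector (normal-fault case) and minimum eigenvector (reverse-fault case), with the dip measured from the horizontal as an assumption rather than the paper's exact convention.

```python
# Hedged sketch: eigenvectors of a symmetric 2x2 gravity gradient tensor on a profile.
import numpy as np

# Invented tensor values: [[Gxx, Gxz], [Gxz, Gzz]] (x: horizontal, z: vertical)
G = np.array([[ 25.0, -40.0],
              [-40.0,  15.0]])

evals, evecs = np.linalg.eigh(G)                  # ascending eigenvalues
v_min, v_max = evecs[:, 0], evecs[:, -1]

def dip_from_horizontal(v):
    """Angle (degrees) between an eigenvector and the horizontal profile axis."""
    return np.degrees(np.arctan2(abs(v[1]), abs(v[0])))

print("max-eigenvector dip (normal-fault case):  %.1f deg" % dip_from_horizontal(v_max))
print("min-eigenvector dip (reverse-fault case): %.1f deg" % dip_from_horizontal(v_min))
```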

  16. The best of both worlds: Phylogenetic eigenvector regression and mapping

    Directory of Open Access Journals (Sweden)

    José Alexandre Felizola Diniz Filho

    2015-09-01

    Full Text Available Eigenfunction analyses have been widely used to model patterns of autocorrelation in time, space and phylogeny. In a phylogenetic context, Diniz-Filho et al. (1998) proposed what they called Phylogenetic Eigenvector Regression (PVR), in which pairwise phylogenetic distances among species are submitted to a Principal Coordinate Analysis, and eigenvectors are then used as explanatory variables in regression, correlation or ANOVAs. More recently, a new approach called Phylogenetic Eigenvector Mapping (PEM) was proposed, with the main advantage of explicitly incorporating a model-based warping of phylogenetic distance, in which an Ornstein-Uhlenbeck (O-U) process is fitted to the data before eigenvector extraction. Here we compared PVR and PEM with respect to estimated phylogenetic signal, correlated evolution under alternative evolutionary models and phylogenetic imputation, using simulated data. Despite the similarity between the two approaches, PEM has a slightly higher prediction ability and is more general than the original PVR. Even so, in a conceptual sense, PEM may provide a technique in the best of both worlds, combining the flexibility of data-driven and empirical eigenfunction analyses and the sound insights provided by evolutionary models well known in comparative analyses.
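    The eigenvector-extraction step of PVR is a principal coordinate analysis (PCoA) of the phylogenetic distance matrix, after which the eigenvectors enter an ordinary regression; the sketch below uses a synthetic distance matrix and trait, and omits the O-U warping of PEM.

```python
# Hedged sketch: PCoA of a pairwise distance matrix, eigenvectors used as regressors.
import numpy as np

rng = np.random.default_rng(4)
n = 6
D = rng.uniform(1, 10, size=(n, n))
D = (D + D.T) / 2
np.fill_diagonal(D, 0)                              # synthetic "phylogenetic" distances

# Double-centre the squared distances: B = -1/2 * J D^2 J, with J = I - 11^T / n
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J

evals, evecs = np.linalg.eigh(B)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]
k = 2                                               # number of phylogenetic eigenvectors kept
PV = evecs[:, :k] * np.sqrt(np.maximum(evals[:k], 0))   # principal coordinates

# Use the eigenvectors as explanatory variables in a regression of a (synthetic) trait
trait = rng.normal(size=n)
Xreg = np.column_stack([np.ones(n), PV])
beta, *_ = np.linalg.lstsq(Xreg, trait, rcond=None)
print("regression coefficients:", beta.round(3))
```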

  17. A subspace preconditioning algorithm for eigenvector/eigenvalue computation

    Energy Technology Data Exchange (ETDEWEB)

    Bramble, J.H.; Knyazev, A.V.; Pasciak, J.E.

    1996-12-31

    We consider the problem of computing a modest number of the smallest eigenvalues along with orthogonal bases for the corresponding eigen-spaces of a symmetric positive definite matrix. In our applications, the dimension of the matrix is large and the cost of inverting it is prohibitive. In this paper, we shall develop an effective parallelizable technique for computing these eigenvalues and eigenvectors utilizing subspace iteration and preconditioning. Estimates will be provided which show that the preconditioned method converges linearly and uniformly in the matrix dimension when used with a uniform preconditioner under the assumption that the approximating subspace is close enough to the span of the desired eigenvectors.

  18. Eigenvectors determination of the ribosome dynamics model during mRNA translation using the Kleene Star algorithm

    Science.gov (United States)

    Ernawati; Carnia, E.; Supriatna, A. K.

    2018-03-01

    Eigenvalues and eigenvectors in max-plus algebra have the same important role as eigenvalues and eigenvectors in conventional algebra. In max-plus algebra, eigenvalues and eigenvectors are useful for knowing the dynamics of a system, such as in train system scheduling, scheduling production systems and scheduling learning activities in moving classes. In the translation of proteins, in which the ribosome moves uni-directionally along the mRNA strand to recruit the amino acids that make up the protein, eigenvalues and eigenvectors are used to calculate protein production rates and the density of ribosomes on the mRNA. Based on this, it is important to examine the eigenvalues and eigenvectors in the process of protein translation. In this paper an eigenvector formula is given for the ribosome dynamics during mRNA translation by using the Kleene star algorithm, in which the resulting eigenvector formula is simpler and easier to apply to the system than that introduced elsewhere. This paper also discusses the properties of the matrix $B_\lambda^{\otimes n}$ of the model. Among the important properties, it always has the same elements in the first column for n = 1, 2, … if the eigenvalue is the initiation time, λ = τ_in, and that column is the eigenvector of the model corresponding to λ.
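    The generic max-plus recipe behind the Kleene-star construction can be sketched as follows: the eigenvalue is the maximum cycle mean, and columns of the Kleene star of B = A - λ at critical nodes are eigenvectors. The matrix A below is a toy example, not the ribosome-dynamics matrix of the paper.

```python
# Hedged sketch: max-plus eigenvalue (maximum cycle mean) and an eigenvector via the
# Kleene star B* = I (+) B (+) ... (+) B^(n-1), with B = A - lambda.
import numpy as np

NEG = -np.inf

def mp_mul(X, Y):
    """Max-plus matrix product: (X (x) Y)_ij = max_k (X_ik + Y_kj)."""
    return np.max(X[:, :, None] + Y[None, :, :], axis=1)

A = np.array([[3.0, 7.0, NEG],
              [2.0, 4.0, 5.0],
              [NEG, 1.0, 3.0]])      # toy system matrix
n = A.shape[0]

# Eigenvalue = maximum cycle mean, read off the diagonals of max-plus powers of A
P, lam = A.copy(), NEG
for k in range(1, n + 1):
    lam = max(lam, np.max(np.diag(P)) / k)
    P = mp_mul(P, A)

# Kleene star of B = A - lambda
B = A - lam
I = np.full((n, n), NEG)
np.fill_diagonal(I, 0.0)
Bstar, Bk = I.copy(), I.copy()
for _ in range(n - 1):
    Bk = mp_mul(Bk, B)
    Bstar = np.maximum(Bstar, Bk)    # elementwise max = max-plus addition
Bplus = mp_mul(B, Bstar)

# Columns of B* at critical nodes (zero diagonal of B+) are eigenvectors of A
crit = np.where(np.isclose(np.diag(Bplus), 0.0))[0]
v = Bstar[:, crit[0]]
print("eigenvalue:", lam)
print("eigenvector:", v)
print("A (x) v =", mp_mul(A, v[:, None]).ravel(), " vs  lam + v =", lam + v)
```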

  19. A spatial-spectral approach for deriving high signal quality eigenvectors for remote sensing image transformations

    DEFF Research Database (Denmark)

    Rogge, Derek; Bachmann, Martin; Rivard, Benoit

    2014-01-01

    Spectral decorrelation (transformations) methods have long been used in remote sensing. Transformation of the image data onto eigenvectors that comprise physically meaningful spectral properties (signal) can be used to reduce the dimensionality of hyperspectral images as the number of spectrally distinct signal sources composing a given hyperspectral scene is generally much less than the number of spectral bands. Determining eigenvectors dominated by signal variance as opposed to noise is a difficult task. Problems also arise in using these transformations on large images, multiple flight ... and spectral subsampling to the data, which is accomplished by deriving a limited set of eigenvectors for spatially contiguous subsets. These subset eigenvectors are compiled together to form a new noise reduced data set, which is subsequently used to derive a set of global orthogonal eigenvectors. Data from...

  20. A Markov chain representation of the normalized Perron–Frobenius eigenvector

    OpenAIRE

    Cerf, Raphaël; Dalmau, Joseba

    2017-01-01

    We consider the problem of finding the Perron–Frobenius eigenvector of a primitive matrix. Dividing each of the rows of the matrix by the sum of the elements in the row, the resulting new matrix is stochastic. We give a formula for the normalized Perron–Frobenius eigenvector of the original matrix, in terms of a realization of the Markov chain defined by the associated stochastic matrix. This formula is a generalization of the classical formula for the invariant probability measure of a Marko...

  1. RELATIVISTIC MAGNETOHYDRODYNAMICS: RENORMALIZED EIGENVECTORS AND FULL WAVE DECOMPOSITION RIEMANN SOLVER

    International Nuclear Information System (INIS)

    Anton, Luis; MartI, Jose M; Ibanez, Jose M; Aloy, Miguel A.; Mimica, Petar; Miralles, Juan A.

    2010-01-01

    We obtain renormalized sets of right and left eigenvectors of the flux vector Jacobians of the relativistic MHD equations, which are regular and span a complete basis in any physical state including degenerate ones. The renormalization procedure relies on the characterization of the degeneracy types in terms of the normal and tangential components of the magnetic field to the wave front in the fluid rest frame. Proper expressions of the renormalized eigenvectors in conserved variables are obtained through the corresponding matrix transformations. Our work completes previous analysis that present different sets of right eigenvectors for non-degenerate and degenerate states, and can be seen as a relativistic generalization of earlier work performed in classical MHD. Based on the full wave decomposition (FWD) provided by the renormalized set of eigenvectors in conserved variables, we have also developed a linearized (Roe-type) Riemann solver. Extensive testing against one- and two-dimensional standard numerical problems allows us to conclude that our solver is very robust. When compared with a family of simpler solvers that avoid the knowledge of the full characteristic structure of the equations in the computation of the numerical fluxes, our solver turns out to be less diffusive than HLL and HLLC, and comparable in accuracy to the HLLD solver. The amount of operations needed by the FWD solver makes it less efficient computationally than those of the HLL family in one-dimensional problems. However, its relative efficiency increases in multidimensional simulations.

  2. Chirality correlation within Dirac eigenvectors from domain wall fermions

    International Nuclear Information System (INIS)

    Blum, T.; Christ, N.; Cristian, C.; Liao, X.; Liu, G.; Mawhinney, R.; Wu, L.; Zhestkov, Y.; Dawson, C.

    2002-01-01

    In the dilute instanton gas model of the QCD vacuum, one expects a strong spatial correlation between chirality and the maxima of the Dirac eigenvectors with small eigenvalues. Following Horvath et al. we examine this question using lattice gauge theory within the quenched approximation. We extend the work of those authors by using weaker coupling, β=6.0, larger lattices, 16^4, and an improved fermion formulation, domain wall fermions. In contrast with this earlier work, we find a striking correlation between the magnitudes of the chirality density, |ψ^†(x)γ_5ψ(x)|, and the normal density, ψ^†(x)ψ(x), for the low-lying Dirac eigenvectors

  3. Supervised Object Class Colour Normalisation

    DEFF Research Database (Denmark)

    Riabchenko, Ekatarina; Lankinen, Jukka; Buch, Anders Glent

    2013-01-01

    Colour is an important cue in many applications of computer vision and image processing, but robust usage often requires estimation of the unknown illuminant colour. Usually, to obtain images invariant to the illumination conditions under which they were taken, colour normalisation is used. In this work, we develop such a colour normalisation technique, where true colours are not important per se but where examples of the same classes have photometrically consistent appearance. This is achieved by supervised estimation of a class-specific canonical colour space where the examples have minimal variation in their colours. We demonstrate the effectiveness of our method with qualitative and quantitative examples from the Caltech-101 data set and a real application of 3D pose estimation for robot grasping.

  4. Distinct types of eigenvector localization in networks

    Science.gov (United States)

    Pastor-Satorras, Romualdo; Castellano, Claudio

    2016-01-01

    The spectral properties of the adjacency matrix provide a trove of information about the structure and function of complex networks. In particular, the largest eigenvalue and its associated principal eigenvector are crucial in the understanding of nodes’ centrality and the unfolding of dynamical processes. Here we show that two distinct types of localization of the principal eigenvector may occur in heterogeneous networks. For synthetic networks with degree distribution P(q) ~ q^-γ, localization occurs on the largest hub if γ > 5/2; for γ < 5/2 a new type of localization arises on a mesoscopic subgraph associated with the shell with the largest index in the K-core decomposition. Similar evidence for the existence of distinct localization modes is found in the analysis of real-world networks. Our results open a new perspective on dynamical processes on networks and on a recently proposed alternative measure of node centrality based on the non-backtracking matrix.

  5. A teaching proposal for the study of Eigenvectors and Eigenvalues

    Directory of Open Access Journals (Sweden)

    María José Beltrán Meneu

    2017-03-01

    Full Text Available In this work, we present a teaching proposal which emphasizes visualization and physical applications in the study of eigenvectors and eigenvalues. These concepts are introduced using the notion of the moment of inertia of a rigid body and the GeoGebra software. The proposal was motivated after observing students’ difficulties when treating eigenvectors and eigenvalues from a geometric point of view. It was designed following a particular sequence of activities with the schema: exploration, introduction of concepts, structuring of knowledge and application, and considering the three worlds of mathematical thinking provided by Tall: embodied, symbolic and formal.

  6. Eigenvector centrality mapping for analyzing connectivity patterns in fMRI data of the human brain.

    Directory of Open Access Journals (Sweden)

    Gabriele Lohmann

    Full Text Available Functional magnetic resonance data acquired in a task-absent condition ("resting state") require new data analysis techniques that do not depend on an activation model. In this work, we introduce an alternative assumption- and parameter-free method based on a particular form of node centrality called eigenvector centrality. Eigenvector centrality attributes a value to each voxel in the brain such that a voxel receives a large value if it is strongly correlated with many other nodes that are themselves central within the network. Google's PageRank algorithm is a variant of eigenvector centrality. Thus far, other centrality measures - in particular "betweenness centrality" - have been applied to fMRI data using a pre-selected set of nodes consisting of several hundred elements. Eigenvector centrality is computationally much more efficient than betweenness centrality and does not require thresholding of similarity values so that it can be applied to thousands of voxels in a region of interest covering the entire cerebrum which would have been infeasible using betweenness centrality. Eigenvector centrality can be used on a variety of different similarity metrics. Here, we present applications based on linear correlations and on spectral coherences between fMRI time series. This latter approach allows us to draw conclusions about connectivity patterns in different spectral bands. We apply this method to fMRI data in task-absent conditions where subjects were in states of hunger or satiety. We show that eigenvector centrality is modulated by the state that the subjects were in. Our analyses demonstrate that eigenvector centrality is a computationally efficient tool for capturing intrinsic neural architecture on a voxel-wise level.
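    The core computation is a dominant-eigenvector (power-iteration) calculation on a non-negative voxel-by-voxel similarity matrix; the sketch below uses random stand-in time series and a shifted correlation matrix as one possible similarity choice (the paper also uses spectral coherence).

```python
# Hedged sketch: eigenvector centrality per "voxel" by power iteration on a similarity matrix.
import numpy as np

rng = np.random.default_rng(5)
n_vox, n_vol = 500, 120
ts = rng.normal(size=(n_vox, n_vol))          # stand-in for fMRI time series per voxel

S = (np.corrcoef(ts) + 1.0) / 2.0             # similarity in [0, 1]; non-negative, so the
                                              # Perron-Frobenius eigenvector is non-negative

# Power iteration for the dominant eigenvector (= eigenvector centrality per voxel)
c = np.ones(n_vox) / n_vox
for _ in range(100):
    c_new = S @ c
    c_new /= np.linalg.norm(c_new)
    if np.linalg.norm(c_new - c) < 1e-10:
        break
    c = c_new

centrality = c / c.sum()                      # one value per voxel, ready to map onto the brain
print(centrality[:5])
```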

  7. Motivating the Concept of Eigenvectors via Cryptography

    Science.gov (United States)

    Siap, Irfan

    2008-01-01

    New methods of teaching linear algebra in the undergraduate curriculum have attracted much interest lately. Most of this work is focused on evaluating and discussing the integration of special computer software into the Linear Algebra curriculum. In this article, I discuss my approach on introducing the concept of eigenvectors and eigenvalues,…

  8. Nuclear power 1984: Progressive normalisation

    International Nuclear Information System (INIS)

    Popp, M.

    1984-01-01

    The peaceful use of nuclear power is being integrated into the overall concept of a safe long-term power supply in West Germany. The progress of normalisation is shown particularly in the takeover of all stations of the nuclear fuel circuit by the economy, with the exception of the final storage of radioactive waste, which is the responsibility of the West German Government. Normalisation also means the withdrawal of the state from financing projects after completion of the two prototypes SNR-300 and THTR-300 and the German uranium enrichment plant. The state will, however, support future research and development projects in the nuclear field. The expansion of nuclear power capacity is at present being slowed down by the state of the economy, i.e. only nuclear power projects being built are proceeding. (orig./HP) [de

  9. Correlation of errors in the Monte Carlo fission source and the fission matrix fundamental-mode eigenvector

    International Nuclear Information System (INIS)

    Dufek, Jan; Holst, Gustaf

    2016-01-01

    Highlights: • Errors in the fission matrix eigenvector and fission source are correlated. • The error correlations depend on coarseness of the spatial mesh. • The error correlations are negligible when the mesh is very fine. - Abstract: Previous studies raised a question about the level of a possible correlation of errors in the cumulative Monte Carlo fission source and the fundamental-mode eigenvector of the fission matrix. A number of new methods tally the fission matrix during the actual Monte Carlo criticality calculation, and use its fundamental-mode eigenvector for various tasks. The methods assume the fission matrix eigenvector is a better representation of the fission source distribution than the actual Monte Carlo fission source, although the fission matrix and its eigenvectors do contain statistical and other errors. A recent study showed that the eigenvector could be used for an unbiased estimation of errors in the cumulative fission source if the errors in the eigenvector and the cumulative fission source were not correlated. Here we present new numerical study results that answer the question about the level of the possible error correlation. The results may be of importance to all methods that use the fission matrix. New numerical tests show that the error correlation is present at a level which strongly depends on properties of the spatial mesh used for tallying the fission matrix. The error correlation is relatively strong when the mesh is coarse, while the correlation weakens as the mesh gets finer. We suggest that the coarseness of the mesh is measured in terms of the value of the largest element in the tallied fission matrix as that way accounts for the mesh as well as system properties. In our test simulations, we observe only negligible error correlations when the value of the largest element in the fission matrix is about 0.1. Relatively strong error correlations appear when the value of the largest element in the fission matrix raises

  10. Methods for computing SN eigenvalues and eigenvectors of slab geometry transport problems

    International Nuclear Information System (INIS)

    Yavuz, Musa

    1998-01-01

    We discuss computational methods for computing the eigenvalues and eigenvectors of single energy-group neutral particle transport (S_N) problems in homogeneous slab geometry, with an arbitrary scattering anisotropy of order L. These eigensolutions are important when exact (or very accurate) solutions are desired for coarse spatial cell problems demanding rapid execution times. Three methods, one of which is 'new', are presented for determining the eigenvalues and eigenvectors of such S_N problems. In the first method, separation of variables is directly applied to the S_N equations. In the second method, common characteristics of the S_N and P_{N-1} equations are used. In the new method, the eigenvalues and eigenvectors can be computed provided that the cell-interface Green's functions (transmission and reflection factors) are known. Numerical results for S_4 test problems are given to compare the new method with the existing methods

  11. Methods for computing SN eigenvalues and eigenvectors of slab geometry transport problems

    International Nuclear Information System (INIS)

    Yavuz, M.

    1997-01-01

    We discuss computational methods for computing the eigenvalues and eigenvectors of single energy-group neutral particle transport (S_N) problems in homogeneous slab geometry, with an arbitrary scattering anisotropy of order L. These eigensolutions are important when exact (or very accurate) solutions are desired for coarse spatial cell problems demanding rapid execution times. Three methods, one of which is 'new', are presented for determining the eigenvalues and eigenvectors of such S_N problems. In the first method, separation of variables is directly applied to the S_N equations. In the second method, common characteristics of the S_N and P_{N-1} equations are used. In the new method, the eigenvalues and eigenvectors can be computed provided that the cell-interface Green's functions (transmission and reflection factors) are known. Numerical results for S_4 test problems are given to compare the new method with the existing methods. (author)

  12. Simple eigenvectors of unbounded operators of the type “normal plus compact”

    Directory of Open Access Journals (Sweden)

    Michael Gil'

    2015-01-01

    Full Text Available The paper deals with operators of the form \(A=S+B\), where \(B\) is a compact operator in a Hilbert space \(H\) and \(S\) is an unbounded normal one in \(H\), having a compact resolvent. We consider approximations of the eigenvectors of \(A\), corresponding to simple eigenvalues, by the eigenvectors of the operators \(A_n=S+B_n\) (\(n=1,2,\ldots\)), where \(B_n\) is an \(n\)-dimensional operator. In addition, we obtain the error estimate of the approximation.

  13. Acceleration of criticality analysis solution convergence by matrix eigenvector for a system with weak neutron interaction

    Energy Technology Data Exchange (ETDEWEB)

    Nomura, Yasushi; Takada, Tomoyuki; Kuroishi, Takeshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kadotani, Hiroyuki [Shizuoka Sangyo Univ., Iwata, Shizuoka (Japan)

    2003-03-01

    In the case of a Monte Carlo calculation to obtain the neutron multiplication factor for a system with weak neutron interaction, there can be problems concerning convergence of the solution. To address this difficulty in computer code calculations, a theoretical derivation was made from the general neutron transport equation, and acceleration of the solution convergence by using the matrix eigenvector is considered in this report. Accordingly, a matrix eigenvector calculation scheme, together with a procedure to accelerate convergence, was incorporated into the continuous energy Monte Carlo code MCNP. Furthermore, the effectiveness of the matrix-eigenvector acceleration of solution convergence was ascertained with the results obtained by applying it to two OECD/NEA criticality analysis benchmark problems. (author)

  14. Attitudes to Normalisation and Inclusive Education

    Science.gov (United States)

    Sanagi, Tomomi

    2016-01-01

    The purpose of this paper was to clarify the features of teachers' image on normalisation and inclusive education. The participants of the study were both mainstream teachers and special teachers. One hundred and thirty-eight questionnaires were analysed. (1) Teachers completed the questionnaire of SD (semantic differential) images on…

  15. Queer Literature in Spain: Pathways to Normalisation

    Directory of Open Access Journals (Sweden)

    Martínez-Expósito, Alfredo

    2013-06-01

    Full Text Available More than any other, the idea of normalisation has provoked deep divisions within queer activism both at a philosophical and also at a political level. At the root of these divisions lies the irreconcilable divergence between an agenda for social change, which advocates the need for society to accept all sexual behaviours and identities as normal, and an approach of radical resistance against some social structures that can only offer a bourgeois and conformist normalisation. Literary fiction and homo-gay-queer themed cinema have explored these and other sides of the idea of normalisation and have thus contributed to the taking of decisive steps: from the poetics of transgression towards the poetics of celebration and social transformation. In this paper we examine two of these literary normalisation strategies: the use of humour and the proliferation of discursive perspectives both in the cinema and in narrative fiction during the last decades.

  16. Computing the eigenvalues and eigenvectors of a fuzzy matrix

    Directory of Open Access Journals (Sweden)

    A. Kumar

    2012-08-01

    Full Text Available Computation of fuzzy eigenvalues and fuzzy eigenvectors of a fuzzy matrix is a challenging problem. Determining the maximal and minimal symmetric solution can help to find the eigenvalues. So, we try to compute these eigenvalues by determining the maximal and minimal symmetric solution of the fully fuzzy linear system $\widetilde{A}\widetilde{X} = \widetilde{\lambda}\widetilde{X}$.

  17. Random forest meteorological normalisation models for Swiss PM10 trend analysis

    Science.gov (United States)

    Grange, Stuart K.; Carslaw, David C.; Lewis, Alastair C.; Boleti, Eirini; Hueglin, Christoph

    2018-05-01

    Meteorological normalisation is a technique which accounts for changes in meteorology over time in an air quality time series. Controlling for such changes helps support robust trend analysis because there is more certainty that the observed trends are due to changes in emissions or chemistry, not changes in meteorology. Predictive random forest models (RF; a decision tree machine learning technique) were grown for 31 air quality monitoring sites in Switzerland using surface meteorological, synoptic scale, boundary layer height, and time variables to explain daily PM10 concentrations. The RF models were used to calculate meteorologically normalised trends which were formally tested and evaluated using the Theil-Sen estimator. Between 1997 and 2016, significantly decreasing normalised PM10 trends ranged between -0.09 and -1.16 µg m-3 yr-1 with urban traffic sites experiencing the greatest mean decrease in PM10 concentrations at -0.77 µg m-3 yr-1. Similar magnitudes have been reported for normalised PM10 trends for earlier time periods in Switzerland which indicates PM10 concentrations are continuing to decrease at similar rates as in the past. The ability for RF models to be interpreted was leveraged using partial dependence plots to explain the observed trends and relevant physical and chemical processes influencing PM10 concentrations. Notably, two regimes were suggested by the models which cause elevated PM10 concentrations in Switzerland: one related to poor dispersion conditions and a second resulting from high rates of secondary PM generation in deep, photochemically active boundary layers. The RF meteorological normalisation process was found to be robust, user friendly and simple to implement, and readily interpretable which suggests the technique could be useful in many air quality exploratory data analysis situations.
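    The normalisation idea can be sketched with scikit-learn: fit a random forest on meteorological and time variables, then average predictions over resampled meteorology so that the remaining variation reflects emissions and chemistry rather than weather. The data and predictor set below are synthetic stand-ins, not the Swiss monitoring data or the study's full predictor list.

```python
# Hedged sketch of random forest meteorological normalisation on synthetic data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
n = 2000
df = pd.DataFrame({
    "wind_speed": rng.gamma(2.0, 2.0, n),
    "temperature": rng.normal(10, 8, n),
    "day_of_year": rng.integers(1, 366, n),
    "trend": np.arange(n),                       # proxy for date
})
# Synthetic PM10: slow downward trend plus weather-driven variation and noise
df["pm10"] = 30 - 0.004 * df["trend"] + 20 / (1 + df["wind_speed"]) + rng.normal(0, 3, n)

features = ["wind_speed", "temperature", "day_of_year", "trend"]
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
rf.fit(df[features], df["pm10"])

# Normalisation: for each observation, average predictions over resampled meteorology
n_samples = 50
preds = np.zeros((n_samples, n))
met = ["wind_speed", "temperature"]              # only the weather variables are resampled
for k in range(n_samples):
    resampled = df[features].copy()
    resampled[met] = df[met].sample(frac=1, replace=True, random_state=k).to_numpy()
    preds[k] = rf.predict(resampled)
df["pm10_normalised"] = preds.mean(axis=0)       # trend now reflects emissions, not weather

print(df[["pm10", "pm10_normalised"]].describe().round(1))
```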

  18. Those Do What? Connecting Eigenvectors and Eigenvalues to the Rest of Linear Algebra: Using Visual Enhancements to Help Students Connect Eigenvectors to the Rest of Linear Algebra

    Science.gov (United States)

    Nyman, Melvin A.; Lapp, Douglas A.; St. John, Dennis; Berry, John S.

    2010-01-01

    This paper discusses student difficulties in grasping concepts from Linear Algebra--in particular, the connection of eigenvalues and eigenvectors to other important topics in linear algebra. Based on our prior observations from student interviews, we propose technology-enhanced instructional approaches that might positively impact student…

  19. Introducing carrying capacity-based normalisation in LCA: framework and development of references at midpoint level

    DEFF Research Database (Denmark)

    Bjørn, Anders; Hauschild, Michael Zwicky

    2015-01-01

    carrying capacity-based normalisation references. The purpose of this article is to present a framework for normalisation against carrying capacity-based references and to develop average normalisation references (NR) for Europe and the world for all those midpoint impact categories commonly included....... A literature review was carried out to identify scientifically sound thresholds for each impact category. Carrying capacities were then calculated from these thresholds and expressed in metrics identical to midpoint indicators giving priority to those recommended by ILCD. NR was expressed as the carrying...... ozone formation and soil quality were found to exceed carrying capacities several times.The developed carrying capacity-based normalisation references offer relevant supplementary reference information to the currently applied references based on society’s background interventions by supporting...

  20. Unveiling the significance of eigenvectors in diffusing non-Hermitian matrices by identifying the underlying Burgers dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Burda, Zdzislaw, E-mail: zdzislaw.burda@agh.edu.pl [AGH University of Science and Technology, Faculty of Physics and Applied Computer Science, al. Mickiewicza 30, PL-30059 Kraków (Poland); Grela, Jacek, E-mail: jacekgrela@gmail.com [M. Smoluchowski Institute of Physics and Mark Kac Complex Systems Research Centre, Jagiellonian University, PL-30348 Kraków (Poland); Nowak, Maciej A., E-mail: nowak@th.if.uj.edu.pl [M. Smoluchowski Institute of Physics and Mark Kac Complex Systems Research Centre, Jagiellonian University, PL-30348 Kraków (Poland); Tarnowski, Wojciech, E-mail: wojciech.tarnowski@uj.edu.pl [M. Smoluchowski Institute of Physics and Mark Kac Complex Systems Research Centre, Jagiellonian University, PL-30348 Kraków (Poland); Warchoł, Piotr, E-mail: piotr.warchol@uj.edu.pl [M. Smoluchowski Institute of Physics and Mark Kac Complex Systems Research Centre, Jagiellonian University, PL-30348 Kraków (Poland)

    2015-08-15

    Following our recent letter, we study in detail an entry-wise diffusion of non-hermitian complex matrices. We obtain an exact partial differential equation (valid for any matrix size N and arbitrary initial conditions) for evolution of the averaged extended characteristic polynomial. The logarithm of this polynomial has an interpretation of a potential which generates a Burgers dynamics in quaternionic space. The dynamics of the ensemble in the large N limit is completely determined by the coevolution of the spectral density and a certain eigenvector correlation function. This coevolution is best visible in an electrostatic potential of a quaternionic argument built of two complex variables, the first of which governs standard spectral properties while the second unravels the hidden dynamics of eigenvector correlation function. We obtain general formulas for the spectral density and the eigenvector correlation function for large N and for any initial conditions. We exemplify our studies by solving three examples, and we verify the analytic form of our solutions with numerical simulations.

  1. Normalised quantitative polymerase chain reaction for diagnosis of tuberculosis-associated uveitis.

    Science.gov (United States)

    Barik, Manas Ranjan; Rath, Soveeta; Modi, Rohit; Rana, Rajkishori; Reddy, Mamatha M; Basu, Soumyava

    2018-05-01

    Polymerase chain reaction (PCR)-based diagnosis of tuberculosis-associated uveitis (TBU) in TB-endemic countries is challenging due to likelihood of latent mycobacterial infection in both immune and non-immune cells. In this study, we investigated normalised quantitative PCR (nqPCR) in ocular fluids (aqueous/vitreous) for diagnosis of TBU in a TB-endemic population. Mycobacterial copy numbers (mpb64 gene) were normalised to host genome copy numbers (RNAse P RNA component H1 [RPPH1] gene) in TBU (n = 16) and control (n = 13) samples (discovery cohort). The mpb64:RPPH1 ratios (normalised value) from each TBU and control sample were tested against the current reference standard i.e. clinically-diagnosed TBU, to generate Receiver Operating Characteristic (ROC) curves. The optimum cut-off value of mpb64:RPPH1 ratio (0.011) for diagnosing TBU was identified from the highest Youden index. This cut-off value was then tested in a different cohort of TBU and controls (validation cohort, 20 cases and 18 controls), where it yielded specificity, sensitivity and diagnostic accuracy of 94.4%, 85.0%, and 89.4% respectively. The above values for conventional quantitative PCR (≥1 copy of mpb64 per reaction) were 61.1%, 90.0%, and 74.3% respectively. Normalisation markedly improved the specificity and diagnostic accuracy of quantitative PCR for diagnosis of TBU. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Normalising convenience food?

    DEFF Research Database (Denmark)

    Halkier, Bente

    2017-01-01

    The construction of convenience food as a social and cultural category for food provisioning, cooking and eating seems to slide between or across understandings of what is considered “proper food” in the existing discourses in everyday life and media. This article sheds light upon some...... of the social and cultural normativities around convenience food by describing the ways in which convenience food forms part of the daily life of young Danes. Theoretically, the article is based on a practice theoretical perspective. Empirically, the article builds upon a qualitative research project on food...... habits among Danes aged 20–25. The article presents two types of empirical patterns. The first types of patterns are the degree to which and the different ways in which convenience food is normalised to use among the young Danes. The second types of patterns are the normative places of convenient food...

  3. An exploration of diffusion tensor eigenvector variability within human calf muscles.

    Science.gov (United States)

    Rockel, Conrad; Noseworthy, Michael D

    2016-01-01

    To explore the effect of diffusion tensor imaging (DTI) acquisition parameters on principal and minor eigenvector stability within human lower leg skeletal muscles. Lower leg muscles were evaluated in seven healthy subjects at 3T using an 8-channel transmit/receive coil. Diffusion-encoding was performed with nine signal averages (NSA) using 6, 15, and 25 directions (NDD). Individual DTI volumes were combined into aggregate volumes of 3, 2, and 1 NSA according to number of directions. Tensor eigenvalues (λ1, λ2, λ3), eigenvectors (ε1, ε2, ε3), and DTI metrics (fractional anisotropy [FA] and mean diffusivity [MD]) were calculated for each combination of NSA and NDD. Spatial maps of signal-to-noise ratio (SNR), λ3:λ2 ratio, and zenith angle were also calculated for region of interest (ROI) analysis of vector orientation consistency. ε1 variability was only moderately related to ε2 variability (r = 0.4045). Variation of ε1 was affected by NDD, not NSA (P < 0.0002), while variation of ε2 was affected by NSA, not NDD (P < 0.0003). In terms of tensor shape, vector variability was weakly related to FA (ε1: r = -0.1854, ε2: ns), but had a stronger relation to the λ3:λ2 ratio (ε1: r = -0.5221, ε2: r = -0.1771). Vector variability was also weakly related to SNR (ε1: r = -0.2873, ε2: r = -0.3483). Zenith angle was found to be strongly associated with variability of ε1 (r = 0.8048) but only weakly with that of ε2 (r = 0.2135). The second eigenvector (ε2) displayed higher directional variability relative to ε1, and was only marginally affected by experimental conditions that impacted ε1 variability. © 2015 Wiley Periodicals, Inc.

  4. Rules of Normalisation and their Importance for Interpretation of Systems of Optimal Taxation

    DEFF Research Database (Denmark)

    Munk, Knud Jørgen

    representation of the general equilibrium conditions the rules of normalisation in standard optimal tax models. This allows us to provide an intuitive explanation of what determines the optimal tax system. Finally, we review a number of examples where lack of precision with respect to normalisation in otherwise important contributions to the literature on optimal taxation has given rise to misinterpretations of analytical results.

  5. On the Eigenvalues and Eigenvectors of Block Triangular Preconditioned Block Matrices

    KAUST Repository

    Pestana, Jennifer

    2014-01-01

    Block lower triangular matrices and block upper triangular matrices are popular preconditioners for 2×2 block matrices. In this note we show that a block lower triangular preconditioner gives the same spectrum as a block upper triangular preconditioner and that the eigenvectors of the two preconditioned matrices are related. © 2014 Society for Industrial and Applied Mathematics.

  6. On the raising and lowering difference operators for eigenvectors of the finite Fourier transform

    International Nuclear Information System (INIS)

    Atakishiyeva, M K; Atakishiyev, N M

    2015-01-01

    We construct explicit forms of raising and lowering difference operators that govern eigenvectors of the finite (discrete) Fourier transform. Some of the algebraic properties of these operators are also examined. (paper)

  7. Normalisation and weighting in life cycle assessment: quo vadis?

    DEFF Research Database (Denmark)

    Pizzol, Massimo; Laurent, Alexis; Sala, Serenella

    2017-01-01

    Purpose: Building on the rhetoric question “quo vadis?” (literally “Where are you going?”), this article critically investigates the state of the art of normalisation and weighting approaches within life cycle assessment. It aims at identifying purposes, current practises, pros and cons, as well...

  8. Quasars in the 4D Eigenvector 1 Context: a stroll down memory lane

    Science.gov (United States)

    Sulentic, Jack; Marziani, Paola

    2015-10-01

    Recently some pessimism has been expressed about our lack of progress in understanding quasars over the more than fifty years since their discovery. It is worthwhile to look back at some of the progress that has been made - but still lies under the radar - perhaps because few people are working on optical/UV spectroscopy in this field. Great advances in understanding quasar phenomenology have emerged using eigenvector techniques. The 4D eigenvector 1 context provides a surrogate H-R Diagram for quasars with a source main sequence driven by Eddington ratio convolved with line-of-sight orientation. Appreciating the striking differences between quasars at opposite ends of the main sequence (so-called population A and B sources) opens the door towards a unified model of quasar physics, geometry and kinematics. We present a review of some of the progress that has been made over the past 15 years, and point out unsolved issues.

  9. Quasars in the 4D Eigenvector 1 Context: a stroll down memory lane

    Directory of Open Access Journals (Sweden)

    Jack W. Sulentic

    2015-10-01

    Full Text Available Recently some pessimism has been expressed about our lack of progress in understanding quasars over more than fifty years since their discovery. It is worthwhile to look back at some of the progress that has been made – but still lies under the radar – perhaps because few people are working on optical/UV spectroscopy in this field. Great advances in understanding quasar phenomenology have emerged using eigenvector techniques. The 4D eigenvector 1 context provides a surrogate H-R Diagram for quasars with a source main sequence driven by Eddington ratio convolved with line-of-sight orientation. Appreciating the striking differences between quasars at opposite ends of the main sequence (so-called population A and B sources) opens the door towards a unified model of quasar physics, geometry and kinematics. We present a review of some of the progress that has been made over the past 15 years, and point out unsolved issues.

  10. Quasars in the 4D eigenvector 1 context: a stroll down memory lane

    International Nuclear Information System (INIS)

    Sulentic, Jack W.; Marziani, Paola

    2015-01-01

    Recently some pessimism has been expressed about our lack of progress in understanding quasars over the 50+ years since their discovery (Antonucci, 2013). It is worthwhile to look back at some of the progress that has been made—but still lies under the radar—perhaps because few people are working on optical/UV spectroscopy in this field. Great advances in understanding quasar phenomenology have emerged using eigenvector techniques. The 4D eigenvector 1 context provides a surrogate H-R Diagram for quasars with a source main sequence driven by Eddington ratio convolved with line-of-sight orientation. Appreciating the striking differences between quasars at opposite ends of the main sequence (so-called population A and B sources) opens the door toward a unified model of quasar physics, geometry and kinematics. We present a review of some of the progress that has been made over the past 15 years, and point out unsolved issues.

  11. Quasars in the 4D eigenvector 1 context: a stroll down memory lane

    Energy Technology Data Exchange (ETDEWEB)

    Sulentic, Jack W. [Instituto de Astrofísica de Andalucía-Consejo Superior de Investigaciones Científicas, Granada (Spain); Marziani, Paola, E-mail: paola.marziani@oapd.inaf.it [Istituto Nazionale di Astrofisica, Osservatorio Astronomico di Padova, Padova (Italy)

    2015-10-13

    Recently some pessimism has been expressed about our lack of progress in understanding quasars over the 50+ years since their discovery (Antonucci, 2013). It is worthwhile to look back at some of the progress that has been made—but still lies under the radar—perhaps because few people are working on optical/UV spectroscopy in this field. Great advances in understanding quasar phenomenology have emerged using eigenvector techniques. The 4D eigenvector 1 context provides a surrogate H-R Diagram for quasars with a source main sequence driven by Eddington ratio convolved with line-of-sight orientation. Appreciating the striking differences between quasars at opposite ends of the main sequence (so-called population A and B sources) opens the door toward a unified model of quasar physics, geometry and kinematics. We present a review of some of the progress that has been made over the past 15 years, and point out unsolved issues.

  12. Guidelines for normalising Early Modern English corpora: Decisions and justifications

    Directory of Open Access Journals (Sweden)

    Archer Dawn

    2015-03-01

    Full Text Available Corpora of Early Modern English have been collected and released for research for a number of years. With large scale digitisation activities gathering pace in the last decade, much more historical textual data is now available for research on numerous topics including historical linguistics and conceptual history. We summarise previous research which has shown that it is necessary to map historical spelling variants to modern equivalents in order to successfully apply natural language processing and corpus linguistics methods. Manual and semiautomatic methods have been devised to support this normalisation and standardisation process. We argue that it is important to develop a linguistically meaningful rationale to achieve good results from this process. In order to do so, we propose a number of guidelines for normalising corpora and show how these guidelines have been applied in the Corpus of English Dialogues.

  13. Geometric and topological characterization of porous media: insights from eigenvector centrality

    Science.gov (United States)

    Jimenez-Martinez, J.; Negre, C.

    2017-12-01

    Solving flow and transport through complex geometries such as porous media involves an extreme computational cost. Simplifications such as pore networks, where the pores are represented by nodes and the pore throats by edges connecting pores, have been proposed. These models have the ability to preserve the connectivity of the medium. However, they have difficulties capturing preferential paths (high velocity) and stagnation zones (low velocity), as they do not consider the specific relations between nodes. Network theory approaches, where the complex network is conceptualized as a graph, can help to simplify and better understand fluid dynamics and transport in porous media. To address this issue, we propose a method based on eigenvector centrality. It has been corrected to overcome the centralization problem and modified to introduce a bias in the centrality distribution along a particular direction which allows considering the flow and transport anisotropy in porous media. The model predictions are compared with millifluidic transport experiments, showing that this technique is computationally efficient and has potential for predicting preferential paths and stagnation zones for flow and transport in porous media. Entropy computed from the eigenvector centrality probability distribution is proposed as an indicator of the "mixing capacity" of the system.

  14. A novel approach to signal normalisation in atmospheric pressure ionisation mass spectrometry.

    Science.gov (United States)

    Vogeser, Michael; Kirchhoff, Fabian; Geyer, Roland

    2012-07-01

    The aim of our study was to test an alternative principle of signal normalisation in LC-MS/MS. During analyses, post-column infusion of the target analyte is done via a T-piece, generating an "area under the analyte peak" (AUP). The ratio of peak area to AUP is assessed as assay response. Acceptable analytical performance of this principle was found for an exemplary analyte. Post-column infusion may allow normalisation of ion suppression without requiring any additional standard compound. This approach can be useful in situations where no appropriate compound is available for classical internal standardisation. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. A comparison of parametric and nonparametric methods for normalising cDNA microarray data.

    Science.gov (United States)

    Khondoker, Mizanur R; Glasbey, Chris A; Worton, Bruce J

    2007-12-01

    Normalisation is an essential first step in the analysis of most cDNA microarray data, to correct for effects arising from imperfections in the technology. Loess smoothing is commonly used to correct for trends in log-ratio data. However, parametric models, such as the additive plus multiplicative variance model, have been preferred for scale normalisation, though the variance structure of microarray data may be of a more complex nature than can be accommodated by a parametric model. We propose a new nonparametric approach that incorporates location and scale normalisation simultaneously using a Generalised Additive Model for Location, Scale and Shape (GAMLSS, Rigby and Stasinopoulos, 2005, Applied Statistics, 54, 507-554). We compare its performance in inferring differential expression with Huber et al.'s (2002, Bioinformatics, 18, 96-104) arsinh variance stabilising transformation (AVST) using real and simulated data. We show GAMLSS to be as powerful as AVST when the parametric model is correct, and more powerful when the model is wrong. (c) 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
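
    For readers unfamiliar with the baseline being discussed, the sketch below shows a conventional loess (lowess) location normalisation of two-colour log-ratio data, the kind of step the GAMLSS approach extends with simultaneous scale normalisation. It is not the GAMLSS method itself; the synthetic intensities and the smoothing span are assumptions.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)
R = rng.lognormal(8, 1, 2000)                           # synthetic red-channel intensities
G = R * np.exp(0.3 + 0.1 * rng.standard_normal(2000))   # green channel with an artificial dye bias

M = np.log2(R) - np.log2(G)                             # log-ratio
A = 0.5 * (np.log2(R) + np.log2(G))                     # average log-intensity
trend = lowess(M, A, frac=0.3, return_sorted=False)     # loess fit of M against A
M_norm = M - trend                                      # location-normalised log-ratios
print(round(M.mean(), 3), round(M_norm.mean(), 3))      # bias before vs after normalisation
```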

  16. Eigenvector centrality for geometric and topological characterization of porous media

    Science.gov (United States)

    Jimenez-Martinez, Joaquin; Negre, Christian F. A.

    2017-07-01

    Solving flow and transport through complex geometries such as porous media is computationally difficult. Such calculations usually involve the solution of a system of discretized differential equations, which could lead to extreme computational cost depending on the size of the domain and the accuracy of the model. Geometric simplifications like pore networks, where the pores are represented by nodes and the pore throats by edges connecting pores, have been proposed. These models, despite their ability to preserve the connectivity of the medium, have difficulties capturing preferential paths (high velocity) and stagnation zones (low velocity), as they do not consider the specific relations between nodes. Nonetheless, network theory approaches, where a complex network is a graph, can help to simplify and better understand fluid dynamics and transport in porous media. Here we present an alternative method to address these issues based on eigenvector centrality, which has been corrected to overcome the centralization problem and modified to introduce a bias in the centrality distribution along a particular direction to address the flow and transport anisotropy in porous media. We compare the model predictions with millifluidic transport experiments, which shows that, albeit simple, this technique is computationally efficient and has potential for predicting preferential paths and stagnation zones for flow and transport in porous media. We propose to use the eigenvector centrality probability distribution to compute the entropy as an indicator of the "mixing capacity" of the system.
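
    As a rough illustration of the ingredients described (not the authors' implementation), the following sketch computes eigenvector centrality of a small toy pore network by power iteration and then the Shannon entropy of the normalised centrality distribution. The adjacency matrix is invented, and the directional bias and centralisation corrections discussed in the paper are omitted.

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)    # adjacency matrix of a toy pore network

c = np.ones(A.shape[0])
for _ in range(200):                            # power iteration converges to the dominant eigenvector
    c = A @ c
    c /= np.linalg.norm(c)

p = c / c.sum()                                 # centrality as a probability distribution
entropy = -np.sum(p * np.log(p))                # proposed "mixing capacity" indicator
print(np.round(c, 3), round(float(entropy), 3))
```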

  17. Normalisation genes for expression analyses in the brown alga model Ectocarpus siliculosus

    Directory of Open Access Journals (Sweden)

    Rousvoal Sylvie

    2008-08-01

    Full Text Available Abstract Background Brown algae are plant multi-cellular organisms occupying most of the world's coasts and are essential actors in the constitution of ecological niches at the shoreline. Ectocarpus siliculosus is an emerging model for brown algal research. Its genome has been sequenced, and several tools are being developed to perform analyses at different levels of cell organization, including transcriptomic expression analyses. Several topics, including physiological responses to osmotic stress and to exposure to contaminants and solvents, are being studied in order to better understand the adaptive capacity of brown algae to pollution and environmental changes. A series of genes that can be used to normalise expression analyses is required for these studies. Results We monitored the expression of 13 genes under 21 different culture conditions. These included genes encoding proteins and factors involved in protein translation (ribosomal protein 26S, EF1alpha, IF2A, IF4E) and protein degradation (ubiquitin, ubiquitin conjugating enzyme) or folding (cyclophilin), and proteins involved in both the structure of the cytoskeleton (tubulin alpha, actin, actin-related proteins) and its trafficking function (dynein), as well as a protein implicated in carbon metabolism (glucose 6-phosphate dehydrogenase). The stability of their expression level was assessed using the Ct range, and by applying both the geNorm and the Normfinder principles of calculation. Conclusion Comparisons of the data obtained with the three methods of calculation indicated that EF1alpha (EF1a) was the best reference gene for normalisation. The normalisation factor should be calculated with at least two genes, alpha tubulin, ubiquitin-conjugating enzyme or actin-related proteins being good partners of EF1a. Our results exclude actin as a good normalisation gene, and, in this, are in agreement with previous studies in other organisms.

  18. Asymptotics of eigenvalues and eigenvectors of Toeplitz matrices

    Science.gov (United States)

    Böttcher, A.; Bogoya, J. M.; Grudsky, S. M.; Maximenko, E. A.

    2017-11-01

    Analysis of the asymptotic behaviour of the spectral characteristics of Toeplitz matrices as the dimension of the matrix tends to infinity has a history of over 100 years. For instance, quite a number of versions of Szegő's theorem on the asymptotic behaviour of eigenvalues and of the so-called strong Szegő theorem on the asymptotic behaviour of the determinants of Toeplitz matrices are known. Starting in the 1950s, the asymptotics of the maximum and minimum eigenvalues were actively investigated. However, investigation of the individual asymptotics of all the eigenvalues and eigenvectors of Toeplitz matrices started only quite recently: the first papers on this subject were published in 2009-2010. A survey of this new field is presented here. Bibliography: 55 titles.

  19. Normalisation process theory: a framework for developing, evaluating and implementing complex interventions

    LENUS (Irish Health Repository)

    Murray, Elizabeth

    2010-10-20

    Abstract Background The past decade has seen considerable interest in the development and evaluation of complex interventions to improve health. Such interventions can only have a significant impact on health and health care if they are shown to be effective when tested, are capable of being widely implemented and can be normalised into routine practice. To date, there is still a problematic gap between research and implementation. The Normalisation Process Theory (NPT) addresses the factors needed for successful implementation and integration of interventions into routine work (normalisation). Discussion In this paper, we suggest that the NPT can act as a sensitising tool, enabling researchers to think through issues of implementation while designing a complex intervention and its evaluation. The need to ensure trial procedures that are feasible and compatible with clinical practice is not limited to trials of complex interventions, and NPT may improve trial design by highlighting potential problems with recruitment or data collection, as well as ensuring the intervention has good implementation potential. Summary The NPT is a new theory which offers trialists a consistent framework that can be used to describe, assess and enhance implementation potential. We encourage trialists to consider using it in their next trial.

  20. Normalisation process theory: a framework for developing, evaluating and implementing complex interventions

    Directory of Open Access Journals (Sweden)

    Ong Bie

    2010-10-01

    Full Text Available Abstract Background The past decade has seen considerable interest in the development and evaluation of complex interventions to improve health. Such interventions can only have a significant impact on health and health care if they are shown to be effective when tested, are capable of being widely implemented and can be normalised into routine practice. To date, there is still a problematic gap between research and implementation. The Normalisation Process Theory (NPT) addresses the factors needed for successful implementation and integration of interventions into routine work (normalisation). Discussion In this paper, we suggest that the NPT can act as a sensitising tool, enabling researchers to think through issues of implementation while designing a complex intervention and its evaluation. The need to ensure trial procedures that are feasible and compatible with clinical practice is not limited to trials of complex interventions, and NPT may improve trial design by highlighting potential problems with recruitment or data collection, as well as ensuring the intervention has good implementation potential. Summary The NPT is a new theory which offers trialists a consistent framework that can be used to describe, assess and enhance implementation potential. We encourage trialists to consider using it in their next trial.

  1. Normalisation et certification dans le photovoltaïque: perspectives juridiques.

    OpenAIRE

    Boy , Laurence

    2012-01-01

    International audience; Legal approach to standardisation and certification in the photovoltaic field in France. Sources of law. Stakeholders' liabilities. Competition aspects.

  2. Total body neutron activation analysis of calcium: calibration and normalisation

    Energy Technology Data Exchange (ETDEWEB)

    Kennedy, N S.J.; Eastell, R; Ferrington, C M; Simpson, J D; Strong, J A [Western General Hospital, Edinburgh (UK); Smith, M A; Tothill, P [Royal Infirmary, Edinburgh (UK)

    1982-05-01

    An irradiation system has been designed, using a neutron beam from a cyclotron, which optimises the uniformity of activation of calcium. Induced activity is measured in a scanning, shadow-shield whole-body counter. Calibration has been effected and reproducibility assessed with three different types of phantom. Corrections were derived for variations in body height, depth and fat thickness. The coefficient of variation for repeated measurements of an anthropomorphic phantom was 1.8% for an absorbed dose equivalent of 13 mSv (1.3 rem). Measurements of total body calcium in 40 normal adults were used to derive normalisation factors which predict the normal calcium in a subject of given size and age. The coefficient of variation of normalised calcium was 6.2% in men and 6.6% in women, with the demonstration of an annual loss of 1.5% after the menopause. The narrow range should make single measurements useful for diagnostic purposes.

  3. Analysis of the characteristics of the global virtual water trade network using degree and eigenvector centrality, with a focus on food and feed crops

    Directory of Open Access Journals (Sweden)

    S.-H. Lee

    2016-10-01

    Full Text Available This study aims to analyze the characteristics of global virtual water trade (GVWT), such as the connectivity of each trader, vulnerable importers, and influential countries, using degree and eigenvector centrality during the period 2006–2010. The degree centrality was used to measure the connectivity, and eigenvector centrality was used to measure the influence on the entire GVWT network. Mexico, Egypt, China, the Republic of Korea, and Japan were classified as vulnerable importers, because they imported large quantities of virtual water with low connectivity. In particular, Egypt had a 15.3 Gm3 year−1 blue water saving effect through GVWT: the vulnerable structure could cause a water shortage problem for the importer. The entire GVWT network could be changed by a few countries, termed "influential traders". We used eigenvector centrality to identify those influential traders. In GVWT for food crops, the USA, Russian Federation, Thailand, and Canada had high eigenvector centrality with large volumes of green water trade. In the case of blue water trade, western Asia, Pakistan, and India had high eigenvector centrality. For feed crops, the green water trade in the USA, Brazil, and Argentina was the most influential. However, Argentina and Pakistan used high proportions of internal water resources for virtual water export (32.9 and 25.1 %); thus other traders should carefully consider water resource management in these exporters.

  4. Analysis of the characteristics of the global virtual water trade network using degree and eigenvector centrality, with a focus on food and feed crops

    Science.gov (United States)

    Lee, Sang-Hyun; Mohtar, Rabi H.; Choi, Jin-Yong; Yoo, Seung-Hwan

    2016-10-01

    This study aims to analyze the characteristics of global virtual water trade (GVWT), such as the connectivity of each trader, vulnerable importers, and influential countries, using degree and eigenvector centrality during the period 2006-2010. The degree centrality was used to measure the connectivity, and eigenvector centrality was used to measure the influence on the entire GVWT network. Mexico, Egypt, China, the Republic of Korea, and Japan were classified as vulnerable importers, because they imported large quantities of virtual water with low connectivity. In particular, Egypt had a 15.3 Gm3 year-1 blue water saving effect through GVWT: the vulnerable structure could cause a water shortage problem for the importer. The entire GVWT network could be changed by a few countries, termed "influential traders". We used eigenvector centrality to identify those influential traders. In GVWT for food crops, the USA, Russian Federation, Thailand, and Canada had high eigenvector centrality with large volumes of green water trade. In the case of blue water trade, western Asia, Pakistan, and India had high eigenvector centrality. For feed crops, the green water trade in the USA, Brazil, and Argentina was the most influential. However, Argentina and Pakistan used high proportions of internal water resources for virtual water export (32.9 and 25.1 %); thus other traders should carefully consider water resource management in these exporters.

  5. Évolution de la normalisation dans le domaine des oléagineux et des corps gras

    Directory of Open Access Journals (Sweden)

    Quinsac Alain

    2003-07-01

    Full Text Available Normalisation (standardisation) plays a major role in economic exchanges by contributing to the openness and transparency of markets. The oilseed and fats sector integrated normalisation into its strategy long ago. Built around the needs of the profession, in particular the customer-supplier relationship, the work programmes have mainly concerned sampling and analysis. In recent years, strong changes in the socio-economic and regulatory context (non-food uses, food safety, quality assurance) have widened the scope of normalisation. The normative approach adopted in the case of biodiesels and of the detection of GMOs in oilseeds is explained. The consequences of this evolution of normalisation and the challenges it poses for the oilseed profession in the future are discussed.

  6. Eigenvector Subset Selection Using Bayesian Optimization Algorithm

    Institute of Scientific and Technical Information of China (English)

    郭卫锋; 林亚平; 罗光平

    2002-01-01

    Eigenvector subset selection is the key to face recognition. In this paper, we propose ESS-BOA, a new randomized, population-based evolutionary algorithm which deals with the Eigenvector Subset Selection (ESS) problem in face recognition applications. In ESS-BOA, the ESS problem, stated as a search problem, uses the Bayesian Optimization Algorithm (BOA) as the search engine and the distance degree as the objective function to select eigenvectors. Experimental results show that ESS-BOA outperforms the traditional eigenface selection algorithm.

  7. Computation of dominant eigenvalues and eigenvectors: A comparative study of algorithms

    International Nuclear Information System (INIS)

    Nightingale, M.P.; Viswanath, V.S.; Mueller, G.

    1993-01-01

    We investigate two widely used recursive algorithms for the computation of eigenvectors with extreme eigenvalues of large symmetric matrices---the modified Lanczos method and the conjugate-gradient method. The goal is to establish a connection between their underlying principles and to evaluate their performance in applications to Hamiltonian and transfer matrices of selected model systems of interest in condensed matter physics and statistical mechanics. The conjugate-gradient method is found to converge more rapidly for understandable reasons, while storage requirements are the same for both methods.
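
    As a point of reference for the kind of computation being benchmarked, the sketch below uses SciPy's ARPACK (Lanczos-based) sparse eigensolver to obtain an extreme eigenvalue/eigenvector pair of a large sparse symmetric matrix. The tridiagonal test matrix is an assumed stand-in, not one of the Hamiltonian or transfer matrices studied in the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 2000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
H = sp.diags([off, main, off], [-1, 0, 1], format="csc")   # 1D Laplacian as a stand-in matrix

# Shift-invert around 0 targets the smallest eigenvalue/eigenvector pair efficiently.
evals, evecs = eigsh(H, k=1, sigma=0.0, which="LM")
residual = np.linalg.norm(H @ evecs[:, 0] - evals[0] * evecs[:, 0])
print(evals[0], residual)                                  # eigenvalue close to (pi/(n+1))**2
```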

  8. Normalisation of spot urine samples to 24-h collection for assessment of exposure to uranium

    International Nuclear Information System (INIS)

    Marco, R.; Katorza, E.; Gonen, R.; German, U.; Tshuva, A.; Pelled, O.; Paz-tal, O.; Adout, A.; Karpas, Z.

    2008-01-01

    For dose assessment of workers at Nuclear Research Center Negev exposed to natural uranium, spot urine samples are analysed and the results are normalised to 24-h urine excretion based on 'standard' man urine volume of 1.6 l d−1. In the present work, the urine volume, uranium level and creatinine concentration were determined in two or three 24-h urine collections from 133 male workers (319 samples) and 33 female workers (88 samples). Three volunteers provided urine spot samples from each voiding during a 24-h period and a good correlation was found between the relative level of creatinine and uranium in spot samples collected from the same individual. The results show that normalisation of uranium concentration to creatinine in a spot sample represents the 24-h content of uranium better than normalisation to the standard volume and may be used to reduce the uncertainty of dose assessment based on spot samples. (authors)
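
    The difference between the two normalisation schemes is simple arithmetic; a worked toy example follows. Every numerical value in it (the spot concentrations and the assumed daily creatinine excretion) is invented for illustration and is not taken from the paper.

```python
# Hypothetical spot-sample measurements (assumed values).
spot_uranium_ng_per_l = 12.0
spot_creatinine_g_per_l = 1.4

standard_volume_l_per_day = 1.6     # 'standard' man daily urine volume used for normalisation
daily_creatinine_g = 1.7            # assumed typical 24-h creatinine excretion

u_24h_volume_norm = spot_uranium_ng_per_l * standard_volume_l_per_day
u_24h_creatinine_norm = spot_uranium_ng_per_l / spot_creatinine_g_per_l * daily_creatinine_g
print(round(u_24h_volume_norm, 1), round(u_24h_creatinine_norm, 1))   # ng of uranium per 24 h
```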

  9. Eigenvector Weighting Function in Face Recognition

    Directory of Open Access Journals (Sweden)

    Pang Ying Han

    2011-01-01

    Full Text Available Graph-based subspace learning is a class of dimensionality reduction technique in face recognition. The technique reveals the local manifold structure of face data that is hidden in the image space via a linear projection. However, the real world face data may be too complex to measure due to both external imaging noises and the intra-class variations of the face images. Hence, features which are extracted by the graph-based technique could be noisy. An appropriate weight should be imposed to the data features for better data discrimination. In this paper, a piecewise weighting function, known as Eigenvector Weighting Function (EWF), is proposed and implemented in two graph based subspace learning techniques, namely Locality Preserving Projection and Neighbourhood Preserving Embedding. Specifically, the computed projection subspace of the learning approach is decomposed into three partitions: a subspace due to intra-class variations, an intrinsic face subspace, and a subspace which is attributed to imaging noises. Projected data features are weighted differently in these subspaces to emphasize the intrinsic face subspace while penalizing the other two subspaces. Experiments on FERET and FRGC databases are conducted to show the promising performance of the proposed technique.

  10. Protein structure recognition: From eigenvector analysis to structural threading method

    Science.gov (United States)

    Cao, Haibo

    In this work, we try to understand the protein folding problem using pair-wise hydrophobic interaction as the dominant interaction for the protein folding process. We found a strong correlation between amino acid sequence and the corresponding native structure of the protein. Some applications of this correlation discussed in this dissertation include the domain partition and a new structural threading method as well as the performance of this method in the CASP5 competition. In the first part, we give a brief introduction to the protein folding problem. Some essential knowledge and progress from other research groups was discussed. This part includes discussions of interactions among amino acid residues, the lattice HP model, and the designability principle. In the second part, we try to establish the correlation between amino acid sequence and the corresponding native structure of the protein. This correlation was observed in our eigenvector study of the protein contact matrix. We believe the correlation is universal, thus it can be used in automatic partition of protein structures into folding domains. In the third part, we discuss a threading method based on the correlation between amino acid sequence and the dominant eigenvector of the structure contact-matrix. A mathematically straightforward iteration scheme provides a self-consistent optimum global sequence-structure alignment. The computational efficiency of this method makes it possible to search whole protein structure databases for structural homology without relying on sequence similarity. The sensitivity and specificity of this method is discussed, along with a case of blind test prediction. In the appendix, we list the overall performance of this threading method in the CASP5 blind test in comparison with other existing approaches.
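
    A toy version of the central idea, offered only as an illustration and not as the dissertation's threading algorithm, is sketched below: the dominant eigenvector of a symmetric residue-residue contact matrix is compared with a per-residue hydrophobicity profile. Both the contact map and the profile are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
hydrophobicity = rng.random(n)                       # invented per-residue hydrophobicity profile
# Contacts are made more likely between hydrophobic residues (a crude pair-wise HP-style model).
prob = np.outer(hydrophobicity, hydrophobicity)
contact = (rng.random((n, n)) < prob).astype(float)
contact = np.triu(contact, 1)
contact = contact + contact.T                        # symmetric contact matrix, zero diagonal

evals, evecs = np.linalg.eigh(contact)
principal = np.abs(evecs[:, np.argmax(evals)])       # dominant eigenvector of the contact matrix
r = np.corrcoef(principal, hydrophobicity)[0, 1]
print(round(float(r), 2))                            # positive correlation in this toy model
```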

  11. Protein Structure Recognition: From Eigenvector Analysis to Structural Threading Method

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Haibo [Iowa State Univ., Ames, IA (United States)

    2003-01-01

    In this work, they try to understand the protein folding problem using pair-wise hydrophobic interaction as the dominant interaction for the protein folding process. They found a strong correlation between amino acid sequences and the corresponding native structure of the protein. Some applications of this correlation discussed in this dissertation include the domain partition and a new structural threading method as well as the performance of this method in the CASP5 competition. In the first part, they give a brief introduction to the protein folding problem. Some essential knowledge and progress from other research groups was discussed. This part includes discussions of interactions among amino acid residues, the lattice HP model, and the designability principle. In the second part, they try to establish the correlation between amino acid sequence and the corresponding native structure of the protein. This correlation was observed in the eigenvector study of the protein contact matrix. They believe the correlation is universal, thus it can be used in automatic partition of protein structures into folding domains. In the third part, they discuss a threading method based on the correlation between amino acid sequences and the dominant eigenvector of the structure contact-matrix. A mathematically straightforward iteration scheme provides a self-consistent optimum global sequence-structure alignment. The computational efficiency of this method makes it possible to search whole protein structure databases for structural homology without relying on sequence similarity. The sensitivity and specificity of this method is discussed, along with a case of blind test prediction. In the appendix, they list the overall performance of this threading method in the CASP5 blind test in comparison with other existing approaches.

  12. Protein Structure Recognition: From Eigenvector Analysis to Structural Threading Method

    International Nuclear Information System (INIS)

    Haibo Cao

    2003-01-01

    In this work, they try to understand the protein folding problem using pair-wise hydrophobic interaction as the dominant interaction for the protein folding process. They found a strong correlation between amino acid sequences and the corresponding native structure of the protein. Some applications of this correlation discussed in this dissertation include the domain partition and a new structural threading method as well as the performance of this method in the CASP5 competition. In the first part, they give a brief introduction to the protein folding problem. Some essential knowledge and progress from other research groups was discussed. This part includes discussions of interactions among amino acid residues, the lattice HP model, and the designability principle. In the second part, they try to establish the correlation between amino acid sequence and the corresponding native structure of the protein. This correlation was observed in the eigenvector study of the protein contact matrix. They believe the correlation is universal, thus it can be used in automatic partition of protein structures into folding domains. In the third part, they discuss a threading method based on the correlation between amino acid sequences and the dominant eigenvector of the structure contact-matrix. A mathematically straightforward iteration scheme provides a self-consistent optimum global sequence-structure alignment. The computational efficiency of this method makes it possible to search whole protein structure databases for structural homology without relying on sequence similarity. The sensitivity and specificity of this method is discussed, along with a case of blind test prediction. In the appendix, they list the overall performance of this threading method in the CASP5 blind test in comparison with other existing approaches.

  13. Reference gene identification for reliable normalisation of quantitative RT-PCR data in Setaria viridis.

    Science.gov (United States)

    Nguyen, Duc Quan; Eamens, Andrew L; Grof, Christopher P L

    2018-01-01

    Quantitative real-time polymerase chain reaction (RT-qPCR) is the key platform for the quantitative analysis of gene expression in a wide range of experimental systems and conditions. However, the accuracy and reproducibility of gene expression quantification via RT-qPCR is entirely dependent on the identification of reliable reference genes for data normalisation. Green foxtail (Setaria viridis) has recently been proposed as a potential experimental model for the study of C4 photosynthesis and is closely related to many economically important crop species of the Panicoideae subfamily of grasses, including Zea mays (maize), Sorghum bicolor (sorghum) and Saccharum officinarum (sugarcane). Setaria viridis (Accession 10) possesses a number of key traits as an experimental model, namely: (i) a small, sequenced and well annotated genome; (ii) short stature and generation time; (iii) prolific seed production; and (iv) amenability to Agrobacterium tumefaciens-mediated transformation. There is currently, however, a lack of reference gene expression information for Setaria viridis (S. viridis). We therefore aimed to identify a cohort of suitable S. viridis reference genes for accurate and reliable normalisation of S. viridis RT-qPCR expression data. Eleven putative candidate reference genes were identified and examined across thirteen different S. viridis tissues. Of these, the geNorm and NormFinder analysis software identified SERINE/THREONINE-PROTEIN PHOSPHATASE 2A (PP2A), 5'-ADENYLYLSULFATE REDUCTASE 6 (ASPR6) and DUAL SPECIFICITY PHOSPHATASE (DUSP) as the most suitable combination of reference genes for the accurate and reliable normalisation of S. viridis RT-qPCR expression data. To demonstrate the suitability of the three selected reference genes, PP2A, ASPR6 and DUSP were used to normalise the expression of CINNAMYL ALCOHOL DEHYDROGENASE (CAD) genes across the same tissues. This approach readily demonstrated the suitability of the three

  14. Oral benfotiamine plus alpha-lipoic acid normalises complication-causing pathways in type 1 diabetes.

    Science.gov (United States)

    Du, X; Edelstein, D; Brownlee, M

    2008-10-01

    We determined whether fixed doses of benfotiamine in combination with slow-release alpha-lipoic acid normalise markers of reactive oxygen species-induced pathways of complications in humans. Male participants with and without type 1 diabetes were studied in the General Clinical Research Centre of the Albert Einstein College of Medicine. Glycaemic status was assessed by measuring baseline values of three different indicators of hyperglycaemia. Intracellular AGE formation, hexosamine pathway activity and prostacyclin synthase activity were measured initially, and after 2 and 4 weeks of treatment. In the nine participants with type 1 diabetes, treatment had no effect on any of the three indicators used to assess hyperglycaemia. However, treatment with benfotiamine plus alpha-lipoic acid completely normalised increased AGE formation, reduced increased monocyte hexosamine-modified proteins by 40% and normalised the 70% decrease in prostacyclin synthase activity from 1,709 +/- 586 pg/ml 6-keto-prostaglandin F(1alpha) to 4,696 +/- 533 pg/ml. These results show that the previously demonstrated beneficial effects of these agents on complication-causing pathways in rodent models of diabetic complications also occur in humans with type 1 diabetes.

  15. Investigation, development and application of optimal output feedback theory. Vol. 4: Measures of eigenvalue/eigenvector sensitivity to system parameters and unmodeled dynamics

    Science.gov (United States)

    Halyo, Nesim

    1987-01-01

    Some measures of eigenvalue and eigenvector sensitivity applicable to both continuous and discrete linear systems are developed and investigated. An infinite series representation is developed for the eigenvalues and eigenvectors of a system. The coefficients of the series are coupled, but can be obtained recursively using a nonlinear coupled vector difference equation. A new sensitivity measure is developed by considering the effects of unmodeled dynamics. It is shown that the sensitivity is high when any unmodeled eigenvalue is near a modeled eigenvalue. Using a simple example where the sensor dynamics have been neglected, it is shown that high feedback gains produce high eigenvalue/eigenvector sensitivity. The smallest singular value of the return difference is shown not to reflect eigenvalue sensitivity since it increases with the feedback gains. Using an upper bound obtained from the infinite series, a procedure to evaluate whether the sensitivity to parameter variations is within given acceptable bounds is developed and demonstrated by an example.
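
    A first-order identity closely related to the sensitivities discussed is worth keeping in mind: for a simple eigenvalue with right eigenvector x and left eigenvector y, a perturbation dA changes the eigenvalue by approximately y*(dA)x / (y*x). The sketch below checks this standard identity against finite differences; it is not the report's infinite-series construction, and the random test matrices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))
dA = rng.standard_normal((5, 5))                    # direction of the parameter perturbation

evals, R = np.linalg.eig(A)                         # columns of R are right eigenvectors
W = np.linalg.inv(R)                                # rows of W are the matching left eigenvectors
k = 0
lam, x, w = evals[k], R[:, k], W[k, :]
dlam_analytic = w @ dA @ x / (w @ x)                # first-order eigenvalue sensitivity

eps = 1e-6
evals_p = np.linalg.eig(A + eps * dA)[0]
lam_p = evals_p[np.argmin(np.abs(evals_p - lam))]   # track the same eigenvalue after perturbation
print(dlam_analytic, (lam_p - lam) / eps)           # the two estimates should agree closely
```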

  16. q-Extension of Mehta's eigenvectors of the finite Fourier transform for q, a root of unity

    NARCIS (Netherlands)

    Atakishiyeva, M.K.; Atakishiyev, N.M.; Koornwinder, T.H.

    2009-01-01

    It is shown that the continuous q-Hermite polynomials for q, a root of unity, have simple transformation properties with respect to the classical Fourier transform. This result is then used to construct q-extended eigenvectors of the finite Fourier transform in terms of these polynomials.

  17. Bounded real and positive real balanced truncation using Σ-normalised coprime factors

    NARCIS (Netherlands)

    Trentelman, H.L.

    2009-01-01

    In this article, we will extend the method of balanced truncation using normalised right coprime factors of the system transfer matrix to balanced truncation with preservation of half line dissipativity. Special cases are preservation of positive realness and bounded realness. We consider a half

  18. Use and misuse of temperature normalisation in meta-analyses of thermal responses of biological traits

    Directory of Open Access Journals (Sweden)

    Dimitrios - Georgios Kontopoulos

    2018-02-01

    Full Text Available There is currently unprecedented interest in quantifying variation in thermal physiology among organisms, especially in order to understand and predict the biological impacts of climate change. A key parameter in this quantification of thermal physiology is the performance or value of a rate, across individuals or species, at a common temperature (temperature normalisation). An increasingly popular model for fitting thermal performance curves to data—the Sharpe-Schoolfield equation—can yield strongly inflated estimates of temperature-normalised rate values. These deviations occur whenever a key thermodynamic assumption of the model is violated, i.e., when the enzyme governing the performance of the rate is not fully functional at the chosen reference temperature. Using data on 1,758 thermal performance curves across a wide range of species, we identify the conditions that exacerbate this inflation. We then demonstrate that these biases can compromise tests to detect metabolic cold adaptation, which requires comparison of fitness or rate performance of different species or genotypes at some fixed low temperature. Finally, we suggest alternative methods for obtaining unbiased estimates of temperature-normalised rate values for meta-analyses of thermal performance across species in climate change impact studies.
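
    For context, one commonly used form of the Sharpe-Schoolfield model (with high-temperature deactivation only) is sketched below; the parameter values are invented. It illustrates the paper's point: the normalisation constant B0 equals the rate at the reference temperature Tref only when the enzyme is essentially fully active there, so choosing Tref too close to the deactivation range inflates temperature-normalised estimates.

```python
import numpy as np

k = 8.617e-5   # Boltzmann constant in eV/K

def sharpe_schoolfield(T, B0, E, Eh, Th, Tref=293.15):
    """Rate at temperature T (kelvin); B0 is the temperature-normalised rate at Tref."""
    arrhenius = B0 * np.exp(-E / k * (1.0 / T - 1.0 / Tref))
    deactivation = 1.0 + np.exp(Eh / k * (1.0 / Th - 1.0 / T))
    return arrhenius / deactivation

Tref = 293.15
print(sharpe_schoolfield(Tref, B0=1.0, E=0.65, Eh=3.0, Th=308.15))  # close to B0
print(sharpe_schoolfield(Tref, B0=1.0, E=0.65, Eh=3.0, Th=295.15))  # well below B0: inflation risk
```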

  19. Recursive Principal Components Analysis Using Eigenvector Matrix Perturbation

    Directory of Open Access Journals (Sweden)

    Deniz Erdogmus

    2004-10-01

    Full Text Available Principal components analysis is an important and well-studied subject in statistics and signal processing. The literature has an abundance of algorithms for solving this problem, where most of these algorithms could be grouped into one of the following three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (like reconstruction error or output variance), and fixed point update rules with deflation. In this paper, we take a completely different approach that avoids deflation and the optimization of a cost function using gradients. The proposed method updates the eigenvector and eigenvalue matrices simultaneously with every new sample such that the estimates approximately track their true values as would be calculated from the current sample estimate of the data covariance matrix. The performance of this algorithm is compared with that of traditional methods like Sanger's rule and APEX, as well as a structurally similar matrix perturbation-based method.
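
    The tracking problem the paper addresses can be stated in a few lines. The sketch below recursively updates a sample covariance with each new observation and re-diagonalises it exactly at every step; the proposed algorithm replaces that exact eigendecomposition with a cheaper perturbation update of the eigenvector and eigenvalue matrices. The synthetic data and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
true_cov = np.diag([5.0, 2.0, 0.5])
C = np.eye(d)                                   # running covariance estimate
for t in range(1, 501):
    x = rng.multivariate_normal(np.zeros(d), true_cov)
    C = ((t - 1) * C + np.outer(x, x)) / t      # rank-one recursive update of the covariance
    evals, evecs = np.linalg.eigh(C)            # exact eigendecomposition at every step

print(np.round(evals[::-1], 2))                 # estimated eigenvalues approach [5, 2, 0.5]
```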

  20. Detection of multiple damages employing best achievable eigenvectors under Bayesian inference

    Science.gov (United States)

    Prajapat, Kanta; Ray-Chaudhuri, Samit

    2018-05-01

    A novel approach is presented in this work to localize simultaneously multiple damaged elements in a structure along with the estimation of damage severity for each of the damaged elements. For detection of damaged elements, a best achievable eigenvector-based formulation has been derived. To deal with noisy data, Bayesian inference is employed in the formulation wherein the likelihood of the Bayesian algorithm is formed on the basis of errors between the best achievable eigenvectors and the measured modes. In this approach, the most probable damage locations are evaluated under Bayesian inference by generating combinations of various possible damaged elements. Once damage locations are identified, damage severities are estimated using a Bayesian inference Markov chain Monte Carlo simulation. The efficiency of the proposed approach has been demonstrated by carrying out a numerical study involving a 12-story shear building. It has been found from this study that damage scenarios involving as low as 10% loss of stiffness in multiple elements are accurately determined (localized and severities quantified) even when 2% noise-contaminated modal data are utilized. Further, this study introduces a term, parameter impact (evaluated based on the sensitivity of modal parameters to structural parameters), to decide the suitability of selecting a particular mode if some idea about the damaged elements is available. It has been demonstrated here that the accuracy and efficiency of the Bayesian quantification algorithm increase if damage localization is carried out a priori. An experimental study involving a laboratory scale shear building and different stiffness modification scenarios shows that the proposed approach is efficient enough to localize the stories with stiffness modification.

  1. A combination of low-dose bevacizumab and imatinib enhances vascular normalisation without inducing extracellular matrix deposition.

    Science.gov (United States)

    Schiffmann, L M; Brunold, M; Liwschitz, M; Goede, V; Loges, S; Wroblewski, M; Quaas, A; Alakus, H; Stippel, D; Bruns, C J; Hallek, M; Kashkar, H; Hacker, U T; Coutelle, O

    2017-02-28

    Vascular endothelial growth factor (VEGF)-targeting drugs normalise the tumour vasculature and improve access for chemotherapy. However, excessive VEGF inhibition fails to improve clinical outcome, and successive treatment cycles lead to incremental extracellular matrix (ECM) deposition, which limits perfusion and drug delivery. We show here, that low-dose VEGF inhibition augmented with PDGF-R inhibition leads to superior vascular normalisation without incremental ECM deposition thus maintaining access for therapy. Collagen IV expression was analysed in response to VEGF inhibition in liver metastasis of colorectal cancer (CRC) patients, in syngeneic (Panc02) and xenograft tumours of human colorectal cancer cells (LS174T). The xenograft tumours were treated with low (0.5 mg kg -1 body weight) or high (5 mg kg -1 body weight) doses of the anti-VEGF antibody bevacizumab with or without the tyrosine kinase inhibitor imatinib. Changes in tumour growth, and vascular parameters, including microvessel density, pericyte coverage, leakiness, hypoxia, perfusion, fraction of vessels with an open lumen, and type IV collagen deposition were compared. ECM deposition was increased after standard VEGF inhibition in patients and tumour models. In contrast, treatment with low-dose bevacizumab and imatinib produced similar growth inhibition without inducing detrimental collagen IV deposition, leading to superior vascular normalisation, reduced leakiness, improved oxygenation, more open vessels that permit perfusion and access for therapy. Low-dose bevacizumab augmented by imatinib selects a mature, highly normalised and well perfused tumour vasculature without inducing incremental ECM deposition that normally limits the effectiveness of VEGF targeting drugs.

  2. Heisenberg XXX Model with General Boundaries: Eigenvectors from Algebraic Bethe Ansatz

    Directory of Open Access Journals (Sweden)

    Samuel Belliard

    2013-11-01

    Full Text Available We propose a generalization of the algebraic Bethe ansatz to obtain the eigenvectors of the Heisenberg spin chain with general boundaries associated with the eigenvalues and the Bethe equations found recently by Cao et al. The ansatz takes the usual form of a product of operators acting on a particular vector except that the number of operators is equal to the length of the chain. We prove this result for chains of small length. We obtain also an off-shell equation (i.e. satisfied without the Bethe equations) formally similar to the ones obtained in the periodic case or with diagonal boundaries.

  3. An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants

    DEFF Research Database (Denmark)

    Møller, Jesper; Pettitt, A. N.; Reeves, R.

    2006-01-01

    Maximum likelihood parameter estimation and sampling from Bayesian posterior distributions are problematic when the probability density for the parameter of interest involves an intractable normalising constant which is also a function of that parameter. In this paper, an auxiliary variable metho...

  4. Identification of endogenous control genes for normalisation of real-time quantitative PCR data in colorectal cancer.

    LENUS (Irish Health Repository)

    Kheirelseid, Elrasheid A H

    2010-01-01

    BACKGROUND: Gene expression analysis has many applications in cancer diagnosis, prognosis and therapeutic care. Relative quantification is the most widely adopted approach whereby quantification of gene expression is normalised relative to an endogenously expressed control (EC) gene. Central to the reliable determination of gene expression is the choice of control gene. The purpose of this study was to evaluate a panel of candidate EC genes from which to identify the most stably expressed gene(s) to normalise RQ-PCR data derived from primary colorectal cancer tissue. RESULTS: The expression of thirteen candidate EC genes: B2M, HPRT, GAPDH, ACTB, PPIA, HCRT, SLC25A23, DTX3, APOC4, RTDR1, KRTAP12-3, CHRNB4 and MRPL19 were analysed in a cohort of 64 colorectal tumours and tumour associated normal specimens. CXCL12, FABP1, MUC2 and PDCD4 genes were chosen as target genes against which a comparison of the effect of each EC gene on gene expression could be determined. Data analysis using descriptive statistics, geNorm, NormFinder and qBasePlus indicated significant difference in variances between candidate EC genes. We determined that two genes were required for optimal normalisation and identified B2M and PPIA as the most stably expressed and reliable EC genes. CONCLUSION: This study identified that the combination of two EC genes (B2M and PPIA) more accurately normalised RQ-PCR data in colorectal tissue. Although these control genes might not be optimal for use in other cancer studies, the approach described herein could serve as a template for the identification of valid ECs in other cancer types.
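
    The gene-stability ranking referred to above follows the geNorm idea: a candidate control gene is stable if its log expression ratio against every other candidate varies little across samples. A sketch of that M-value calculation on synthetic data follows; it is not the geNorm/qBasePlus software itself, and the expression matrix is invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples = 20
genes = ["B2M", "PPIA", "GAPDH", "ACTB"]
expr = rng.lognormal(mean=5, sigma=[0.10, 0.12, 0.40, 0.60], size=(n_samples, len(genes)))

log_expr = np.log2(expr)
M = []
for i in range(len(genes)):                      # geNorm-style stability measure M per gene
    sds = [np.std(log_expr[:, i] - log_expr[:, j], ddof=1)
           for j in range(len(genes)) if j != i]
    M.append(float(np.mean(sds)))
for g, m in sorted(zip(genes, M), key=lambda t: t[1]):
    print(f"{g}: M = {m:.3f}")                   # lower M indicates a more stable control gene
```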

  5. Normalisation of the peaceful use of nuclear energy - consequences for its legal regulation

    International Nuclear Information System (INIS)

    Birkhofer, A.; Lukes, R.

    1985-01-01

    The five reports in this book deal with the importance of the peaceful use of nuclear energy, as well as with several aspects of normalisation. The spectrum of the reports underlines the benefit for supporting the peaceful use of nuclear energy. (WG)

  6. On a q-extension of Mehta's eigenvectors of the finite Fourier transform for q a root of unity

    OpenAIRE

    Atakishiyeva, Mesuma K.; Atakishiyev, Natig M.; Koornwinder, Tom H.

    2008-01-01

    It is shown that the continuous q-Hermite polynomials for q a root of unity have simple transformation properties with respect to the classical Fourier transform. This result is then used to construct q-extended eigenvectors of the finite Fourier transform in terms of these polynomials.

  7. Application Research of the Sparse Representation of Eigenvector on the PD Positioning in the Transformer Oil

    Directory of Open Access Journals (Sweden)

    Qing Xie

    2016-01-01

    Full Text Available The partial discharge (PD) detection of electrical equipment is important for the safe operation of power systems. The ultrasonic signal generated by the PD in the oil is a broadband signal. However, most methods of the array signal processing are used for the narrowband signal at present, and the effect of some methods for processing wideband signals is not satisfactory. Therefore, it is necessary to find new broadband signal processing methods to improve detection ability of the PD source. In this paper, the direction of arrival (DOA) estimation method based on sparse representation of the eigenvector is proposed, and this method can further reduce the noise interference. Moreover, the simulation results show that this direction finding method is feasible for broadband signals and thus improves the following positioning accuracy of the three-array localization method. Experimental results verify that the direction finding method based on sparse representation of the eigenvector is feasible for the ultrasonic array, which can achieve accurate estimation of direction of arrival and improve the following positioning accuracy. This can provide important guidance information for the equipment maintenance in the practical application.

  8. Eigenvectors and fixed point of non-linear operators

    Directory of Open Access Journals (Sweden)

    Giulio Trombetta

    2007-12-01

    Full Text Available Let X be a real infinite-dimensional Banach space and ψ a measure of noncompactness on X. Let Ω be a bounded open subset of X and A : Ω → X a ψ-condensing operator which has no fixed points on ∂Ω. Then the fixed point index, ind(A, Ω), of A on Ω is defined (see, for example, [1] and [18]). In particular, if A is a compact operator, ind(A, Ω) agrees with the classical Leray-Schauder degree of I − A on Ω relative to the point 0, deg(I − A, Ω, 0). The main aim of this note is to investigate boundary conditions under which the fixed point index of strict-ψ-contractive or ψ-condensing operators A : Ω → X is equal to zero. Correspondingly, results on eigenvectors and nonzero fixed points of k-ψ-contractive and ψ-condensing operators are obtained. In particular we generalize the Birkhoff-Kellogg theorem [4] and Guo's domain compression and expansion theorem [17]. The note is based mainly on the results contained in [7] and [8].

  9. Relationships between the normalised difference vegetation index and temperature fluctuations in post-mining sites

    Czech Academy of Sciences Publication Activity Database

    Bujalský, L.; Jirka, V.; Zemek, František; Frouz, J.

    2018-01-01

    Roč. 32, č. 4 (2018), s. 254-263 ISSN 1748-0930 R&D Projects: GA MŠk(CZ) LO1415 Institutional support: RVO:67179843 Keywords: temperature * normalised difference vegetation index (NDVI) * vegetation cover * remote sensing Subject RIV: DF - Soil Science Impact factor: 1.078, year: 2016

  10. A normalisation for the four - detector system for gamma - gamma angular correlation studies

    International Nuclear Information System (INIS)

    Kiang, G.C.; Chen, C.H.; Niu, W.F.

    1994-01-01

    A normalisation method for the multiple-HPGe-detector system is described. The system consists of four coaxial HPGe detectors with a CAMAC event-by-event data acquisition system, enabling six gamma-gamma coincidences at different angles to be measured simultaneously. An application to gamma-gamma correlation studies of 82Kr is presented and discussed. 3 figs., 6 refs. (author)

  11. Normalisation of body composition parameters for nutritional assessment

    International Nuclear Information System (INIS)

    Preston, Thomas

    2014-01-01

    Full text: Normalisation of body composition parameters to an index of body size facilitates comparison of a subject's measurements with those of a population. There is an obvious focus on indexes of obesity, but first it is informative to consider Fat Free Mass (FFM) in the context of common anthropometric measures of body size, namely height and weight. The contention is that FFM is a more physiological measure of body size than body mass. Many studies have shown that FFM relates to height^p. Although there is debate over the appropriate exponent, especially in early life, it appears to lie between 2 and 3. If 2, then FFM Index (FFMI; kg/m2) and Fat Mass Index (FMI; kg/m2) can be summed to give BMI. If 3 were used as exponent, then FFMI (kg/m3) plus FMI (kg/m3) gives the Ponderal Index (PI; weight/height3). In 2013, Burton argued that a cubic exponent is appropriate for normalisation as it is a dimensionless quotient. In 2012, Wang and co-workers repeated earlier observations showing a strong linear relationship between FFM and height3. The importance of the latter study comes from the fact that a 4-compartment body composition model was used, which is recognised as the most accurate means of describing FFM. Once the basis of an FFMI has been defined, it can be used to compare measurements with those of a population, either directly, as a ratio to a norm or as a Z-score. FFMI charts could be developed for use in child growth. Other related indexes can be determined for use in specific circumstances such as: body cell mass index (growth and wasting); skeletal muscle mass index (SMMI) or appendicular SMMI (growth and sarcopenia); bone mineral mass index (osteoporosis); extracellular fluid index (hydration). Finally, it is logical that the same system is used to define an adiposity index, so Fat Mass Index (FMI; kg/height3) can be used as it is consistent with FFMI (kg/height3) and PI. It should also be noted that the index FM/FFM describes an individual
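
    The relationships between the indices mentioned above reduce to simple arithmetic, as the worked example below shows (all numbers invented): with an exponent of 2 the fat-free mass index and fat mass index sum to BMI, and with an exponent of 3 they sum to the Ponderal Index.

```python
height_m = 1.75
fat_free_mass_kg = 58.0
fat_mass_kg = 14.0

ffmi2, fmi2 = fat_free_mass_kg / height_m ** 2, fat_mass_kg / height_m ** 2   # kg/m^2
ffmi3, fmi3 = fat_free_mass_kg / height_m ** 3, fat_mass_kg / height_m ** 3   # kg/m^3
bmi = (fat_free_mass_kg + fat_mass_kg) / height_m ** 2
pi = (fat_free_mass_kg + fat_mass_kg) / height_m ** 3

print(round(ffmi2 + fmi2, 2), round(bmi, 2))   # identical: FFMI + FMI = BMI (exponent 2)
print(round(ffmi3 + fmi3, 2), round(pi, 2))    # identical: FFMI + FMI = Ponderal Index (exponent 3)
```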

  12. Multivariate analysis of eigenvalues and eigenvectors in tensor based morphometry

    Science.gov (United States)

    Rajagopalan, Vidya; Schwartzman, Armin; Hua, Xue; Leow, Alex; Thompson, Paul; Lepore, Natasha

    2015-01-01

    We develop a new algorithm to compute voxel-wise shape differences in tensor-based morphometry (TBM). As in standard TBM, we non-linearly register brain T1-weighted MRI data from a patient and control group to a template, and compute the Jacobian of the deformation fields. In standard TBM, the determinants of the Jacobian matrix at each voxel are statistically compared between the two groups. More recently, a multivariate extension of the statistical analysis involving the deformation tensors derived from the Jacobian matrices has been shown to improve statistical detection power [7]. However, multivariate methods comprising large numbers of variables are computationally intensive and may be subject to noise. In addition, the anatomical interpretation of results is sometimes difficult. Here instead, we analyze the eigenvalues and the eigenvectors of the Jacobian matrices. Our method is validated on brain MRI data from Alzheimer's patients and healthy elderly controls from the Alzheimer's Disease Neuroimaging Database.
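
    A brief sketch of the quantities being compared (not the authors' registration pipeline): for a voxel's Jacobian matrix J, standard TBM summarises local volume change by det(J), whereas the multivariate variant works with the eigenvalues and eigenvectors of the deformation tensor J^T J. The example Jacobian below is invented.

```python
import numpy as np

J = np.array([[1.10, 0.05, 0.00],
              [0.02, 0.95, 0.03],
              [0.00, 0.01, 1.20]])          # invented local Jacobian of a deformation field

det_J = np.linalg.det(J)                    # standard TBM summary: local volume change
evals, evecs = np.linalg.eigh(J.T @ J)      # deformation tensor (symmetric positive definite)
stretches = np.sqrt(evals)                  # principal stretches = singular values of J
print(round(float(det_J), 4), np.round(stretches, 4), round(float(stretches.prod()), 4))
```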

  13. Confluence via strong normalisation in an algebraic λ-calculus with rewriting

    Directory of Open Access Journals (Sweden)

    Pablo Buiras

    2012-03-01

    Full text: The linear-algebraic lambda-calculus and the algebraic lambda-calculus are untyped lambda-calculi extended with arbitrary linear combinations of terms. The former presents the axioms of linear algebra in the form of a rewrite system, while the latter uses equalities. When given by rewrites, algebraic lambda-calculi are not confluent unless further restrictions are added. We provide a type system for the linear-algebraic lambda-calculus enforcing strong normalisation, which gives back confluence. The type system allows an abstract interpretation in System F.

  14. Quantification of tumour 18F-FDG uptake: Normalise to blood glucose or scale to liver uptake?

    Energy Technology Data Exchange (ETDEWEB)

    Keramida, Georgia [Brighton and Sussex Medical School, Clinical Imaging Sciences Centre, Brighton (United Kingdom); Brighton and Sussex University Hospitals NHS Trust, Department of Nuclear Medicine, Brighton (United Kingdom); University of Sussex, Clinical Imaging Sciences Centre, Brighton (United Kingdom); Dizdarevic, Sabina; Peters, A.M. [Brighton and Sussex Medical School, Clinical Imaging Sciences Centre, Brighton (United Kingdom); Brighton and Sussex University Hospitals NHS Trust, Department of Nuclear Medicine, Brighton (United Kingdom); Bush, Janice [Brighton and Sussex Medical School, Clinical Imaging Sciences Centre, Brighton (United Kingdom)

    2015-09-15

    To compare normalisation to blood glucose (BG) with scaling to hepatic uptake for quantification of tumour 18F-FDG uptake using the brain as a surrogate for tumours. Standardised uptake value (SUV) was measured over the liver, cerebellum, basal ganglia, and frontal cortex in 304 patients undergoing 18F-FDG PET/CT. The relationship between brain FDG clearance and SUV was theoretically defined. Brain SUV decreased exponentially with BG, with similar constants between cerebellum, basal ganglia, and frontal cortex (0.099-0.119 (mmol/l)^-1) and similar to values for tumours estimated from the literature. Liver SUV, however, correlated positively with BG. Brain-to-liver SUV ratio therefore showed an inverse correlation with BG, well-fitted with a hyperbolic function (R = 0.83), as theoretically predicted. Brain SUV normalised to BG (nSUV) displayed a nonlinear correlation with BG (R = 0.55); however, as theoretically predicted, brain nSUV/liver SUV showed almost no correlation with BG. Correction of brain SUV using BG raised to an exponential power of 0.099 (mmol/l)^-1 also eliminated the correlation between brain SUV and BG. Brain SUV continues to correlate with BG after normalisation to BG. Likewise, liver SUV is unsuitable as a reference for tumour FDG uptake. Brain SUV divided by liver SUV, however, shows minimal dependence on BG. (orig.)
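
    A hedged sketch of the quantification options being compared; the reference glucose of 5.0 mmol/l is a common convention rather than a value prescribed by the study, and the exponential constant defaults to the brain value reported above:

        import numpy as np

        def glucose_normalised_suv(suv, bg_mmol_l, bg_ref=5.0):
            # Conventional normalisation to blood glucose ("nSUV" above).
            return suv * bg_mmol_l / bg_ref

        def liver_scaled_suv(suv, liver_suv):
            # Scaling to hepatic uptake: target SUV divided by liver SUV.
            return suv / liver_suv

        def exponential_bg_correction(suv, bg_mmol_l, k=0.099, bg_ref=5.0):
            # Exponential correction with k ~ 0.099 (mmol/l)^-1, which the study
            # reports removes the residual dependence of brain SUV on BG.
            return suv * np.exp(k * (bg_mmol_l - bg_ref))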

  15. A normalised seawater strontium isotope curve. Possible implications for Neoproterozoic-Cambrian weathering rates and the further oxygenation of the Earth

    International Nuclear Information System (INIS)

    Shields, G.A.

    2007-01-01

    The strontium isotope composition of seawater is strongly influenced on geological time scales by changes in the rates of continental weathering relative to ocean crust alteration. However, the potential of the seawater 87Sr/86Sr curve to trace globally integrated chemical weathering rates has not been fully realised because ocean 87Sr/86Sr is also influenced by the isotopic evolution of Sr sources to the ocean. A preliminary attempt is made here to normalise the seawater 87Sr/86Sr curve to plausible trends in the 87Sr/86Sr ratios of the three major Sr sources: carbonate dissolution, silicate weathering and submarine hydrothermal exchange. The normalised curve highlights the Neoproterozoic-Phanerozoic transition as a period of exceptionally high continental influence, indicating that this interval was characterised by a transient increase in global weathering rates and/or by the weathering of unusually radiogenic crustal rocks. Close correlation between the normalised 87Sr/86Sr curve, a published seawater δ34S curve and atmospheric pCO2 models is used here to argue that elevated chemical weathering rates were a major contributing factor to the steep rise in seawater 87Sr/86Sr from 650 Ma to 500 Ma. Elevated weathering rates during the Neoproterozoic-Cambrian interval led to increased nutrient availability, organic burial and to the further oxygenation of Earth's surface environment. Use of normalised seawater 87Sr/86Sr curves will, it is hoped, help to improve future geochemical models of Earth System dynamics. (orig.)

  16. Learning from doing: the case for combining normalisation process theory and participatory learning and action research methodology for primary healthcare implementation research.

    Science.gov (United States)

    de Brún, Tomas; O'Reilly-de Brún, Mary; O'Donnell, Catherine A; MacFarlane, Anne

    2016-08-03

    The implementation of research findings is not a straightforward matter. There are substantive and recognised gaps in the process of translating research findings into practice and policy. In order to overcome some of these translational difficulties, a number of strategies have been proposed for researchers. These include greater use of theoretical approaches in research focused on implementation, and use of a wider range of research methods appropriate to policy questions and the wider social context in which they are placed. However, questions remain about how to combine theory and method in implementation research. In this paper, we respond to these proposals. Focussing on a contemporary social theory, Normalisation Process Theory, and a participatory research methodology, Participatory Learning and Action, we discuss the potential of their combined use for implementation research. We note ways in which Normalisation Process Theory and Participatory Learning and Action are congruent and may therefore be used as heuristic devices to explore, better understand and support implementation. We also provide examples of their use in our own research programme about community involvement in primary healthcare. Normalisation Process Theory alone has, to date, offered useful explanations for the success or otherwise of implementation projects post-implementation. We argue that Normalisation Process Theory can also be used to prospectively support implementation journeys. Furthermore, Normalisation Process Theory and Participatory Learning and Action can be used together so that interventions to support implementation work are devised and enacted with the expertise of key stakeholders. We propose that the specific combination of this theory and methodology possesses the potential, because of their combined heuristic force, to offer a more effective means of supporting implementation projects than either one might do on its own, and of providing deeper understandings of

  17. Normalisation: ROI optimal treatment planning - SNDH pattern

    International Nuclear Information System (INIS)

    Shilvat, D.V.; Bhandari, Virendra; Tamane, Chandrashekhar; Pangam, Suresh

    2001-01-01

    Maximising dose precision to the target / ROI (Region of Interest), while respecting the tolerance dose of normal tissue, is the aim of ideal treatment planning. This goal is achieved with advanced modalities such as micro MLC, a simulator and a 3-dimensional treatment planning system. The SNDH PATTERN, however, uses the minimum available resources (ALCYON II telecobalt unit, CT scan, MULTIDATA 2-dimensional treatment planning system) to their maximum utility and reaches the same precision as these advanced modalities. Among the number of parameters used, 'NORMALISATION TO THE ROI' achieves the aim of the treatment planning effectively. This is illustrated with an example of modified treatment planning for the esophagus based on the SNDH pattern. Results are attractive and self-explanatory. By implementing the SNDH pattern, the QUALITY INDEX of the treatment plan exceeds 90%, with a substantial reduction in dose to the vital organs. The aim is to utilize the minimum available resources efficiently to achieve the highest possible precision in delivering a homogeneous dose to the ROI while respecting the tolerance dose of vital organs.

  18. Technology, normalisation and male sex work.

    Science.gov (United States)

    MacPhail, Catherine; Scott, John; Minichiello, Victor

    2015-01-01

    Technological change, particularly the growth of the Internet and smart phones, has increased the visibility of male escorts, expanded their client base and diversified the range of venues in which male sex work can take place. Specifically, the Internet has relocated some forms of male sex work away from the street and thereby increased market reach, visibility and access and the scope of sex work advertising. Using the online profiles of 257 male sex workers drawn from six of the largest websites advertising male sexual services in Australia, the role of the Internet in facilitating the normalisation of male sex work is discussed. Specifically, we examine how engagement with the sex industry has been reconstituted in terms of better informed consumer-seller decisions for both clients and sex workers. Rather than being seen as a 'deviant' activity, understood in terms of pathology or criminal activity, male sex work is increasingly presented as an everyday commodity in the market place. In this context, the management of risks associated with sex work has shifted from formalised social control to more informal practices conducted among online communities of clients and sex workers. We discuss the implications for health, legal and welfare responses within an empowerment paradigm.

  19. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
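
    A generic sketch of the comparison described above, assuming the TSFS data have been unfolded into a samples-by-features matrix; function and variable names are illustrative, not the authors' code:

        import numpy as np
        from scipy.spatial.distance import pdist, squareform

        def eed_factors(X, n_factors=2, use_dissimilarity=True):
            """Eigenvalue-eigenvector decomposition (EED) of a sample-wise matrix.

            X: (n_samples, n_features) unfolded fluorescence intensities.
            With use_dissimilarity=True the pair-wise Euclidean dissimilarity
            matrix is decomposed (the approach advocated above); otherwise the
            sample-by-sample covariance of the mean-centred data is used.
            """
            if use_dissimilarity:
                M = squareform(pdist(X, metric='euclidean'))
            else:
                M = np.cov(X - X.mean(axis=0), rowvar=True)
            eigvals, eigvecs = np.linalg.eigh(M)
            order = np.argsort(eigvals)[::-1]          # strongest factors first
            return eigvals[order[:n_factors]], eigvecs[:, order[:n_factors]]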

  20. Joint eigenvector estimation from mutually anisotropic tensors improves susceptibility tensor imaging of the brain, kidney, and heart.

    Science.gov (United States)

    Dibb, Russell; Liu, Chunlei

    2017-06-01

    To develop a susceptibility-based MRI technique for probing microstructure and fiber architecture of magnetically anisotropic tissues, such as central nervous system white matter, renal tubules, and myocardial fibers, in three dimensions using susceptibility tensor imaging (STI) tools. STI can probe tissue microstructure, but is limited by reconstruction artifacts because of absent phase information outside the tissue and noise. STI accuracy may be improved by estimating a joint eigenvector from mutually anisotropic susceptibility and relaxation tensors. Gradient-recalled echo image data were simulated using a numerical phantom and acquired from the ex vivo mouse brain, kidney, and heart. Susceptibility tensor data were reconstructed using STI, regularized STI, and the proposed algorithm of mutually anisotropic and joint eigenvector STI (MAJESTI). Fiber map and tractography results from each technique were compared with diffusion tensor data. MAJESTI reduced the estimated susceptibility tensor orientation error by 30% in the phantom, 36% in brain white matter, 40% in the inner medulla of the kidney, and 45% in myocardium. This improved the continuity and consistency of susceptibility-based fiber tractography in each tissue. MAJESTI estimation of the susceptibility tensors yields lower orientation errors for susceptibility-based fiber mapping and tractography in the intact brain, kidney, and heart. Magn Reson Med 77:2331-2346, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  1. SOCIETAL REPORTING: LIMITS AND CHALLENGES OF THE "GLOBAL REPORTING INITIATIVE" PROPOSAL FOR INTERNATIONAL STANDARDISATION

    OpenAIRE

    Michel Capron; Françoise Quairel

    2003-01-01

    International audience; Drawing on Anglo-Saxon accounting standardisation, the Global Reporting Initiative (GRI) proposes a framework for the voluntary publication of societal information. Its transposition has limits that in practice render its principles inapplicable. Nevertheless, the framework is tending to impose itself, and large companies may find in it a means of avoiding binding regulation.

  2. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach.

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-05

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. An adjoint-based scheme for eigenvalue error improvement

    International Nuclear Information System (INIS)

    Merton, S.R.; Smedley-Stevenson, R.P.; Pain, C.C.; El-Sheikh, A.H.; Buchan, A.G.

    2011-01-01

    A scheme for improving the accuracy and reducing the error in eigenvalue calculations is presented. Using a first order Taylor series expansion of both the eigenvalue solution and the residual of the governing equation, an approximation to the error in the eigenvalue is derived. This is done using a convolution of the equation residual and adjoint solution, which is calculated in-line with the primal solution. A defect correction on the solution is then performed in which the approximation to the error is used to apply a correction to the eigenvalue. The method is shown to dramatically improve convergence of the eigenvalue. The equation for the eigenvalue is shown to simplify when certain normalizations are applied to the eigenvector. Two such normalizations are considered; the first of these is a fission-source type of normalisation and the second is an eigenvector normalisation. Results are demonstrated on a number of demanding elliptic problems using continuous Galerkin weighted finite elements. Moreover, the correction scheme may also be applied to hyperbolic problems and arbitrary discretization. This is not limited to spatial corrections and may be used throughout the phase space of the discrete equation. The applied correction not only improves fidelity of the calculation, it allows assessment of the reliability of numerical schemes to be made and could be used to guide mesh adaption algorithms or to automate mesh generation schemes. (author)
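
    The idea of an adjoint-weighted eigenvalue defect correction can be illustrated on a small algebraic generalised eigenproblem A phi = lambda B phi; this is a hedged sketch of the general principle, not the authors' finite-element implementation (in practice the adjoint eigenvector comes from the same iterative solver as the primal one):

        import numpy as np

        def corrected_eigenvalue(A, B, lam_approx, phi_approx):
            """First-order, adjoint-weighted defect correction of an eigenvalue.

            The residual of the approximate primal solution is contracted with
            the adjoint (left) eigenvector to estimate the eigenvalue error,
            which is then added back as a defect correction.
            """
            # Adjoint eigenvector: right eigenvector of the transposed pencil
            # (B^T)^-1 A^T, picked as the mode closest to lam_approx.
            eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B.T, A.T))
            psi = eigvecs[:, np.argmin(np.abs(eigvals - lam_approx))]

            residual = A @ phi_approx - lam_approx * (B @ phi_approx)
            # The denominator reflects the chosen eigenvector normalisation.
            delta = (psi @ residual) / (psi @ (B @ phi_approx))
            return lam_approx + delta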

  4. OpenPrescribing: normalised data and software tool to research trends in English NHS primary care prescribing 1998-2016.

    Science.gov (United States)

    Curtis, Helen J; Goldacre, Ben

    2018-02-23

    We aimed to compile and normalise England's national prescribing data for 1998-2016 to facilitate research on long-term time trends and create an open-data exploration tool for wider use. We compiled data from each individual year's national statistical publications and normalised them by mapping each drug to its current classification within the national formulary where possible. We created a freely accessible, interactive web tool to allow anyone to interact with the processed data. We downloaded all available annual prescription cost analysis datasets, which include cost and quantity for all prescription items dispensed in the community in England. Medical devices and appliances were excluded. We measured the extent of normalisation of data and aimed to produce a functioning accessible analysis tool. All data were imported successfully. 87.5% of drugs were matched exactly on name to the current formulary and a further 6.5% to similar drug names. All drugs in core clinical chapters were reconciled to their current location in the data schema, with only 1.26% of drugs not assigned a current chemical code. We created an openly accessible interactive tool to facilitate wider use of these data. Publicly available data can be made accessible through interactive online tools to help researchers and policy-makers explore time trends in prescribing. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. Experimentation of Eigenvector Dynamics in a Multiple Input Multiple Output Channel in the 5GHz Band

    DEFF Research Database (Denmark)

    Brown, Tim; Eggers, Patrick Claus F.; Katz, Marcos

    2005-01-01

    Much research has been carried out on the production of both physical and non-physical Multiple Input Multiple Output channel models with regard to increased channel capacity as well as analysis of eigenvalues through the use of singular value decomposition. Little attention has been paid to the analysis of vector dynamics in terms of how the state of eigenvectors will change as a mobile is moving through a changing physical environment. This is important in terms of being able to track the orthogonal eigenmodes at system level, while also relieving the burden of tracking the full channel...
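
    A minimal sketch of how per-snapshot eigenmodes can be extracted from a measured channel and their dynamics quantified, assuming a time series of channel matrices is available; this is illustrative only and not the measurement campaign's processing chain:

        import numpy as np

        def eigenmode_track(H_series):
            """Track the dominant transmit eigenvector of a time-varying MIMO channel.

            H_series: array (n_snapshots, n_rx, n_tx) of channel matrices.
            Returns the dominant right-singular vector per snapshot and the angle
            between consecutive vectors as a simple measure of eigenvector dynamics
            while the terminal moves through the environment.
            """
            v_prev, modes, angles = None, [], []
            for H in H_series:
                _, _, Vh = np.linalg.svd(H)
                v = Vh[0].conj()                      # dominant transmit eigenvector
                if v_prev is not None:
                    c = np.abs(np.vdot(v_prev, v))    # |<v_prev, v>| in [0, 1]
                    angles.append(np.arccos(np.clip(c, 0.0, 1.0)))
                modes.append(v)
                v_prev = v
            return np.array(modes), np.array(angles)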

  6. Analysis of a simulated microarray dataset: Comparison of methods for data normalisation and detection of differential expression (Open Access publication)

    Directory of Open Access Journals (Sweden)

    Mouzaki Daphné

    2007-11-01

    Full text: Microarrays allow researchers to measure the expression of thousands of genes in a single experiment. Before statistical comparisons can be made, the data must be assessed for quality and normalisation procedures must be applied, of which many have been proposed. Methods of comparing the normalised data are also abundant, and no clear consensus has yet been reached. The purpose of this paper was to compare those methods used by the EADGENE network on a very noisy simulated data set. With the a priori knowledge of which genes are differentially expressed, it is possible to compare the success of each approach quantitatively. Use of an intensity-dependent normalisation procedure was common, as was correction for multiple testing. Most variety in performance resulted from differing approaches to data quality and the use of different statistical tests. Very few of the methods used any kind of background correction. A number of approaches achieved a success rate of 95% or above, with relatively small numbers of false positives and negatives. Applying stringent spot selection criteria and elimination of data did not improve the false positive rate and greatly increased the false negative rate. However, most approaches performed well, and it is encouraging that widely available techniques can achieve such good results on a very noisy data set.

  7. AMDLIBF, IBM 360 Subroutine Library, Eigenvalues, Eigenvectors, Matrix Inversion

    International Nuclear Information System (INIS)

    Wang, Jesse Y.

    1980-01-01

    Description of problem or function: AMDLIBF is a subset of the IBM 360 Subroutine Library at the Applied Mathematics Division at Argonne. This subset includes library category F: Identification/Description: F152S F SYMINV: Invert sym. matrices, solve lin. systems; F154S A DOTP: Double plus precision accum. inner prod.; F156S F RAYCOR: Rayleigh corrections for eigenvalues; F161S F XTRADP: A fast extended precision inner product; F162S A XTRADP: Inner product of two DP real vectors; F202S F1 EIGEN: Eigen-system for real symmetric matrix; F203S F: Driver for F202S; F248S F RITZIT: Largest eigenvalue and vec. real sym. matrix; F261S F EIGINV: Inverse eigenvalue problem; F313S F CQZHES: Reduce cmplx matrices to upper Hess and tri; F314S F CQZVAL: Reduce complex matrix to upper Hess. form; F315S F CQZVEC: Eigenvectors of cmplx upper triang. syst.; F316S F CGG: Driver for complex general Eigen-problem; F402S F MATINV: Matrix inversion and sol. of linear eqns.; F403S F: Driver for F402S; F452S F CHOLLU,CHOLEQ: Sym. decomp. of pos. def. band matrices; F453S F MATINC: Inversion of complex matrices; F454S F CROUT: Solution of simultaneous linear equations; F455S F CROUTC: Sol. of simultaneous complex linear eqns.; F456S F1 DIAG: Integer preserving Gaussian elimination

  8. Living under the influence: normalisation of alcohol consumption in our cities

    Directory of Open Access Journals (Sweden)

    Xisca Sureda

    2017-01-01

    Full text: Harmful use of alcohol is one of the world's leading health risks. A positive association between certain characteristics of the urban environment and individual alcohol consumption has been documented in previous research. When developing a tool characterising the urban environment of alcohol in the cities of Barcelona and Madrid we observed that alcohol is ever present in our cities. Urban residents are constantly exposed to a wide variety of alcohol products, marketing and promotion and signs of alcohol consumption. In this field note, we reflect on the normalisation of alcohol in urban environments. We highlight the need for further research to better understand attitudes and practices in relation to alcohol consumption. This type of urban study is necessary to support policy interventions to prevent and control harmful alcohol use.

  9. Spin-orbit splitted excited states using explicitly-correlated equation-of-motion coupled-cluster singles and doubles eigenvectors

    Science.gov (United States)

    Bokhan, Denis; Trubnikov, Dmitrii N.; Perera, Ajith; Bartlett, Rodney J.

    2018-04-01

    An explicitly-correlated method for the calculation of excited states with spin-orbit couplings has been formulated and implemented. The developed approach utilizes left and right eigenvectors of the equation-of-motion coupled-cluster model, which is based on the linearly approximated explicitly correlated coupled-cluster singles and doubles [CCSD(F12)] method. The spin-orbit interactions are introduced by using the spin-orbit mean field (SOMF) approximation of the Breit-Pauli Hamiltonian. Numerical tests for several atoms and molecules show good agreement between the explicitly-correlated results and the corresponding values calculated in the complete basis set (CBS) limit; highly accurate excitation energies can be obtained already at the triple-ζ level.

  10. ReadqPCR and NormqPCR: R packages for the reading, quality checking and normalisation of RT-qPCR quantification cycle (Cq) data

    Directory of Open Access Journals (Sweden)

    Perkins James R

    2012-07-01

    Full text: Background: Measuring gene transcription using real-time reverse transcription polymerase chain reaction (RT-qPCR) technology is a mainstay of molecular biology. Technologies now exist to measure the abundance of many transcripts in parallel. The selection of the optimal reference gene for the normalisation of this data is a recurring problem, and several algorithms have been developed in order to solve it. So far nothing in R exists to unite these methods, together with other functions to read in and normalise the data using the chosen reference gene(s). Results: We have developed two R/Bioconductor packages, ReadqPCR and NormqPCR, intended for a user with some experience with high-throughput data analysis using R, who wishes to use R to analyse RT-qPCR data. We illustrate their potential use in a workflow analysing a generic RT-qPCR experiment, and apply this to a real dataset. Packages are available from http://www.bioconductor.org/packages/release/bioc/html/ReadqPCR.html and http://www.bioconductor.org/packages/release/bioc/html/NormqPCR.html Conclusions: These packages increase the repertoire of RT-qPCR analysis tools available to the R user and allow them to (amongst other things) read their data into R, hold it in an ExpressionSet-compatible R object, choose appropriate reference genes, normalise the data and look for differential expression between samples.
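
    The packages themselves are R/Bioconductor software; as a language-neutral illustration of the normalisation step they automate, the sketch below applies the standard 2^-ddCq calculation to a target gene and a chosen reference gene (it assumes roughly 100% amplification efficiency and is not part of either package):

        def relative_expression(cq_target, cq_reference, cq_target_ctrl, cq_reference_ctrl):
            """2^-ddCq normalisation of a target gene to a reference gene.

            cq_*: quantification-cycle values for the treated sample and a control.
            Assumes amplification efficiency close to 2 per cycle.
            """
            d_cq_sample = cq_target - cq_reference            # normalise sample to reference gene
            d_cq_control = cq_target_ctrl - cq_reference_ctrl
            dd_cq = d_cq_sample - d_cq_control
            return 2.0 ** (-dd_cq)                            # fold change relative to control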

  11. Four weeks of near-normalisation of blood glucose improves the insulin response to glucagon-like peptide-1 and glucose-dependent insulinotropic polypeptide in patients with type 2 diabetes

    DEFF Research Database (Denmark)

    Højberg, P V; Vilsbøll, T; Rabøl, R

    2008-01-01

    The aim was to investigate whether 4 weeks of near-normalisation of the blood glucose level could improve insulin responses to GIP and GLP-1 in patients with type 2 diabetes. METHODS: Eight obese patients with type 2 diabetes with poor glycaemic control (HbA1c 8.6 +/- 1.3%) were investigated before and after 4 weeks of near-normalisation of blood glucose (mean blood glucose 7.4 +/- 1.2 mmol/l) using insulin treatment. Before and after insulin treatment the participants underwent three hyperglycaemic clamps (15 mmol/l) with infusion of GLP-1, GIP or saline. Insulin responses were evaluated as the incremental area under the plasma C-peptide curve. RESULTS: Before and after near-normalisation of blood glucose, the C-peptide responses did not differ during the early phase of insulin secretion (0-10 min). The late phase C-peptide response (10-120 min) increased during GIP infusion from 33.0 +/- 8.5 to 103.9 +/- 24.2 (nmol/l) x (110 min)(-1...

  12. Preoperative mapping of cortical language areas in adult brain tumour patients using PET and individual non-normalised SPM analyses

    International Nuclear Information System (INIS)

    Meyer, Philipp T.; Sturz, Laszlo; Schreckenberger, Mathias; Setani, Keyvan S.; Buell, Udalrich; Spetzger, Uwe; Meyer, Georg F.; Sabri, Osama

    2003-01-01

    In patients scheduled for the resection of perisylvian brain tumours, knowledge of the cortical topography of language functions is crucial in order to avoid neurological deficits. We investigated the applicability of statistical parametric mapping (SPM) without stereotactic normalisation for individual preoperative language function brain mapping using positron emission tomography (PET). Seven right-handed adult patients with left-sided brain tumours (six frontal and one temporal) underwent 12 oxygen-15 labelled water PET scans during overt verb generation and rest. Individual activation maps were calculated for P<0.005 and P<0.001 without anatomical normalisation and overlaid onto the individuals' magnetic resonance images for preoperative planning. Activations corresponding to Broca's and Wernicke's areas were found in five and six cases, respectively, for P<0.005 and in three and six cases, respectively, for P<0.001. One patient with a glioma located in the classical Broca's area without aphasic symptoms presented an activation of the adjacent inferior frontal cortex and of a right-sided area homologous to Broca's area. Four additional patients with left frontal tumours also presented activations of the right-sided Broca's homologue; two of these showed aphasic symptoms and two only a weak or no activation of Broca's area. Other frequently observed activations included bilaterally the superior temporal gyri, prefrontal cortices, anterior insulae, motor areas and the cerebellum. The middle and inferior temporal gyri were activated predominantly on the left. An SPM group analysis (P<0.05, corrected) in patients with left frontal tumours confirmed the activation pattern shown by the individual analyses. We conclude that SPM analyses without stereotactic normalisation offer a promising alternative for analysing individual preoperative language function brain mapping studies. The observed right frontal activations agree with proposed reorganisation processes, but

  13. Calculation of normalised organ and effective doses to adult reference computational phantoms from contemporary computed tomography scanners

    International Nuclear Information System (INIS)

    Jansen, Jan T.M.; Shrimpton, Paul C.

    2010-01-01

    The general-purpose Monte Carlo radiation transport code MCNPX has been used to simulate photon transport and energy deposition in anthropomorphic phantoms due to the x-ray exposure from the Philips iCT 256 and Siemens Definition CT scanners, together with the previously studied General Electric 9800. The MCNPX code was compiled with the Intel FORTRAN compiler and run on a Linux PC cluster. A patch has been successfully applied to reduce computing times by about 4%. The International Commission on Radiological Protection (ICRP) has recently published the Adult Male (AM) and Adult Female (AF) reference computational voxel phantoms as successors to the Medical Internal Radiation Dose (MIRD) stylised hermaphrodite mathematical phantoms that form the basis for the widely-used ImPACT CT dosimetry tool. Comparisons of normalised organ and effective doses calculated for a range of scanner operating conditions have demonstrated significant differences in results (in excess of 30%) between the voxel and mathematical phantoms as a result of variations in anatomy. These analyses illustrate the significant influence of choice of phantom on normalised organ doses and the need for standardisation to facilitate comparisons of dose. Further such dose simulations are needed in order to update the ImPACT CT Patient Dosimetry spreadsheet for contemporary CT practice. (author)

  14. The contribution of online content to the promotion and normalisation of female genital cosmetic surgery: a systematic review of the literature.

    Science.gov (United States)

    Mowat, Hayley; McDonald, Karalyn; Dobson, Amy Shields; Fisher, Jane; Kirkman, Maggie

    2015-11-25

    Women considering female genital cosmetic surgery (FGCS) are likely to use the internet as a key source of information during the decision-making process. The aim of this systematic review was to determine what is known about the role of the internet in the promotion and normalisation of female genital cosmetic surgery and to identify areas for future research. Eight social science, medical, and communication databases and Google Scholar were searched for peer-reviewed papers published in English. Results from all papers were analysed to identify recurring and unique themes. Five papers met inclusion criteria. Three of the papers reported investigations of website content of FGCS providers, a fourth compared motivations for labiaplasty publicised on provider websites with those disclosed by women in online communities, and the fifth analysed visual depictions of female genitalia in online pornography. Analysis yielded five significant and interrelated patterns of representation, each functioning to promote and normalise the practice of FGCS: pathologisation of genital diversity; female genital appearance as important to wellbeing; characteristics of women's genitals are important for sex life; female body as degenerative and improvable through surgery; and FGCS as safe, easy, and effective. A significant gap was identified in the literature: the ways in which user-generated content might function to perpetuate, challenge, or subvert the normative discourses prevalent in online pornography and surgical websites. Further research is needed to contribute to knowledge of the role played by the internet in the promotion and normalisation of female genital cosmetic surgery.

  15. Decaying states as complex energy eigenvectors in generalized quantum mechanics

    International Nuclear Information System (INIS)

    Sudarshan, E.C.G.; Chiu, C.B.; Gorini, V.

    1977-04-01

    The problem of particle decay is reexamined within the Hamiltonian formalism. By deforming contours of integration, the survival amplitude is expressed as a sum of purely exponential contributions arising from the simple poles of the resolvent on the second sheet plus a background integral along a complex contour Γ running below the location of the poles. One observes that the time dependence of the survival amplitude in the small time region is strongly correlated to the asymptotic behaviour of the energy spectrum of the system; one computes the small time behavior of the survival amplitude for a wide variety of asymptotic behaviors. In the special case of the Lee model, using a formal procedure of analytic continuation, it is shown that a complete set of complex energy eigenvectors of the Hamiltonian can be associated with the poles of the resolvent and with the background contour Γ. These poles and points along Γ correspond to the discrete and the continuum states respectively. In this context, each unstable particle is associated with a well defined object, which is a discrete generalized eigenstate of the Hamiltonian having a complex eigenvalue, with its real and negative imaginary parts being the mass and half width of the particle respectively. Finally, one briefly discusses the analytic continuation of the scattering amplitude within this generalized scheme, and notes the appearance of 'redundant poles' which do not correspond to discrete solutions of the modified eigenvalue problem

  16. ENEKuS--A Key Model for Managing the Transformation of the Normalisation of the Basque Language in the Workplace

    Science.gov (United States)

    Marko, Inazio; Pikabea, Inaki

    2013-01-01

    The aim of this study is to develop a reference model for intervention in the language processes applied to the transformation of language normalisation within organisations of a socio-economic nature. It is based on a case study of an experiment carried out over 10 years within a trade union confederation, and has pursued a strategy of a…

  17. Living under the influence: normalisation of alcohol consumption in our cities.

    Science.gov (United States)

    Sureda, Xisca; Villalbí, Joan R; Espelt, Albert; Franco, Manuel

    Harmful use of alcohol is one of the world's leading health risks. A positive association between certain characteristics of the urban environment and individual alcohol consumption has been documented in previous research. When developing a tool characterising the urban environment of alcohol in the cities of Barcelona and Madrid we observed that alcohol is ever present in our cities. Urban residents are constantly exposed to a wide variety of alcohol products, marketing and promotion and signs of alcohol consumption. In this field note, we reflect on the normalisation of alcohol in urban environments. We highlight the need for further research to better understand attitudes and practices in relation to alcohol consumption. This type of urban study is necessary to support policy interventions to prevent and control harmful alcohol use. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  18. Normalised subband adaptive filtering with extended adaptiveness on degree of subband filters

    Science.gov (United States)

    Samuyelu, Bommu; Rajesh Kumar, Pullakura

    2017-12-01

    This paper proposes an adaptive normalised subband adaptive filtering (NSAF) scheme to improve NSAF performance. In the proposed NSAF, adaptiveness is extended beyond the existing variants in two ways: first, the step-size is made adaptive, and second, the selection of subbands is made adaptive. Hence, the proposed NSAF is termed here the variable step-size-based NSAF with selected subbands (VS-SNSAF). Experimental investigations are carried out to demonstrate the performance (in terms of convergence) of the VS-SNSAF against the conventional NSAF and its state-of-the-art adaptive variants. The results report the superior performance of VS-SNSAF over the traditional NSAF and its variants. Its stability and robustness against noise are also proved, and its computational complexity is assessed.
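
    At the core of any NSAF variant is an energy-normalised (NLMS-type) coefficient update applied per subband; the sketch below shows that normalised update for a single band and is illustrative only (the VS-SNSAF described above additionally adapts the step-size and selects which subbands update, which is not reproduced here):

        import numpy as np

        def normalised_update(w, x_buf, d, mu=0.5, eps=1e-8):
            """One normalised LMS step (the per-subband update inside NSAF).

            w:     current filter taps, shape (L,)
            x_buf: the most recent L input samples of one subband, shape (L,)
            d:     desired (reference) sample for that subband
            mu:    step-size, fixed here but adaptive in VS-SNSAF
            """
            e = d - w @ x_buf                                   # a priori error
            w = w + mu * e * x_buf / (x_buf @ x_buf + eps)      # energy-normalised step
            return w, e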

  19. Good quality of oral anticoagulation treatment in general practice using international normalised ratio point of care testing

    DEFF Research Database (Denmark)

    Løkkegaard, Thomas; Pedersen, Tina Heidi; Lind, Bent

    2015-01-01

    INTRODUCTION: Oral anticoagulation treatment (OACT) with warfarin is common in general practice. Increasingly, international normalised ratio (INR) point of care testing (POCT) is being used to manage patients. The aim of this study was to describe and analyse the quality of OACT with warfarin ... Practices using INR POCT in the management of patients in warfarin treatment provided good quality of care. Sampling interval and diagnostic coding were significantly correlated with treatment quality.

  20. The Application of Principal Component Analysis Using Fixed Eigenvectors to the Infrared Thermographic Inspection of the Space Shuttle Thermal Protection System

    Science.gov (United States)

    Cramer, K. Elliott; Winfree, William P.

    2006-01-01

    The Nondestructive Evaluation Sciences Branch at NASA's Langley Research Center has been actively involved in the development of thermographic inspection techniques for more than 15 years. Since the Space Shuttle Columbia accident, NASA has focused on the improvement of advanced NDE techniques for the Reinforced Carbon-Carbon (RCC) panels that comprise the orbiter's wing leading edge. Various nondestructive inspection techniques have been used in the examination of the RCC, but thermography has emerged as an effective inspection alternative to more traditional methods. Thermography is a non-contact inspection method as compared to ultrasonic techniques which typically require the use of a coupling medium between the transducer and material. Like radiographic techniques, thermography can be used to inspect large areas, but has the advantage of minimal safety concerns and the ability for single-sided measurements. Principal Component Analysis (PCA) has been shown effective for reducing thermographic NDE data. A typical implementation of PCA is when the eigenvectors are generated from the data set being analyzed. Although it is a powerful tool for enhancing the visibility of defects in thermal data, PCA can be computationally intense and time consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is now governed by the presence of the defect, not the good material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued in which a fixed set of eigenvectors is used to process the thermal data from the RCC materials. These eigenvectors can be generated either from an analytic model of the thermal response of the material under examination, or from a large cross section of experimental data. This paper will provide the
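
    A minimal sketch of the fixed-eigenvector variant of thermographic PCA, assuming the image sequence has been flattened to a frames-by-pixels matrix and that a set of temporal eigenvectors is already available (from an analytic thermal model or a library of prior inspections, as described above); names are illustrative:

        import numpy as np

        def pca_fixed_basis(thermal_seq, fixed_eigvecs, n_components=2):
            """Project a thermographic sequence onto precomputed eigenvectors.

            thermal_seq:   (n_frames, n_pixels) temperature-versus-time data.
            fixed_eigvecs: (n_frames, k) temporal eigenvectors computed offline,
                           so no per-inspection eigen-decomposition is required
                           and a dominant defect cannot skew the basis.
            Returns n_components component images, shape (n_components, n_pixels).
            """
            X = thermal_seq - thermal_seq.mean(axis=0)   # remove each pixel's mean over time
            return fixed_eigvecs[:, :n_components].T @ X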

  1. Good quality of oral anticoagulation treatment in general practice using international normalised ratio point of care testing

    DEFF Research Database (Denmark)

    Løkkegaard, Thomas; Pedersen, Tina Heidi; Lind, Bent

    2015-01-01

    INTRODUCTION: Oral anticoagulation treatment (OACT) with warfarin is common in general practice. Increasingly, international normalised ratio (INR) point of care testing (POCT) is being used to manage patients. The aim of this study was to describe and analyse the quality of OACT with warfarin ... Practices using INR POCT in the management of patients in warfarin treatment provided good quality of care. Sampling interval and diagnostic coding were significantly correlated with treatment quality. FUNDING: The study received financial support from the Sarah Krabbe Foundation, the General Practitioners' Education and Development Foundation...

  2. 18S rRNA is a reliable normalisation gene for real time PCR based on influenza virus infected cells

    Directory of Open Access Journals (Sweden)

    Kuchipudi Suresh V

    2012-10-01

    Full text: Background: One requisite of quantitative reverse transcription PCR (qRT-PCR) is to normalise the data with an internal reference gene that is invariant regardless of treatment, such as virus infection. Several studies have found variability in the expression of commonly used housekeeping genes, such as beta-actin (ACTB) and glyceraldehyde-3-phosphate dehydrogenase (GAPDH), under different experimental settings. However, ACTB and GAPDH remain widely used in the studies of host gene response to virus infections, including influenza viruses. To date no detailed study has been described that compares the suitability of commonly used housekeeping genes in influenza virus infections. The present study evaluated several commonly used housekeeping genes [ACTB, GAPDH, 18S ribosomal RNA (18S rRNA), ATP synthase, H+ transporting, mitochondrial F1 complex, beta polypeptide (ATP5B) and ATP synthase, H+ transporting, mitochondrial Fo complex, subunit C1 (subunit 9) (ATP5G1)] to identify the most stably expressed gene in human, pig, chicken and duck cells infected with a range of influenza A virus subtypes. Results: The relative expression stability of commonly used housekeeping genes was determined in primary human bronchial epithelial cells (HBECs), pig tracheal epithelial cells (PTECs), and chicken and duck primary lung-derived cells infected with five influenza A virus subtypes. Analysis of qRT-PCR data from virus and mock infected cells using NormFinder and BestKeeper software programmes found that 18S rRNA was the most stable gene in HBECs, PTECs and avian lung cells. Conclusions: Based on the presented data from cell culture models (HBECs, PTECs, chicken and duck lung cells) infected with a range of influenza viruses, we found that 18S rRNA is the most stable reference gene for normalising qRT-PCR data. Expression levels of the other housekeeping genes evaluated in this study (including ACTB and GAPDH) were highly affected by influenza virus infection and

  3. Eigenvector/eigenvalue analysis of a 3D current referential fault detection and diagnosis of an induction motor

    International Nuclear Information System (INIS)

    Pires, V. Fernao; Martins, J.F.; Pires, A.J.

    2010-01-01

    In this paper an integrated approach for on-line induction motor fault detection and diagnosis is presented. The need to ensure continuous and safe operation of induction motors involves preventive maintenance procedures combined with fault diagnosis techniques. The proposed approach uses an automatic three-step algorithm. Firstly, the induction motor stator currents are measured, which gives typical patterns that can be used to identify the fault. Secondly, the eigenvectors/eigenvalues of the 3D current referential are computed. Finally, the proposed algorithm discerns whether the motor is healthy or not and reports the extent of the fault. Furthermore, this algorithm is able to identify distinct faults (stator winding faults or broken bars). The proposed approach was experimentally implemented and its performance verified under various working conditions.
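
    A hedged sketch of the eigen-analysis step only, assuming the three sampled phase currents are available as arrays; the decision thresholds and the mapping from eigen-features to specific faults are the authors' and are not reproduced:

        import numpy as np

        def current_pattern_eigen(ia, ib, ic):
            """Eigen-analysis of the 3-D stator-current referential.

            ia, ib, ic: equally sampled phase currents (1-D arrays).
            For a healthy, balanced machine the current locus lies close to a
            circle in the plane ia + ib + ic = 0, so one covariance eigenvalue is
            near zero and the other two are similar; faults distort this pattern,
            which shows up in both the eigenvalues and the eigenvectors.
            """
            samples = np.vstack([ia, ib, ic])   # shape (3, n_samples)
            cov = np.cov(samples)               # 3x3 covariance of the current locus
            eigvals, eigvecs = np.linalg.eigh(cov)
            return eigvals, eigvecs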

  4. Inference of financial networks using the normalised mutual information rate

    Science.gov (United States)

    2018-01-01

    In this paper, we study data from financial markets, using the normalised Mutual Information Rate. We show how to use it to infer the underlying network structure of interrelations in the foreign currency exchange rates and stock indices of 15 currency areas. We first present the mathematical method and discuss its computational aspects, and apply it to artificial data from chaotic dynamics and to correlated normal-variates data. We then apply the method to infer the structure of the financial system from the time-series of currency exchange rates and stock indices. In particular, we study and reveal the interrelations among the various foreign currency exchange rates and stock indices in two separate networks, of which we also study their structural properties. Our results show that both inferred networks are small-world networks, sharing similar properties and having differences in terms of assortativity. Importantly, our work shows that global economies tend to connect with other economies world-wide, rather than creating small groups of local economies. Finally, the consistent interrelations depicted among the 15 currency areas are further supported by a discussion from the viewpoint of economics. PMID:29420644
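
    As a simplified, illustrative counterpart to the method above, the following sketch infers a network from pairwise normalised mutual information between return series (the paper uses the mutual information rate, which additionally accounts for temporal structure); the bin count and threshold here are arbitrary choices:

        import numpy as np

        def normalised_mi(x, y, bins=16):
            """Histogram estimate of mutual information, normalised to roughly [0, 1]."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy /= pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            mi = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
            hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
            hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
            return mi / np.sqrt(hx * hy)

        def infer_network(series, threshold=0.2):
            """Adjacency matrix from pairwise normalised MI of the columns of `series`."""
            n = series.shape[1]
            adj = np.zeros((n, n), dtype=bool)
            for i in range(n):
                for j in range(i + 1, n):
                    adj[i, j] = adj[j, i] = normalised_mi(series[:, i], series[:, j]) > threshold
            return adj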

  5. Inference of financial networks using the normalised mutual information rate.

    Science.gov (United States)

    Goh, Yong Kheng; Hasim, Haslifah M; Antonopoulos, Chris G

    2018-01-01

    In this paper, we study data from financial markets, using the normalised Mutual Information Rate. We show how to use it to infer the underlying network structure of interrelations in the foreign currency exchange rates and stock indices of 15 currency areas. We first present the mathematical method and discuss its computational aspects, and apply it to artificial data from chaotic dynamics and to correlated normal-variates data. We then apply the method to infer the structure of the financial system from the time-series of currency exchange rates and stock indices. In particular, we study and reveal the interrelations among the various foreign currency exchange rates and stock indices in two separate networks, of which we also study their structural properties. Our results show that both inferred networks are small-world networks, sharing similar properties and having differences in terms of assortativity. Importantly, our work shows that global economies tend to connect with other economies world-wide, rather than creating small groups of local economies. Finally, the consistent interrelations depicted among the 15 currency areas are further supported by a discussion from the viewpoint of economics.

  6. Normalisation in product life cycle assessment: an LCA of the global and European economic systems in the year 2000.

    Science.gov (United States)

    Sleeswijk, Anneke Wegener; van Oers, Lauran F C M; Guinée, Jeroen B; Struijs, Jaap; Huijbregts, Mark A J

    2008-02-01

    In the methodological context of the interpretation of environmental life cycle assessment (LCA) results, a normalisation study was performed. 15 impact categories were accounted for, including climate change, acidification, eutrophication, human toxicity, ecotoxicity, depletion of fossil energy resources, and land use. The year 2000 was chosen as a reference year, and information was gathered on two spatial levels: the global and the European level. Of the 860 environmental interventions collected, 48 interventions turned out to account for at least 75% of the impact scores of all impact categories. All non-toxicity related, emission dependent impacts are fully dominated by the bulk emissions of only 10 substances or substance groups: CO2, CH4, SO2, NOx, NH3, PM10, NMVOC and (H)CFC emissions to air, and emissions of N- and P-compounds to fresh water. For the toxicity-related emissions (pesticides, organics, metal compounds and some specific inorganics), the availability of information was still very limited, leading to large uncertainty in the corresponding normalisation factors. Apart from their usefulness as a reference for LCA studies, the results of this study stress the importance of efficient measures to combat bulk emissions and to promote the registration of potentially toxic emissions on a more comprehensive scale.
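
    The normalisation step itself amounts to dividing a product system's impact-category scores by the corresponding reference-system totals (for example the global year-2000 scores derived in the study); the sketch below illustrates this with placeholder numbers, not values from the paper:

        def normalise_impacts(product_scores, reference_scores):
            """Express LCA impact scores as fractions of a reference system's totals.

            product_scores / reference_scores: dicts keyed by impact category,
            e.g. {'climate change': kg CO2-eq, 'acidification': kg SO2-eq, ...}.
            """
            return {cat: product_scores[cat] / reference_scores[cat]
                    for cat in product_scores}

        # Illustrative placeholder values only:
        normalised = normalise_impacts(
            {'climate change': 1.2e3, 'acidification': 4.0},
            {'climate change': 4.2e13, 'acidification': 3.2e11},
        )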

  7. No upward trend in normalised windstorm losses in Europe: 1970-2008

    Science.gov (United States)

    Barredo, J. I.

    2010-01-01

    On 18 January 2007, windstorm Kyrill battered Europe with hurricane-force winds, killing 47 people and causing US$10 billion in damage. Kyrill poses several questions: is Kyrill an isolated or exceptional case? Have there been events costing as much in the past? This paper attempts to put Kyrill into an historical context by examining large historical windstorm event losses in Europe for the period 1970-2008 across 29 European countries. It asks what economic losses these historical events would cause if they were to recur under 2008 societal conditions. Loss data were sourced from reinsurance firms and augmented with historical reports, peer-reviewed articles and other ancillary sources. Following the same conceptual approach outlined in previous studies, the data were then adjusted for changes in population, wealth, and inflation at the country level and for inter-country price differences using purchasing power parity. The analyses reveal no trend in the normalised windstorm losses and confirm that increasing disaster losses are driven by societal factors and increasing exposure.
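
    A hedged sketch of the conventional loss-normalisation adjustment described above (population, wealth and inflation), with the purchasing-power-parity correction omitted for brevity; the factor names are illustrative placeholders:

        def normalise_loss(loss_event_year, year_factors, ref_factors):
            """Adjust a historical windstorm loss to reference-year (2008) conditions.

            year_factors / ref_factors: dicts with 'population', 'wealth_per_capita'
            and 'price_level' for the event year and the reference year (per country).
            """
            return (loss_event_year
                    * ref_factors['population'] / year_factors['population']
                    * ref_factors['wealth_per_capita'] / year_factors['wealth_per_capita']
                    * ref_factors['price_level'] / year_factors['price_level'])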

  8. Good quality of oral anticoagulation treatment in general practice using international normalised ratio point of care testing

    DEFF Research Database (Denmark)

    Løkkegaard, Thomas; Pedersen, Tina Heidi; Lind, Bent

    2015-01-01

    INTRODUCTION: Oral anticoagulation treatment (OACT) with warfarin is common in general practice. Increasingly, international normalised ratio (INR) point of care testing (POCT) is being used to manage patients. The aim of this study was to describe and analyse the quality of OACT with warfarin ... collected retrospectively for a period of six months. For each patient, time in therapeutic range (TTR) was calculated and correlated with practice and patient characteristics using multilevel linear regression models. RESULTS: We identified 447 patients in warfarin treatment in the 20 practices using POCT ...

  9. EISPACK, Subroutines for Eigenvalues, Eigenvectors, Matrix Operations

    International Nuclear Information System (INIS)

    Garbow, Burton S.; Cline, A.K.; Meyering, J.

    1993-01-01

    1 - Description of problem or function: EISPACK3 is a collection of 75 FORTRAN subroutines, both single- and double-precision, that compute the eigenvalues and eigenvectors of nine classes of matrices. The package can determine the Eigen-system of complex general, complex Hermitian, real general, real symmetric, real symmetric band, real symmetric tridiagonal, special real tridiagonal, generalized real, and generalized real symmetric matrices. In addition, there are two routines which use the singular value decomposition to solve certain least squares problem. The individual subroutines are - Identification/Description: BAKVEC: Back transform vectors of matrix formed by FIGI; BALANC: Balance a real general matrix; BALBAK: Back transform vectors of matrix formed by BALANC; BANDR: Reduce sym. band matrix to sym. tridiag. matrix; BANDV: Find some vectors of sym. band matrix; BISECT: Find some values of sym. tridiag. matrix; BQR: Find some values of sym. band matrix; CBABK2: Back transform vectors of matrix formed by CBAL; CBAL: Balance a complex general matrix; CDIV: Perform division of two complex quantities; CG: Driver subroutine for a complex general matrix; CH: Driver subroutine for a complex Hermitian matrix; CINVIT: Find some vectors of complex Hess. matrix; COMBAK: Back transform vectors of matrix formed by COMHES; COMHES: Reduce complex matrix to complex Hess. (elementary); COMLR: Find all values of complex Hess. matrix (LR); COMLR2: Find all values/vectors of cmplx Hess. matrix (LR); CCMQR: Find all values of complex Hessenberg matrix (QR); COMQR2: Find all values/vectors of cmplx Hess. matrix (QR); CORTB: Back transform vectors of matrix formed by CORTH; CORTH: Reduce complex matrix to complex Hess. (unitary); CSROOT: Find square root of complex quantity; ELMBAK: Back transform vectors of matrix formed by ELMHES; ELMHES: Reduce real matrix to real Hess. (elementary); ELTRAN: Accumulate transformations from ELMHES (for HQR2); EPSLON: Estimate unit roundoff

  10. Microstructural characterisation of a P91 steel normalised and tempered at different temperatures

    International Nuclear Information System (INIS)

    Hurtado-Norena, C.; Danon, C.A.; Luppo, M.I.; Bruzzoni, P.

    2015-01-01

    9%Cr-1%Mo martensitic-ferritic steels are used in power plant components with operating temperatures of around 600 deg. C because of their good mechanical properties at high temperature as well as good oxidation resistance. These steels are generally used in the normalised and tempered condition. This treatment results in a structure of tempered lath martensite where the precipitates are distributed along the lath interfaces and within the martensite laths. The characterisation of these precipitates is of fundamental importance because of their relationship with the creep behaviour of these steels in service. In the present work, the different types of precipitates found in these steels have been studied on specimens in different metallurgical conditions. The techniques used in this investigation were X-ray diffraction with synchrotron light, scanning electron microscopy, energy dispersive microanalysis and transmission electron microscopy. (authors)

  11. Disrupted Brain Network in Progressive Mild Cognitive Impairment Measured by Eigenvector Centrality Mapping is Linked to Cognition and Cerebrospinal Fluid Biomarkers.

    Science.gov (United States)

    Qiu, Tiantian; Luo, Xiao; Shen, Zhujing; Huang, Peiyu; Xu, Xiaojun; Zhou, Jiong; Zhang, Minming

    2016-10-18

    Mild cognitive impairment (MCI) is a heterogeneous condition associated with a high risk of progressing to Alzheimer's disease (AD). Although functional brain network alterations have been observed in progressive MCI (pMCI), the underlying pathological mechanisms of network alterations remain unclear. In the present study, we evaluated neuropsychological, imaging, and cerebrospinal fluid (CSF) data at baseline across a cohort of: 21 pMCI patients, 33 stable MCI (sMCI) patients, and 29 normal controls. Fast eigenvector centrality mapping (fECM) based on resting-state functional MRI (rsfMRI) was used to investigate brain network organization differences among these groups, and we further assessed its relation to cognition and AD-related pathology. Our results demonstrated that pMCI had decreased eigenvector centrality (EC) in left temporal pole and parahippocampal gyrus, and increased EC in left middle frontal gyrus compared to sMCI. In addition, compared to normal controls, patients with pMCI showed decreased EC in right hippocampus and bilateral parahippocampal gyrus, and sMCI had decreased EC in right middle frontal gyrus and superior parietal lobule. Correlation analysis showed that EC in the left temporal pole was related to Wechsler Memory Scale-Revised Logical Memory (WMS-LM) delay score (r = 0.467, p = 0.044) and total tau (t-tau) level in CSF (r = -0.509, p = 0.026) in pMCI. Our findings implicate EC changes of different brain network nodes in the prognosis of pMCI and sMCI. Importantly, the association between decreased EC of brain network node and pathological changes may provide a deeper understanding of the underlying pathophysiology of pMCI.
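
    Eigenvector centrality itself is straightforward to compute once a connectivity matrix is in hand; the sketch below uses plain power iteration on a region-wise matrix and is illustrative only (fECM, as used in the study, is a fast approximation designed for full voxel-wise matrices and is not reproduced here):

        import numpy as np

        def eigenvector_centrality(conn, n_iter=200, tol=1e-9):
            """Eigenvector centrality of a non-negative connectivity matrix.

            conn: (n, n) symmetric matrix, e.g. positive correlations between
            regional rsfMRI time courses. Returns the dominant eigenvector,
            scaled to unit norm; larger entries mark more 'central' nodes.
            """
            c = np.ones(conn.shape[0]) / np.sqrt(conn.shape[0])
            for _ in range(n_iter):
                c_new = conn @ c
                c_new /= np.linalg.norm(c_new)
                if np.linalg.norm(c_new - c) < tol:
                    break
                c = c_new
            return c_new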

  12. Selection of reference genes for normalisation of real-time RT-PCR in brain-stem death injury in Ovis aries

    Directory of Open Access Journals (Sweden)

    Fraser John F

    2009-07-01

    Full Text Available Abstract Background Heart and lung transplantation is frequently the only therapeutic option for patients with end-stage cardiorespiratory disease. Organ donation post brain stem death (BSD) is a pre-requisite, yet BSD itself causes such severe damage that many organs offered for donation are unusable, with lung being the organ most affected by BSD. In Australia and New Zealand, less than 50% of lungs offered for donation post BSD are suitable for transplantation, as compared with over 90% of kidneys, resulting in patients dying for lack of suitable lungs. Our group has developed a novel 24 h sheep BSD model to mimic the physiological milieu of the typical human organ donor. Characterisation of the gene expression changes associated with BSD is critical and will assist in determining the aetiology of lung damage post BSD. Real-time PCR is a highly sensitive method involving multiple steps from extraction to processing RNA, so the choice of housekeeping genes is important in obtaining reliable results. Little information, however, is available on the expression stability of reference genes in the sheep pulmonary artery and lung. We aimed to establish a set of stably expressed reference genes for use as a standard for analysis of gene expression changes in BSD. Results We evaluated the expression stability of 6 candidate normalisation genes (ACTB, GAPDH, HGPRT, PGK1, PPIA and RPLP0) using real time quantitative PCR. There was a wide range of Ct-values within each tissue for pulmonary artery (15–24) and lung (16–25), but the expression pattern for each gene was similar across the two tissues. After geNorm analysis, ACTB and PPIA were shown to be the most stably expressed in the pulmonary artery and ACTB and PGK1 in the lung tissue of BSD sheep. Conclusion Accurate normalisation is critical in obtaining reliable and reproducible results in gene expression studies. This study demonstrates tissue-associated variability in the selection of these

  13. The one-dimensional normalised generalised equivalence theory (NGET) for generating equivalent diffusion theory group constants for PWR reflector regions

    International Nuclear Information System (INIS)

    Mueller, E.Z.

    1991-01-01

    An equivalent diffusion theory PWR reflector model is presented, which has as its basis Smith's generalisation of Koebke's Equivalence Theory. This method is an adaptation, in one-dimensional slab geometry, of the Generalised Equivalence Theory (GET). Since the method involves the renormalisation of the GET discontinuity factors at nodal interfaces, it is called the Normalised Generalised Equivalence Theory (NGET) method. The advantages of the NGET method for modelling the ex-core nodes of a PWR are summarized. 23 refs

  14. The moral experience of illness and its impact on normalisation: Examples from narratives with Punjabi women living with rheumatoid arthritis in the UK.

    Science.gov (United States)

    Sanderson, Tessa; Calnan, Michael; Kumar, Kanta

    2015-11-01

    The moral component of living with illness has been neglected in analyses of long-term illness experiences. This article attempts to fill this gap by exploring the role of the moral experience of illness in mediating the ability of those living with a long-term condition (LTC) to normalise. This is explored through an empirical study of women of Punjabi origin living with rheumatoid arthritis (RA) in the UK. Sixteen informants were recruited through three hospitals in UK cities, and interviews were conducted and analysed using a grounded theory approach. The intersection between moral experience and normalisation, within the broader context of ethnic, gender and socioeconomic influences, was evident in the following: disruption of a core lived value (the centrality of family duty), beliefs about illness causation affecting informants' 'moral career', and perceived discrimination in the workplace. The data illustrate the importance of considering an ethnic community's specific values and beliefs when understanding differences in adapting to LTCs and changing identities. © 2015 Foundation for the Sociology of Health & Illness.

  15. The stories we tell: qualitative research interviews, talking technologies and the 'normalisation' of life with HIV.

    Science.gov (United States)

    Mazanderani, Fadhila; Paparini, Sara

    2015-04-01

    Since the earliest days of the HIV/AIDS epidemic, talking about the virus has been a key way affected communities have challenged the fear and discrimination directed against them and pressed for urgent medical and political attention. Today, HIV/AIDS is one of the most prolifically and intimately documented of all health conditions, with entrenched infrastructures, practices and technologies--what Vinh-Kim Nguyen has dubbed 'confessional technologies'--aimed at encouraging those affected to share their experiences. Among these technologies, we argue, is the semi-structured interview: the principal methodology used in qualitative social science research focused on patient experiences. Taking the performative nature of the research interview as a talking technology seriously has epistemological implications not merely for how we interpret interview data, but also for how we understand the role of research interviews in the enactment of 'life with HIV'. This paper focuses on one crucial aspect of this enactment: the contemporary 'normalisation' of HIV as 'just another' chronic condition--a process taking place at the level of individual subjectivities, social identities, clinical practices and global health policy, and of which social science research is a vital part. Through an analysis of 76 interviews conducted in London (2009-10), we examine tensions in the experiential narratives of individuals living with HIV in which life with the virus is framed as 'normal', yet where this 'normality' is beset with contradictions and ambiguities. Rather than viewing these as a reflection of resistances to or failures of the enactment of HIV as 'normal', we argue that, insofar as these contradictions are generated by the research interview as a distinct 'talking technology', they emerge as crucial to the normative (re)production of what counts as 'living with HIV' (in the UK) and are an inherent part of the broader performative 'normalisation' of the virus. Copyright © 2015

  16. Analysis of structural correlations in a model binary 3D liquid through the eigenvalues and eigenvectors of the atomic stress tensors

    International Nuclear Information System (INIS)

    Levashov, V. A.

    2016-01-01

    It is possible to associate with every atom or molecule in a liquid its own atomic stress tensor. These atomic stress tensors can be used to describe liquids’ structures and to investigate the connection between structural and dynamic properties. In particular, atomic stresses make it possible to address atomic scale correlations relevant to the Green-Kubo expression for viscosity. Previously, correlations between the atomic stresses of different atoms were studied using the Cartesian representation of the stress tensors or the representation based on spherical harmonics. In this paper we address structural correlations in a 3D model binary liquid using the eigenvalues and eigenvectors of the atomic stress tensors. This approach allows us to interpret correlations relevant to the Green-Kubo expression for viscosity in a simple geometric way. On decrease of temperature, the changes in the relevant stress correlation function between different atoms are significantly more pronounced than the changes in the pair density function. We demonstrate that this behaviour originates from the orientational correlations between the eigenvectors of the atomic stress tensors. We also found correlations between the eigenvalues of the same atomic stress tensor. For the studied system, with purely repulsive interactions between the particles, the eigenvalues of every atomic stress tensor are positive and they can be ordered: λ1 ≥ λ2 ≥ λ3 ≥ 0. We found that, for the particles of a given type, the probability distributions of the ratios (λ2/λ1) and (λ3/λ2) are essentially identical to each other in the liquid state. We also found that λ2 tends to be equal to the geometric average of λ1 and λ3. In our view, correlations between the eigenvalues may represent “the Poisson ratio effect” at the atomic scale.

  17. Analysis of structural correlations in a model binary 3D liquid through the eigenvalues and eigenvectors of the atomic stress tensors

    Energy Technology Data Exchange (ETDEWEB)

    Levashov, V. A. [Technological Design Institute of Scientific Instrument Engineering, Novosibirsk 630058 (Russian Federation)

    2016-03-07

    It is possible to associate with every atom or molecule in a liquid its own atomic stress tensor. These atomic stress tensors can be used to describe liquids’ structures and to investigate the connection between structural and dynamic properties. In particular, atomic stresses make it possible to address atomic scale correlations relevant to the Green-Kubo expression for viscosity. Previously, correlations between the atomic stresses of different atoms were studied using the Cartesian representation of the stress tensors or the representation based on spherical harmonics. In this paper we address structural correlations in a 3D model binary liquid using the eigenvalues and eigenvectors of the atomic stress tensors. This approach allows us to interpret correlations relevant to the Green-Kubo expression for viscosity in a simple geometric way. On decrease of temperature, the changes in the relevant stress correlation function between different atoms are significantly more pronounced than the changes in the pair density function. We demonstrate that this behaviour originates from the orientational correlations between the eigenvectors of the atomic stress tensors. We also found correlations between the eigenvalues of the same atomic stress tensor. For the studied system, with purely repulsive interactions between the particles, the eigenvalues of every atomic stress tensor are positive and they can be ordered: λ1 ≥ λ2 ≥ λ3 ≥ 0. We found that, for the particles of a given type, the probability distributions of the ratios (λ2/λ1) and (λ3/λ2) are essentially identical to each other in the liquid state. We also found that λ2 tends to be equal to the geometric average of λ1 and λ3. In our view, correlations between the eigenvalues may represent “the Poisson ratio effect” at the atomic scale.
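
    A minimal NumPy sketch of the eigen-analysis described in these two records: diagonalise a symmetric 3x3 atomic stress tensor, order the eigenvalues as λ1 ≥ λ2 ≥ λ3, and form the ratios and the geometric average discussed in the abstract. The tensor values are illustrative placeholders, not data from the simulated liquid.

      import numpy as np

      # Illustrative symmetric 3x3 "atomic stress tensor" (made-up values)
      sigma = np.array([[2.0, 0.3, 0.1],
                        [0.3, 1.5, 0.2],
                        [0.1, 0.2, 1.0]])

      vals, vecs = np.linalg.eigh(sigma)     # ascending eigenvalues, orthonormal eigenvectors
      order = np.argsort(vals)[::-1]         # reorder so that l1 >= l2 >= l3
      l1, l2, l3 = vals[order]
      e1, e2, e3 = vecs[:, order].T          # corresponding eigenvectors

      print("l2/l1 =", l2 / l1, " l3/l2 =", l3 / l2)
      print("l2 =", l2, " sqrt(l1*l3) =", np.sqrt(l1 * l3))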

  18. Rational parametrisation of normalised Stiefel manifolds, and explicit non-'t Hooft solutions of the Atiyah-Drinfeld-Hitchin-Manin instanton matrix equations for Sp(n)

    International Nuclear Information System (INIS)

    McCarthy, P.J.

    1981-01-01

    It is proved that normalised Stiefel manifolds admit a rational parametrisation which generalises Cayley's parametrisation of the unitary groups. Applying (the quaternionic case of) this parametrisation to the Atiyah-Drinfeld-Hitchin-Manin (ADHM) instanton matrix equations, large families of new explicit rational solutions emerge. In particular, new explicit non-'t Hooft solutions are presented. (orig.)
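
    For reference, the classical Cayley parametrisation that the record says is being generalised can be stated as follows; the extension to normalised Stiefel manifolds and to the quaternionic ADHM setting is specific to the paper and is not reproduced here.

      % Cayley parametrisation of the unitary group U(n):
      % for any skew-Hermitian A (A^\dagger = -A), I + A is invertible and
      %     U = (I - A)(I + A)^{-1}
      % is unitary; every unitary U without eigenvalue -1 arises in this way.
      U = (I - A)(I + A)^{-1}, \qquad A^{\dagger} = -A .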

  19. Normalised Mutual Information of High-Density Surface Electromyography during Muscle Fatigue

    Directory of Open Access Journals (Sweden)

    Adrian Bingham

    2017-12-01

    Full Text Available This study has developed a technique for identifying the presence of muscle fatigue based on the spatial changes of the normalised mutual information (NMI) between multiple high density surface electromyography (HD-sEMG) channels. Muscle fatigue in the tibialis anterior (TA) during isometric contractions at 40% and 80% maximum voluntary contraction levels was investigated in ten healthy participants (Age range: 21 to 35 years; Mean age = 26 years; Male = 4, Female = 6). HD-sEMG was used to record 64 channels of sEMG using a 16 by 4 electrode array placed over the TA. The NMI of each electrode with every other electrode was calculated to form an NMI distribution for each electrode. The total NMI for each electrode (the summation of the electrode’s NMI distribution) highlighted regions of high dependence in the electrode array and was observed to increase as the muscle fatigued. To summarise this increase, a function, M(k), was defined and was found to be significantly affected by fatigue and not by contraction force. The technique discussed in this study has overcome issues regarding electrode placement and was used to investigate how the dependences between sEMG signals within the same muscle change spatially during fatigue.
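
    A hedged Python sketch of the pairwise NMI computation described above, using a simple histogram estimate; the binning, the normalisation convention (MI divided by the mean of the marginal entropies) and the toy channel layout are assumptions, not the paper's exact processing pipeline.

      import numpy as np

      def normalised_mutual_information(x, y, bins=32):
          # Histogram estimate of mutual information between two signals,
          # normalised by the mean of the two marginal entropies (assumed convention).
          joint, _, _ = np.histogram2d(x, y, bins=bins)
          pxy = joint / joint.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          nz = pxy > 0
          mi = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
          hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
          hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
          return mi / (0.5 * (hx + hy))

      # Toy "64-channel" recording (channels x samples); NMI of channel 0 with the rest
      rng = np.random.default_rng(0)
      emg = rng.standard_normal((64, 5000))
      nmi_0 = [normalised_mutual_information(emg[0], emg[k]) for k in range(1, 64)]
      total_nmi_0 = sum(nmi_0)    # the "total NMI" for electrode 0 described in the abstract
      print(total_nmi_0)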

  20. Aberrant brain responses to emotionally valent words is normalised after cognitive behavioural therapy in female depressed adolescents.

    Science.gov (United States)

    Chuang, Jie-Yu; J Whitaker, Kirstie; Murray, Graham K; Elliott, Rebecca; Hagan, Cindy C; Graham, Julia Me; Ooi, Cinly; Tait, Roger; Holt, Rosemary J; van Nieuwenhuizen, Adrienne O; Reynolds, Shirley; Wilkinson, Paul O; Bullmore, Edward T; Lennox, Belinda R; Sahakian, Barbara J; Goodyer, Ian; Suckling, John

    2016-01-01

    Depression in adolescence is debilitating with high recurrence in adulthood, yet its pathophysiological mechanism remains enigmatic. To examine the interaction between emotion, cognition and treatment, functional brain responses to sad and happy distractors in an affective go/no-go task were explored before and after Cognitive Behavioural Therapy (CBT) in depressed female adolescents, and healthy participants. Eighty-two depressed and 24 healthy female adolescents, aged 12-17 years, performed a functional magnetic resonance imaging (fMRI) affective go/no-go task at baseline. Participants were instructed to withhold their responses upon seeing happy or sad words. Among these participants, 13 patients had CBT over approximately 30 weeks. These participants and 20 matched controls then repeated the task. At baseline, increased activation in response to happy relative to neutral distractors was observed in the orbitofrontal cortex in depressed patients, which was normalised after CBT. No significant group differences were found behaviourally or in brain activation in response to sad distractors. Improvements in symptoms (mean: 9.31, 95% CI: 5.35-13.27) were related at trend level to activation changes in the orbitofrontal cortex. In the follow-up section, a limited number of post-CBT patients were recruited. To our knowledge, this is the first fMRI study addressing the effect of CBT in adolescent depression. Although a bias toward negative information is widely accepted as a hallmark of depression, aberrant brain hyperactivity to positive distractors was found and normalised after CBT. Research, assessment and treatment focused on positive stimuli could be a future consideration. Moreover, a pathophysiological mechanism distinct from adult depression may be suggested and awaits further exploration. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  1. Does normalisation improve the diagnostic performance of apparent diffusion coefficient values for prostate cancer assessment? A blinded independent-observer evaluation

    International Nuclear Information System (INIS)

    Rosenkrantz, A.B.; Khalef, V.; Xu, W.; Babb, J.S.; Taneja, S.S.; Doshi, A.M.

    2015-01-01

    Aim: To evaluate the performance of normalised apparent diffusion coefficient (ADC) values for prostate cancer assessment when performed by independent observers blinded to histopathology findings. Materials and methods: Fifty-eight patients undergoing 3 T phased-array coil magnetic resonance imaging (MRI) including diffusion-weighted imaging (DWI; maximal b-value 1000 s/mm²) before prostatectomy were included. Two radiologists independently evaluated the images, unaware of the histopathology findings. Regions of interest (ROIs) were drawn within areas showing visually low ADC within the peripheral zone (PZ) and transition zone (TZ) bilaterally. ROIs were also placed within regions in both lobes not suspicious for tumour, allowing computation of normalised ADC (nADC) ratios between suspicious and non-suspicious regions. The diagnostic performance of ADC and nADC were compared. Results: For PZ tumour detection, ADC achieved significantly higher area under the receiver operating characteristic curve (AUC; p=0.026) and specificity (p=0.021) than nADC for reader 1, and significantly higher AUC (p=0.025) than nADC for reader 2. For TZ tumour detection, nADC achieved significantly higher specificity (p=0.003) and accuracy (p=0.004) than ADC for reader 2. For PZ Gleason score >3+3 tumour detection, ADC achieved significantly higher AUC (p=0.003) and specificity (p=0.005) than nADC for reader 1, and significantly higher AUC (p=0.023) than nADC for reader 2. For TZ Gleason score >3+3 tumour detection, ADC achieved significantly higher specificity (p=0.019) than nADC for reader 1. Conclusion: In contrast to prior studies performing unblinded evaluations, ADC was observed to outperform nADC overall for two independent observers blinded to the histopathology findings. Therefore, although strategies to improve the utility of ADC measurements in prostate cancer assessment merit continued investigation, caution is warranted when applying normalisation to improve diagnostic
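
    Read literally, the nADC described above is the ratio of the ADC measured in a suspicious region of interest to that in a non-suspicious reference region in the same gland; a one-function Python sketch under that reading (the ROI values below are illustrative, in units of 10^-3 mm^2/s):

      import numpy as np

      def normalised_adc(adc_suspicious_roi, adc_reference_roi):
          # nADC as the ratio of mean ADC in the suspicious ROI to mean ADC
          # in a non-suspicious reference ROI (assumed convention).
          return np.mean(adc_suspicious_roi) / np.mean(adc_reference_roi)

      print(normalised_adc([0.75, 0.80, 0.78], [1.45, 1.50, 1.48]))   # about 0.53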

  2. Using normalisation process theory to understand barriers and facilitators to implementing mindfulness-based stress reduction for people with multiple sclerosis.

    Science.gov (United States)

    Simpson, Robert; Simpson, Sharon; Wood, Karen; Mercer, Stewart W; Mair, Frances S

    2018-01-01

    Objectives To study barriers and facilitators to implementation of mindfulness-based stress reduction for people with multiple sclerosis. Methods Qualitative interviews were used to explore barriers and facilitators to implementation of mindfulness-based stress reduction, including 33 people with multiple sclerosis, 6 multiple sclerosis clinicians and 2 course instructors. Normalisation process theory provided the underpinning conceptual framework. Data were analysed deductively using normalisation process theory constructs (coherence, cognitive participation, collective action and reflexive monitoring). Results Key barriers included mismatched stakeholder expectations, lack of knowledge about mindfulness-based stress reduction, high levels of comorbidity and disability, and scepticism about embedding mindfulness-based stress reduction in routine multiple sclerosis care. Facilitators to implementation included introducing a pre-course orientation session and adapting mindfulness-based stress reduction to accommodate comorbidity and disability; participants suggested smaller, shorter classes, shortened practices, exclusion of mindful walking and more time with peers. Post-mindfulness-based stress reduction booster sessions may be required, and objective and subjective reports of benefit would increase clinician confidence in mindfulness-based stress reduction. Discussion Multiple sclerosis patients and clinicians know little about mindfulness-based stress reduction. Mismatched expectations are a barrier to participation, as is rigid application of mindfulness-based stress reduction in the context of disability. Course adaptations in response to patient needs would facilitate uptake and utilisation. Rendering access to mindfulness-based stress reduction rapid and flexible could facilitate implementation. Embedded outcome assessment is desirable.

  3. From theory to 'measurement' in complex interventions: Methodological lessons from the development of an e-health normalisation instrument

    Directory of Open Access Journals (Sweden)

    Finch Tracy L

    2012-05-01

    Full Text Available Abstract Background Although empirical and theoretical understanding of processes of implementation in health care is advancing, translation of theory into structured measures that capture the complex interplay between interventions, individuals and context remains limited. This paper aimed to (1) describe the process and outcome of a project to develop a theory-based instrument for measuring implementation processes relating to e-health interventions; and (2) identify key issues and methodological challenges for advancing work in this field. Methods A 30-item instrument (Technology Adoption Readiness Scale (TARS)) for measuring normalisation processes in the context of e-health service interventions was developed on the basis of Normalization Process Theory (NPT). NPT focuses on how new practices become routinely embedded within social contexts. The instrument was pre-tested in two health care settings in which e-health (electronic facilitation of healthcare decision-making and practice) was used by health care professionals. Results The developed instrument was pre-tested in two professional samples (N = 46; N = 231). Ratings of items representing normalisation ‘processes’ were significantly related to staff members’ perceptions of whether or not e-health had become ‘routine’. Key methodological challenges are discussed in relation to: translating multi-component theoretical constructs into simple questions; developing and choosing appropriate outcome measures; conducting multiple-stakeholder assessments; instrument and question framing; and more general issues for instrument development in practice contexts. Conclusions To develop theory-derived measures of implementation process for progressing research in this field, four key recommendations are made relating to (1) greater attention to underlying theoretical assumptions and extent of translation work required; (2) the need for appropriate but flexible approaches to outcomes

  4. From theory to 'measurement' in complex interventions: methodological lessons from the development of an e-health normalisation instrument.

    Science.gov (United States)

    Finch, Tracy L; Mair, Frances S; O'Donnell, Catherine; Murray, Elizabeth; May, Carl R

    2012-05-17

    Although empirical and theoretical understanding of processes of implementation in health care is advancing, translation of theory into structured measures that capture the complex interplay between interventions, individuals and context remains limited. This paper aimed to (1) describe the process and outcome of a project to develop a theory-based instrument for measuring implementation processes relating to e-health interventions; and (2) identify key issues and methodological challenges for advancing work in this field. A 30-item instrument (Technology Adoption Readiness Scale (TARS)) for measuring normalisation processes in the context of e-health service interventions was developed on the basis of Normalization Process Theory (NPT). NPT focuses on how new practices become routinely embedded within social contexts. The instrument was pre-tested in two health care settings in which e-health (electronic facilitation of healthcare decision-making and practice) was used by health care professionals. The developed instrument was pre-tested in two professional samples (N=46; N=231). Ratings of items representing normalisation 'processes' were significantly related to staff members' perceptions of whether or not e-health had become 'routine'. Key methodological challenges are discussed in relation to: translating multi-component theoretical constructs into simple questions; developing and choosing appropriate outcome measures; conducting multiple-stakeholder assessments; instrument and question framing; and more general issues for instrument development in practice contexts. To develop theory-derived measures of implementation process for progressing research in this field, four key recommendations are made relating to (1) greater attention to underlying theoretical assumptions and extent of translation work required; (2) the need for appropriate but flexible approaches to outcomes measurement; (3) representation of multiple perspectives and collaborative nature of

  5. The Physical Driver of the Optical Eigenvector 1 in Quasar Main Sequence

    Energy Technology Data Exchange (ETDEWEB)

    Panda, Swayamtrupta; Czerny, Bożena [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland); Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, Warsaw (Poland); Wildy, Conor, E-mail: panda@cft.edu.pl [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland)

    2017-11-07

    Quasars are complex sources, characterized by broad band spectra from radio through optical to X-ray band, with numerous emission and absorption features. This complexity leads to rich diagnostics. However, Boroson and Green (1992) used Principal Component Analysis (PCA), and with this analysis they were able to show significant correlations between the measured parameters. The leading component, related to Eigenvector 1 (EV1), was dominated by the anticorrelation between the FeII optical emission and the [OIII] line, and EV1 alone contained 30% of the total variance. This opened the way to defining a quasar main sequence, in close analogy to the stellar main sequence on the Hertzsprung-Russell (HR) diagram (Sulentic et al., 2001). The question still remains which of the basic theoretically motivated parameters of an active nucleus (Eddington ratio, black hole mass, accretion rate, spin, and viewing angle) is the main driver behind the EV1. Here we limit ourselves to the optical waveband, and concentrate on theoretical modeling of the FeII to Hβ ratio, and we test the hypothesis that the physical driver of EV1 is the maximum of the accretion disk temperature, reflected in the shape of the spectral energy distribution (SED). We performed computations of the Hβ and optical FeII for a broad range of SED peak positions using the CLOUDY photoionisation code. We assumed that both Hβ and FeII emission come from the Broad Line Region represented as a constant density cloud in a plane-parallel geometry. We expected that a hotter disk continuum would lead to more efficient production of FeII, but our computations show that the FeII to Hβ ratio actually drops with the rise of the disk temperature. Thus either the hypothesis is incorrect, or the approximations used in our paper for the description of the line emissivity are inadequate.

  6. The Physical Driver of the Optical Eigenvector 1 in Quasar Main Sequence

    Directory of Open Access Journals (Sweden)

    Swayamtrupta Panda

    2017-11-01

    Full Text Available Quasars are complex sources, characterized by broad band spectra from radio through optical to X-ray band, with numerous emission and absorption features. This complexity leads to rich diagnostics. However, Boroson and Green (1992) used Principal Component Analysis (PCA), and with this analysis they were able to show significant correlations between the measured parameters. The leading component, related to Eigenvector 1 (EV1), was dominated by the anticorrelation between the FeII optical emission and the [OIII] line, and EV1 alone contained 30% of the total variance. This opened the way to defining a quasar main sequence, in close analogy to the stellar main sequence on the Hertzsprung-Russell (HR) diagram (Sulentic et al., 2001). The question still remains which of the basic theoretically motivated parameters of an active nucleus (Eddington ratio, black hole mass, accretion rate, spin, and viewing angle) is the main driver behind the EV1. Here we limit ourselves to the optical waveband, and concentrate on theoretical modeling of the FeII to Hβ ratio, and we test the hypothesis that the physical driver of EV1 is the maximum of the accretion disk temperature, reflected in the shape of the spectral energy distribution (SED). We performed computations of the Hβ and optical FeII for a broad range of SED peak positions using the CLOUDY photoionisation code. We assumed that both Hβ and FeII emission come from the Broad Line Region represented as a constant density cloud in a plane-parallel geometry. We expected that a hotter disk continuum would lead to more efficient production of FeII, but our computations show that the FeII to Hβ ratio actually drops with the rise of the disk temperature. Thus either the hypothesis is incorrect, or the approximations used in our paper for the description of the line emissivity are inadequate.
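
    The Eigenvector 1 in these two records is the leading eigenvector of a PCA over measured quasar parameters. A minimal Python sketch of that step on a standardised parameter matrix follows; the matrix here is random placeholder data (rows = objects, columns = measured quantities), not the Boroson and Green (1992) sample.

      import numpy as np

      rng = np.random.default_rng(1)
      X = rng.standard_normal((200, 5))               # 200 objects, 5 measured parameters
      Xs = (X - X.mean(axis=0)) / X.std(axis=0)       # standardise each parameter
      evals, evecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
      ev1 = evecs[:, -1]                              # Eigenvector 1: largest-variance direction
      explained = evals[-1] / evals.sum()             # fraction of total variance carried by EV1
      print(ev1, explained)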

  7. The normalisation of terror: the response of Israel's stock market to long periods of terrorism.

    Science.gov (United States)

    Peleg, Kobi; Regens, James L; Gunter, James T; Jaffe, Dena H

    2011-01-01

    Man-made disasters such as acts of terrorism may affect a society's resiliency and sensitivity to prolonged physical and psychological stress. The Israeli Tel Aviv stock market TA-100 Index was used as an indicator of reactivity to suicide terror bombings. After accounting for factors such as world market changes and attack severity and intensity, the analysis reveals that although Israel's financial base remained sensitive to each act of terror across the entire period of the Second Intifada (2000-06), sustained psychological resilience was indicated with no apparent overall market shift. In other words, we saw a 'normalisation of terror' following an extended period of continued suicide bombings. The results suggest that investors responded to less transitory global market forces, indicating sustained resilience and long-term market confidence. Future studies directly measuring investor expectations and reactions to man-made disasters, such as terrorism, are warranted. © 2011 The Author(s). Disasters © Overseas Development Institute, 2011.

  8. Eigenvector Spatial Filtering Regression Modeling of Ground PM2.5 Concentrations Using Remotely Sensed Data

    Directory of Open Access Journals (Sweden)

    Jingyi Zhang

    2018-06-01

    Full Text Available This paper proposes a regression model using the Eigenvector Spatial Filtering (ESF) method to estimate ground PM2.5 concentrations. Covariates are derived from remotely sensed data including aerosol optical depth, normalized difference vegetation index, surface temperature, air pressure, relative humidity, height of planetary boundary layer and digital elevation model. In addition, cultural variables such as factory densities and road densities are also used in the model. With the Yangtze River Delta region as the study area, we constructed ESF-based Regression (ESFR) models at different time scales, using data for the period between December 2015 and November 2016. We found that the ESFR models effectively filtered spatial autocorrelation in the OLS residuals and resulted in increases in the goodness-of-fit metrics as well as reductions in residual standard errors and cross-validation errors, compared to the classic OLS models. The annual ESFR model explained 70% of the variability in PM2.5 concentrations, 16.7% more than the non-spatial OLS model. With the ESFR models, we performed detailed analyses on the spatial and temporal distributions of PM2.5 concentrations in the study area. The model predictions are lower than ground observations but match the general trend. The experiment shows that ESFR provides a promising approach to PM2.5 analysis and prediction.

  9. Eigenvector Spatial Filtering Regression Modeling of Ground PM2.5 Concentrations Using Remotely Sensed Data.

    Science.gov (United States)

    Zhang, Jingyi; Li, Bin; Chen, Yumin; Chen, Meijie; Fang, Tao; Liu, Yongfeng

    2018-06-11

    This paper proposes a regression model using the Eigenvector Spatial Filtering (ESF) method to estimate ground PM2.5 concentrations. Covariates are derived from remotely sensed data including aerosol optical depth, normalized difference vegetation index, surface temperature, air pressure, relative humidity, height of planetary boundary layer and digital elevation model. In addition, cultural variables such as factory densities and road densities are also used in the model. With the Yangtze River Delta region as the study area, we constructed ESF-based Regression (ESFR) models at different time scales, using data for the period between December 2015 and November 2016. We found that the ESFR models effectively filtered spatial autocorrelation in the OLS residuals and resulted in increases in the goodness-of-fit metrics as well as reductions in residual standard errors and cross-validation errors, compared to the classic OLS models. The annual ESFR model explained 70% of the variability in PM2.5 concentrations, 16.7% more than the non-spatial OLS model. With the ESFR models, we performed detailed analyses on the spatial and temporal distributions of PM2.5 concentrations in the study area. The model predictions are lower than ground observations but match the general trend. The experiment shows that ESFR provides a promising approach to PM2.5 analysis and prediction.
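
    A hedged Python sketch of the core ESF step used in these two records: eigen-decompose the doubly-centred spatial weights matrix, take eigenvectors associated with large positive eigenvalues (Moran eigenvectors), and append them to the OLS design matrix. The binary contiguity weights and the simple "k largest eigenvalues" selection below are assumptions; published ESF applications typically use a stepwise, Moran's-I-based selection.

      import numpy as np

      def esf_design(X, W, k=5):
          # Append the k leading Moran eigenvectors of the doubly-centred spatial
          # weights matrix W to the covariate matrix X.
          n = W.shape[0]
          M = np.eye(n) - np.ones((n, n)) / n       # centring projector
          MWM = M @ W @ M
          evals, evecs = np.linalg.eigh((MWM + MWM.T) / 2)
          idx = np.argsort(evals)[::-1][:k]         # k largest (positive-autocorrelation) eigenvectors
          return np.hstack([X, evecs[:, idx]])

      # Toy example: 50 locations on a line with binary adjacency weights
      n = 50
      W = np.zeros((n, n))
      for i in range(n - 1):
          W[i, i + 1] = W[i + 1, i] = 1.0
      rng = np.random.default_rng(2)
      X = np.column_stack([np.ones(n), rng.standard_normal(n)])          # intercept + one covariate
      y = rng.standard_normal(n)                                         # placeholder response
      beta, *_ = np.linalg.lstsq(esf_design(X, W, k=5), y, rcond=None)   # OLS with ESF terms
      print(beta)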

  10. Implementation of the SMART MOVE intervention in primary care: a qualitative study using normalisation process theory.

    Science.gov (United States)

    Glynn, Liam G; Glynn, Fergus; Casey, Monica; Wilkinson, Louise Gaffney; Hayes, Patrick S; Heaney, David; Murphy, Andrew W M

    2018-05-02

    Problematic translational gaps continue to exist between demonstrating the positive impact of healthcare interventions in research settings and their implementation into routine daily practice. The aim of this qualitative evaluation of the SMART MOVE trial was to conduct a theoretically informed analysis, using normalisation process theory, of the potential barriers and levers to the implementation of an mHealth intervention to promote physical activity in primary care. The study took place in the West of Ireland with recruitment in the community from the Clare Primary Care Network. SMART MOVE trial participants and the staff from four primary care centres were invited to take part and all agreed to do so. A qualitative methodology was utilised, combining focus groups (general practitioners, practice nurses and non-clinical staff from four separate primary care centres, n = 14) and individual semi-structured interviews (intervention and control SMART MOVE trial participants, n = 4), with purposeful sampling and analysis based on the principles of Framework Analysis. The Normalisation Process Theory was used to develop the topic guide for the interviews and also informed the data analysis process. Four themes emerged from the analysis: personal and professional exercise strategies; roles and responsibilities to support active engagement; utilisation challenges; and evaluation, adoption and adherence. It was evident that introducing a new healthcare intervention demands a comprehensive evaluation of the intervention itself and also the environment in which it is to operate. Despite certain obstacles, the opportunity exists for the successful implementation of a novel healthcare intervention that addresses a hitherto unresolved healthcare need, provided that the intervention has strong usability attributes for both disseminators and target users and coheres strongly with the core objectives and culture of the health care environment in which it is to operate. We

  11. Clinical, immunological and treatment-related factors associated with normalised CD4+/CD8+ T-cell ratio: effect of naïve and memory T-cell subsets.

    LENUS (Irish Health Repository)

    Tinago, Willard

    2014-01-01

    Although effective antiretroviral therapy (ART) increases CD4+ T-cell count, responses to ART vary considerably and only a minority of patients normalise their CD4+/CD8+ ratio. Although retention of naïve CD4+ T-cells is thought to predict better immune responses, relationships between CD4+ and CD8+ T-cell subsets and CD4+/CD8+ ratio have not been well described.

  12. An application of Extended Normalisation Process Theory in a randomised controlled trial of a complex social intervention: Process evaluation of the Strengthening Families Programme (10–14) in Wales, UK

    Directory of Open Access Journals (Sweden)

    Jeremy Segrott

    2017-12-01

    Conclusions: Extended Normalisation Process Theory provided a useful framework for assessing implementation and explaining variation by examining intervention-context interactions. Findings highlight the need for process evaluations to consider both the structural and process components of implementation to explain whether programme activities are delivered as intended and why.

  13. Identifying Stable Reference Genes for qRT-PCR Normalisation in Gene Expression Studies of Narrow-Leafed Lupin (Lupinus angustifolius L.).

    Directory of Open Access Journals (Sweden)

    Candy M Taylor

    Full Text Available Quantitative Reverse Transcription PCR (qRT-PCR) is currently one of the most popular, high-throughput and sensitive technologies available for quantifying gene expression. Its accurate application depends heavily upon normalisation of gene-of-interest data with reference genes that are uniformly expressed under experimental conditions. The aim of this study was to provide the first validation of reference genes for Lupinus angustifolius (narrow-leafed lupin), a significant grain legume crop, using a selection of seven genes previously trialed as reference genes for the model legume, Medicago truncatula. In a preliminary evaluation, the seven candidate reference genes were assessed on the basis of primer specificity for their respective targeted region, PCR amplification efficiency, and ability to discriminate between cDNA and gDNA. Following this assessment, expression of the three most promising candidates [Ubiquitin C (UBC), Helicase (HEL), and Polypyrimidine tract-binding protein (PTB)] was evaluated using the NormFinder and RefFinder statistical algorithms in two narrow-leafed lupin lines, both with and without vernalisation treatment, and across seven organ types (cotyledons, stem, leaves, shoot apical meristem, flowers, pods and roots) encompassing three developmental stages. UBC was consistently identified as the most stable candidate and has sufficiently uniform expression that it may be used as a sole reference gene under the experimental conditions tested here. However, as organ type and developmental stage were associated with greater variability in relative expression, it is recommended using UBC and HEL as a pair to achieve optimal normalisation. These results highlight the importance of rigorously assessing candidate reference genes for each species across a diverse range of organs and developmental stages. With emerging technologies, such as RNAseq, and the completion of valuable transcriptome data sets, it is possible that other

  14. The economic costs of natural disasters globally from 1900-2015: historical and normalised floods, storms, earthquakes, volcanoes, bushfires, drought and other disasters

    Science.gov (United States)

    Daniell, James; Wenzel, Friedemann; Schaefer, Andreas

    2016-04-01

    For the first time, a breakdown of natural disaster losses from 1900-2015, based on over 30,000 event economic losses globally, is given based on increased analysis within the CATDAT Damaging Natural Disaster databases. Using country-CPI and GDP deflator adjustments, over 7 trillion USD (2015-adjusted) in losses have occurred; over 40% due to flood/rainfall, 26% due to earthquake, 19% due to storm effects, 12% due to drought, 2% due to wildfire and under 1% due to volcano. Using construction cost indices, higher percentages of flood losses are seen. Depending on how the adjustment of dollars is made to 2015 terms (CPI vs. construction cost indices), between 6.5 and 14.0 trillion USD (2015-adjusted) of natural disaster losses have been seen from 1900-2015 globally. Significant reductions in economic losses have been seen in China and Japan from 1950 onwards. An AAL of around 200 billion USD over the last 16 years has been seen, equating to around 0.25% of global GDP or around 0.1% of net capital stock per year. Normalised losses have also been calculated to examine the trends in vulnerability through time for economic losses. The global normalisation methodology using the exposure databases within CATDAT, undertaken previously in papers for the earthquake and volcano databases, is used for this study. When the original event-year losses are adjusted directly by capital stock change, very high losses are observed for floods over time (although flood control structures have improved). This shows clear trends in the improvement of building stock with respect to natural disasters and a decreasing trend for most perils in most countries.
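
    A minimal sketch of the kind of normalisation described above: inflate a nominal event-year loss to 2015 prices with a price index, then scale by the growth in exposure (capital stock or GDP) since the event. The numbers are placeholders, not values from the CATDAT database.

      def normalise_loss(loss_event_year, price_index_ratio, exposure_growth_ratio):
          # price_index_ratio:     e.g. CPI(2015) / CPI(event year)
          # exposure_growth_ratio: e.g. capital stock(2015) / capital stock(event year)
          return loss_event_year * price_index_ratio * exposure_growth_ratio

      # e.g. a 1.0 billion USD nominal loss in 1970, prices ~6x higher by 2015,
      # capital stock in the affected region ~4x larger
      print(normalise_loss(1.0e9, 6.0, 4.0))   # 2.4e10 in normalised 2015 terms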

  15. Attention training normalises combat-related post-traumatic stress disorder effects on emotional Stroop performance using lexically matched word lists.

    Science.gov (United States)

    Khanna, Maya M; Badura-Brack, Amy S; McDermott, Timothy J; Shepherd, Alex; Heinrichs-Graham, Elizabeth; Pine, Daniel S; Bar-Haim, Yair; Wilson, Tony W

    2015-08-26

    We examined two groups of combat veterans, one with post-traumatic stress disorder (PTSD) (n = 27) and another without PTSD (n = 16), using an emotional Stroop task (EST) with word lists matched across a series of lexical variables (e.g. length, frequency, neighbourhood size, etc.). Participants with PTSD exhibited a strong EST effect (longer colour-naming latencies for combat-relevant words as compared to neutral words). Veterans without PTSD produced no such effect, t  .37. Participants with PTSD then completed eight sessions of attention training (Attention Control Training or Attention Bias Modification Training) with a dot-probe task utilising threatening and neutral faces. After training, participants, especially those undergoing Attention Control Training, no longer produced longer colour-naming latencies for combat-related words as compared to other words, indicating normalised attention allocation processes after treatment.

  16. Technical Note: On methodologies for determining the size-normalised weight of planktic foraminifera

    Directory of Open Access Journals (Sweden)

    C. J. Beer

    2010-07-01

    Full Text Available The size-normalised weight (SNW) of planktic foraminifera, a measure of test wall thickness and density, is potentially a valuable palaeo-proxy for marine carbon chemistry. As increasing attention is given to developing this proxy, it is important that methods are comparable between studies. Here, we compare SNW data generated using two different methods to account for variability in test size, namely (i) the narrow (50 μm) range sieve fraction method and (ii) the individually measured test size method. Using specimens from the 200–250 μm sieve fraction range collected in multinet samples from the North Atlantic, we find that sieving does not constrain size sufficiently well to isolate changes in weight driven by variations in test wall thickness and density from those driven by size. We estimate that the SNW data produced as part of this study are associated with an uncertainty, or error bar, of about ±11%. Errors associated with the narrow sieve fraction method may be reduced by decreasing the size of the sieve window, by using larger tests and by increasing the number of tests employed. In situations where numerous large tests are unavailable, however, substantial errors associated with this sieve method remain unavoidable. In such circumstances the individually measured test size method provides a better means for estimating SNW because, as our results show, this method isolates changes in weight driven by variations in test wall thickness and density from those driven by size.
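
    A hedged Python sketch of the individually measured test size method discussed above: each test's weight is rescaled to a common reference size before averaging. The choice of a silhouette-diameter size metric and of a simple linear scaling is an assumption here, not the paper's exact protocol.

      import numpy as np

      def size_normalised_weight(weights_ug, diameters_um, reference_diameter_um=225.0):
          # Rescale each test's weight to the reference diameter (assumed linear
          # weight-size relation), then average over the measured individuals.
          weights = np.asarray(weights_ug, dtype=float)
          diameters = np.asarray(diameters_um, dtype=float)
          return np.mean(weights * (reference_diameter_um / diameters))

      # Illustrative tests picked from a 200-250 um sieve fraction
      print(size_normalised_weight([11.2, 13.5, 12.1], [210.0, 240.0, 225.0]))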

  17. The applicability of normalisation process theory to speech and language therapy: a review of qualitative research on a speech and language intervention.

    Science.gov (United States)

    James, Deborah M

    2011-08-12

    The Bercow review found a high level of public dissatisfaction with speech and language services for children. Children with speech, language, and communication needs (SLCN) often have chronic complex conditions that require provision from health, education, and community services. Speech and language therapists are a small group of Allied Health Professionals with a specialist skill-set that equips them to work with children with SLCN. They work within and across the diverse range of public service providers. The aim of this review was to explore the applicability of Normalisation Process Theory (NPT) to the case of speech and language therapy. A review of qualitative research on a successfully embedded speech and language therapy intervention was undertaken to test the applicability of NPT. The review focused on two of the collective action elements of NPT (relational integration and interaction workability) using all previously published qualitative data from both parents and practitioners' perspectives on the intervention. The synthesis of the data based on the Normalisation Process Model (NPM) uncovered strengths in the interpersonal processes between the practitioners and parents, and weaknesses in how the accountability of the intervention is distributed in the health system. The analysis based on the NPM uncovered interpersonal processes between the practitioners and parents that were likely to have given rise to successful implementation of the intervention. In previous qualitative research on this intervention where the Medical Research Council's guidance on developing a design for a complex intervention had been used as a framework, the interpersonal work within the intervention had emerged as a barrier to implementation of the intervention. It is suggested that the design of services for children and families needs to extend beyond the consideration of benefits and barriers to embrace the social processes that appear to afford success in embedding

  18. Using Normalisation Process Theory to investigate the implementation of school-based oral health promotion.

    Science.gov (United States)

    Olajide, O J; Shucksmith, J; Maguire, A; Zohoori, F V

    2017-09-01

    Despite the considerable improvement in oral health of children in the UK over the last forty years, a significant burden of dental caries remains prevalent in some groups of children, indicating the need for more effective oral health promotion intervention (OHPI) strategies in this population. To explore the implementation process of a community-based OHPI, in the North East of England, using Normalisation Process Theory (NPT) to provide insights on how effectiveness could be maximised. Utilising a generic qualitative research approach, 19 participants were recruited into the study. In-depth interviews were conducted with relevant National Health Service (NHS) staff and primary school teachers while focus group discussions were conducted with reception teachers and teaching assistants. Analyses were conducted using thematic analysis with emergent themes mapped onto NPT constructs. Participants highlighted the benefits of OHPI and the need for evidence in practice. However, implementation of 'best evidence' was hampered by lack of adequate synthesis of evidence from available clinical studies on effectiveness of OHPI as these generally have insufficient information on the dynamics of implementation and how effectiveness obtained in clinical studies could be achieved in 'real life'. This impacted on the decision-making process, levels of commitment, collaboration among OHP teams, resource allocation and evaluation of OHPI. A large gap exists between available research evidence and translation of evidence in OHPI in community settings. Effectiveness of OHPI requires not only an awareness of evidence of clinical effectiveness but also synthesised information about change mechanisms and implementation protocols. Copyright© 2017 Dennis Barber Ltd.

  19. The implementation of medical revalidation: an assessment using normalisation process theory

    Directory of Open Access Journals (Sweden)

    Abigail Tazzyman

    2017-11-01

    Full Text Available Abstract Background Medical revalidation is the process by which all licensed doctors are legally required to demonstrate that they are up to date and fit to practise in order to maintain their licence. Revalidation was introduced in the United Kingdom (UK) in 2012, constituting significant change in the regulation of doctors. The governing body, the General Medical Council (GMC), envisages that revalidation will improve patient care and safety. This potential, however, is in part dependent upon how successfully revalidation is embedded into routine practice. The aim of this study was to use Normalisation Process Theory (NPT) to explore issues contributing to or impeding the implementation of revalidation in practice. Methods We conducted seventy-one interviews with sixty UK policymakers and senior leaders at different points during the development and implementation of revalidation: in 2011 (n = 31), 2013 (n = 26) and 2015 (n = 14). We selected interviewees using purposeful sampling. NPT was used as a framework to enable systematic analysis across the interview sets. Results Initial lack of consensus over revalidation’s purpose, and scepticism about its value, decreased over time as participants recognised the benefits it brought to their practice (coherence category of NPT). Though acceptance increased across time, revalidation was not seen as a legitimate part of their role by all doctors. Key individuals, notably the Responsible Officer (RO), were vital for the successful implementation of revalidation in organisations (cognitive participation category). The ease with which revalidation could be integrated into working practices varied greatly depending on the type of role a doctor held and the organisation they worked for, and the provision of resources was a significant variable in this (collective action category). Formal evaluation of revalidation in organisations was lacking but informal evaluation was taking place. Revalidation had

  20. Long-term performance of grid-connected photovoltaic plant - Appendix 2: normalised monthly statistics; Langzeitverhalten von netzgekoppelten Photovoltaikanlagen 2 (LZPV2). Anhang 2: Normierte Monatsstatistiken

    Energy Technology Data Exchange (ETDEWEB)

    Renken, C.; Haeberlin, H.

    2003-07-01

    This is the third part of a four-part final report for the Swiss Federal Office of Energy (SFOE) made by the University of Applied Sciences in Burgdorf, Switzerland. This report presents the findings of a project begun in 1992 that monitored the performance of around 40 photovoltaic (PV) installations in Switzerland. This extensive second appendix to the report describes the eight installations that were monitored in detail, including - amongst others - the demonstration installations on Mont Soleil in the Jura mountains and on the Jungfraujoch in the Alps as well as three test installations using modern thin-film technologies in Burgdorf. The normalised monthly specific performance of these installations was monitored. The report presents the various performance figures in graphical form.
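
    "Normalised monthly specific performance" is conventionally reported as yields per kW of installed power; as a hedged illustration (assuming IEC 61724-style definitions, which the report itself may refine), the Python sketch below computes final yield, reference yield and performance ratio from monthly totals.

      def normalised_monthly_figures(e_ac_kwh, p_nominal_kw, h_poa_kwh_per_m2, g_stc_kw_per_m2=1.0):
          # Final yield Yf = E_AC / P0 [kWh/kWp], reference yield Yr = H_POA / G_STC [h],
          # performance ratio PR = Yf / Yr (assumed IEC 61724-style definitions).
          yf = e_ac_kwh / p_nominal_kw
          yr = h_poa_kwh_per_m2 / g_stc_kw_per_m2
          return yf, yr, yf / yr

      # e.g. a 60 kWp plant producing 6500 kWh in a month with 130 kWh/m2 of in-plane irradiation
      print(normalised_monthly_figures(6500.0, 60.0, 130.0))   # Yf ~108 h, Yr 130 h, PR ~0.83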

  1. Diabetic ketoacidosis in adult patients: an audit of factors influencing time to normalisation of metabolic parameters.

    Science.gov (United States)

    Lee, Melissa H; Calder, Genevieve L; Santamaria, John D; MacIsaac, Richard J

    2018-05-01

    Diabetic ketoacidosis (DKA) is an acute life-threatening metabolic complication of diabetes that imposes a substantial burden on our healthcare system. There is a paucity of published data in Australia assessing factors influencing time to resolution of DKA and length of stay (LOS). To identify factors that predict a slower time to resolution of DKA in adults with diabetes. Retrospective audit of patients admitted to St Vincent's Hospital Melbourne between 2010 and 2014 coded with a diagnosis of 'Diabetic Ketoacidosis'. The primary outcome was time to resolution of DKA based on normalisation of biochemical markers. Episodes of DKA within the wider Victorian hospital network were also explored. Seventy-one patients met biochemical criteria for DKA; median age 31 years (26-45 years), 59% were male and 23% had newly diagnosed diabetes. Insulin omission was the most common precipitant (42%). Median time to resolution of DKA was 11 h (6.5-16.5 h). Individual factors associated with slower resolution of DKA were lower admission pH (P < 0.001) and higher admission serum potassium level (P = 0.03). Median LOS was 3 days (2-5 days), compared to a Victorian state-wide LOS of 2 days. Higher comorbidity scores were associated with longer LOS (P < 0.001). Lower admission pH levels and higher admission serum potassium levels are independent predictors of slower time to resolution of DKA. This may help to stratify patients with DKA using markers of severity to determine who may benefit from closer monitoring and to predict LOS. © 2018 Royal Australasian College of Physicians.

  2. Normalisation of cerebrospinal fluid biomarkers parallels improvement of neurological symptoms following HAART in HIV dementia – case report

    Directory of Open Access Journals (Sweden)

    Blennow Kaj

    2006-09-01

    Full Text Available Abstract Background Since the introduction of HAART, the incidence of HIV dementia has declined, and HAART seems to improve neurocognitive function in patients with HIV dementia. Currently, HIV dementia develops mainly in patients without effective treatment, though it has also been described in patients on HAART, and milder HIV-associated neuropsychological impairment is still frequent among HIV-1 infected patients regardless of HAART. Elevated cerebrospinal fluid (CSF) levels of markers of neural injury and immune activation have been found in HIV dementia, but neither of those, nor CSF HIV-1 RNA levels, have been proven useful as diagnostic or prognostic pseudomarkers in HIV dementia. Case presentation We report a case of HIV dementia (MSK stage 3) in a 57-year-old antiretroviral-naïve man who was started on zidovudine, lamivudine and ritonavir-boosted indinavir, and followed with consecutive lumbar punctures before, and two and 15 months after, initiation of HAART. Improvement of neurocognitive function was paralleled by normalisation of CSF neural marker (NFL, Tau and GFAP) levels and a decline in CSF and serum neopterin and CSF and plasma HIV-1 RNA levels. Conclusion The value of these CSF markers as prognostic pseudomarkers of the effect of HAART on neurocognitive impairment in HIV dementia ought to be evaluated in longitudinal studies.

  3. An application of Extended Normalisation Process Theory in a randomised controlled trial of a complex social intervention: Process evaluation of the Strengthening Families Programme (10-14) in Wales, UK.

    Science.gov (United States)

    Segrott, Jeremy; Murphy, Simon; Rothwell, Heather; Scourfield, Jonathan; Foxcroft, David; Gillespie, David; Holliday, Jo; Hood, Kerenza; Hurlow, Claire; Morgan-Trimmer, Sarah; Phillips, Ceri; Reed, Hayley; Roberts, Zoe; Moore, Laurence

    2017-12-01

    Process evaluations generate important data on the extent to which interventions are delivered as intended. However, the tendency to focus only on assessment of pre-specified structural aspects of fidelity has been criticised for paying insufficient attention to implementation processes and how intervention-context interactions influence programme delivery. This paper reports findings from a process evaluation nested within a randomised controlled trial of the Strengthening Families Programme 10-14 (SFP 10-14) in Wales, UK. It uses Extended Normalisation Process Theory to theorise how interaction between SFP 10-14 and local delivery systems - particularly practitioner commitment/capability and organisational capacity - influenced delivery of intended programme activities: fidelity (adherence to SFP 10-14 content and implementation requirements); dose delivered; dose received (participant engagement); participant recruitment and reach (intervention attendance). A mixed methods design was utilised. Fidelity assessment sheets (completed by practitioners), structured observation by researchers, and routine data were used to assess: adherence to programme content; staffing numbers and consistency; recruitment/retention; and group size and composition. Interviews with practitioners explored implementation processes and context. Adherence to programme content was high - with some variation, linked to practitioner commitment to, and understanding of, the intervention's content and mechanisms. Variation in adherence rates was associated with the extent to which multi-agency delivery team planning meetings were held. Recruitment challenges meant that targets for group size/composition were not always met, but did not affect adherence levels or family engagement. Targets for staffing numbers and consistency were achieved, though capacity within multi-agency networks reduced over time. Extended Normalisation Process Theory provided a useful framework for assessing

  4. Trends of air pollution in Denmark - Normalised by a simple weather index model

    International Nuclear Information System (INIS)

    Kiilsholm, S.; Rasmussen, A.

    2000-01-01

    This report is a part of the Traffic Pool projects on 'Traffic and Environments', 1995-99, financed by the Danish Ministry of Transport. The Traffic Pool projects included five different projects on 'Surveillance of the Air Quality', 'Atmospheric Modelling', 'Atmospheric Chemistry Modelling', 'Smog and ozone' and 'Greenhouse effects and Climate' [Rasmussen, 2000]. This work is a part of the project on 'Surveillance of the Air Quality', with the main objective to make trend analyses of levels of air pollution from traffic in Denmark. Other participants were the Road Directorate, mainly focusing on measurement of traffic and trend analysis of the air quality using a Nordic model for air pollution in street canyons called BLB (Beregningsmodel for Luftkvalitet i Byluftgader) [Vejdirektoratet 2000]; the National Environmental Research Institute (NERI), mainly focusing on measurements of air pollution and trend analysis with the Operational Street Pollution Model (OSPM) [DMU 2000]; and the Copenhagen Environmental Protection Agency, mainly focusing on measurements. In this study a simpler statistical model has been developed for trend analysis of the air quality. The model filters out the influence of year-to-year variations in the meteorological conditions on the air pollution levels. The weather factors found most important are wind speed, wind direction and mixing height. Measurements of CO, NO and NO2 from three streets in Copenhagen have been used; these streets are Jagtvej, Bredgade and H. C. Andersen's Boulevard (HCAB). The years 1994-1996 were used for evaluation of the method, and annual air pollution indexes dependent only on meteorological parameters, called WEATHIX, were calculated for the years 1990-1997 and used for normalisation of the observed air pollution trends. Meteorological data were taken from either the background station at the H.C. Oersted building, situated close to one of the street stations, or the synoptic
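
    The WEATHIX formula itself is not reproduced in this record; the following sketch, which assumes a simple linear regression of daily concentrations on wind speed, wind direction and mixing height, only illustrates the general idea of weather-index normalisation (all data and variable names are synthetic):

      import numpy as np

      rng = np.random.default_rng(0)

      # Two years of synthetic daily data for one street station (illustrative only).
      n = 2 * 365
      year = np.repeat([1990, 1991], 365)
      wind_speed = rng.uniform(1.0, 10.0, n)            # m/s
      wind_dir = rng.uniform(0.0, 360.0, n)             # degrees
      mixing_height = rng.uniform(100.0, 1500.0, n)     # m
      no2 = 80.0 / wind_speed + 0.01 * (1500.0 - mixing_height) + rng.normal(0.0, 2.0, n)

      # Linear model of concentration as a function of the meteorological factors.
      X = np.column_stack([np.ones(n), 1.0 / wind_speed, mixing_height,
                           np.sin(np.radians(wind_dir)), np.cos(np.radians(wind_dir))])
      beta, *_ = np.linalg.lstsq(X, no2, rcond=None)
      modelled = X @ beta

      # Weather index per year: modelled mean for that year's meteorology relative to
      # the modelled long-term mean. Dividing the observed annual mean by the index
      # removes the year-to-year weather influence from the trend.
      for y in (1990, 1991):
          idx = modelled[year == y].mean() / modelled.mean()
          print(y, round(idx, 3), round(no2[year == y].mean() / idx, 2))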

  5. Global Earthquake and Volcanic Eruption Economic losses and costs from 1900-2014: 115 years of the CATDAT database - Trends, Normalisation and Visualisation

    Science.gov (United States)

    Daniell, James; Skapski, Jens-Udo; Vervaeck, Armand; Wenzel, Friedemann; Schaefer, Andreas

    2015-04-01

    Over the past 12 years, an in-depth database has been constructed for socio-economic losses from earthquakes and volcanoes. The effects of earthquakes and volcanic eruptions have been documented in many databases; however, many errors and incorrect details are often encountered. To combat this, the database was formed with socioeconomic checks of GDP, capital stock, population and other elements, as well as providing upper and lower bounds to each available event loss. The definition of economic losses within the CATDAT Damaging Earthquakes Database (Daniell et al., 2011a) as of v6.1 has now been redefined to provide three options of natural disaster loss pricing, including reconstruction cost, replacement cost and actual loss, in order to better define the impact of historical disasters. For volcanoes, as for earthquakes, a reassessment has been undertaken looking at the historical net and gross capital stock and GDP at the time of the event, including the depreciated stock, in order to calculate the actual loss. A normalisation has then been undertaken using updated population, GDP and capital stock. The difference between depreciated and gross capital can be removed from the historical loss estimates, which have all been calculated without taking depreciation of the building stock into account. The culmination of time series from 1900-2014 of net and gross capital stock, GDP and direct economic loss data, together with detailed studies of infrastructure age and existing damage surveys, has allowed the first estimate of this nature. The death tolls in earthquakes from 1900-2014 are presented in various forms, showing around 2.32 million deaths due to earthquakes (with a range of 2.18 to 2.63 million), around 59% of which were due to masonry buildings and 28% to secondary effects. For the death tolls from the volcanic eruption database, 98,000 deaths, with a range from around 83,000 to 107,000, are seen from 1900-2014. The application of VSL life costing from death and injury
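
    The exact normalisation scheme belongs to the CATDAT methodology; a minimal sketch of the general idea, assuming that a historical loss is simply scaled by the growth of the chosen exposure measure (population, GDP or capital stock), is:

      def normalise_loss(loss_at_event, exposure_at_event, exposure_today):
          """Scale a historical economic loss to present-day exposure levels.

          The exposure arguments can be population, GDP or (net/gross) capital
          stock for the affected region; the choice defines the normalisation.
          """
          return loss_at_event * (exposure_today / exposure_at_event)

      # Illustrative numbers only (not CATDAT values): a 0.5 bn USD loss in a
      # region whose capital stock has since grown 80-fold.
      print(normalise_loss(0.5e9, 1.0e10, 8.0e11))  # -> 4.0e10 USD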

  6. Algebraic Bethe ansatz for the quantum group invariant open XXZ chain at roots of unity

    Directory of Open Access Journals (Sweden)

    Azat M. Gainutdinov

    2016-08-01

    Full Text Available For generic values of q, all the eigenvectors of the transfer matrix of the Uq sl(2)-invariant open spin-1/2 XXZ chain with finite length N can be constructed using the algebraic Bethe ansatz (ABA) formalism of Sklyanin. However, when q is a root of unity (q = e^(iπ/p) with integer p ≥ 2), the Bethe equations acquire continuous solutions, and the transfer matrix develops Jordan cells. Hence, there appear eigenvectors of two new types: eigenvectors corresponding to continuous solutions (exact complete p-strings), and generalized eigenvectors. We propose general ABA constructions for these two new types of eigenvectors. We present many explicit examples, and we construct complete sets of (generalized) eigenvectors for various values of p and N.

  7. Bariatric surgery in morbidly obese insulin resistant humans normalises insulin signalling but not insulin-stimulated glucose disposal.

    Directory of Open Access Journals (Sweden)

    Mimi Z Chen

    Full Text Available Weight-loss after bariatric surgery improves insulin sensitivity, but the underlying molecular mechanism is not clear. To ascertain the effect of bariatric surgery on insulin signalling, we examined glucose disposal and Akt activation in morbidly obese volunteers before and after Roux-en-Y gastric bypass surgery (RYGB), and compared this to lean volunteers. The hyperinsulinaemic euglycaemic clamp, at five infusion rates, was used to determine glucose disposal rates (GDR) in eight morbidly obese (body mass index, BMI = 47.3 ± 2.2 kg/m²) patients, before and after RYGB, and in eight lean volunteers (BMI = 20.7 ± 0.7 kg/m²). Biopsies of brachioradialis muscle, taken at fasting and at insulin concentrations that induced half-maximal (GDR50) and maximal (GDR100) GDR in each subject, were used to examine the phosphorylation of Akt-Thr308, Akt-Ser473 and PRAS40, in vivo biomarkers for Akt activity. Pre-operatively, insulin-stimulated GDR was lower in the obese compared to the lean individuals (P<0.001). Weight-loss of 29.9 ± 4 kg after surgery significantly improved GDR50 (P=0.004) but not GDR100 (P=0.3). These subjects still remained significantly more insulin resistant than the lean individuals (P<0.001). Weight loss increased insulin-stimulated skeletal muscle Akt-Thr308 and Akt-Ser473 phosphorylation (P=0.02 and P=0.03 respectively, MANCOVA) and Akt activity towards the substrate PRAS40 (P=0.003, MANCOVA), and, in contrast to GDR, these were fully normalised after the surgery (obese vs lean, P=0.6, P=0.35, P=0.46, respectively). Our data show that although Akt activity substantially improved after surgery, it did not lead to a full restoration of insulin-stimulated glucose disposal. This suggests that a major defect downstream of, or parallel to, Akt signalling remains after significant weight-loss.

  8. Analysis of a normalised expressed sequence tag (EST) library from a key pollinator, the bumblebee Bombus terrestris.

    Science.gov (United States)

    Sadd, Ben M; Kube, Michael; Klages, Sven; Reinhardt, Richard; Schmid-Hempel, Paul

    2010-02-15

    The bumblebee, Bombus terrestris (Order Hymenoptera), is of widespread importance. This species is extensively used for commercial pollination in Europe, and along with other Bombus spp. is a key member of natural pollinator assemblages. Furthermore, the species is studied in a wide variety of biological fields. The objective of this project was to create a B. terrestris EST resource that will prove to be valuable in obtaining a deeper understanding of this significant social insect. A normalised cDNA library was constructed from the thorax and abdomen of B. terrestris workers in order to enhance the discovery of rare genes. A total of 29'428 ESTs were sequenced. Subsequent clustering resulted in 13'333 unique sequences. Of these, 58.8 percent had significant similarities to known proteins, with 54.5 percent having a "best-hit" to existing Hymenoptera sequences. Comparisons with the honeybee and other insects allowed the identification of potential candidates for gene loss, pseudogene evolution, and possible incomplete annotation in the honeybee genome. Further, given the focus of much basic research and the perceived threat of disease to natural and commercial populations, the immune system of bumblebees is a particularly relevant component. Although the library is derived from unchallenged bees, we still uncover transcription of a number of immune genes spanning the principally described insect immune pathways. Additionally, the EST library provides a resource for the discovery of genetic markers that can be used in population level studies. Indeed, initial screens identified 589 simple sequence repeats and 854 potential single nucleotide polymorphisms. The resource that these B. terrestris ESTs represent is valuable for ongoing work. The ESTs provide direct evidence of transcriptionally active regions, but they will also facilitate further functional genomics, gene discovery and future genome annotation. These are important aspects in obtaining a greater

  9. The dynamics of the oesophageal squamous epithelium 'normalisation' process in patients with gastro-oesophageal reflux disease treated with long-term acid suppression or anti-reflux surgery.

    Science.gov (United States)

    Mastracci, L; Fiocca, R; Engström, C; Attwood, S; Ell, C; Galmiche, J P; Hatlebakk, J G; Långström, G; Eklund, S; Lind, T; Lundell, L

    2017-05-01

    Proton pump inhibitors and laparoscopic anti-reflux surgery (LARS) offer long-term symptom control to patients with gastro-oesophageal reflux disease (GERD). To evaluate the process of 'normalisation' of the squamous epithelium morphology of the distal oesophagus on these therapies. In the LOTUS trial, 554 patients with chronic GERD were randomised to receive either esomeprazole (20-40 mg daily) or LARS. After 5 years, 372 patients remained in the study (esomeprazole, 192; LARS, 180). Biopsies were taken at the Z-line and 2 cm above, at baseline, 1, 3 and 5 years. A severity score was calculated based on: papillae elongation, basal cell hyperplasia, intercellular space dilatations and eosinophilic infiltration. The epithelial proliferative activity was assessed by Ki-67 immunohistochemistry. A gradual improvement in all variables over 5 years was noted in both groups, at both the Z-line and 2 cm above. The severity score decreased from baseline at each subsequent time point in both groups (P refluxate seems to play the predominant role in restoring tissue morphology. © 2017 John Wiley & Sons Ltd.

  10. Regulatory framework for products and processes: regulation and standardisation of international trade

    Directory of Open Access Journals (Sweden)

    Morin Odile

    2003-07-01

    Full Text Available Products and processes are governed both by regulations and, at another level, by international trade standards. This presentation deals with regulatory texts at the Community and national levels. It should be recalled that the entry into force of a European regulation is followed by its transposition into the law of each member state, and that national regulation applies in the absence of Community provisions. With regard to international trade, the standardisation activities of the Conseil Oléicole International (COI) for olive oils and olive-pomace oils, and those of the Codex Alimentarius for edible oils and fats, are discussed. Taken together, these regulatory provisions form a framework covering production from upstream to downstream, both vertically (oilseeds, oils and fats, olive oils, margarines, refining processes) and transversally (volatile organic compounds, GMOs, extraction solvents, additives, contaminants, etc.). The case of olive oil is particular in that it is covered at the international (COI and Codex Alimentarius trade standards), European and national (regulation) levels. The Codex Alimentarius, for its part, establishes standards of a vertical nature (vegetable oils, animal fats, olive oils, spreadable fats, etc.) and of a horizontal nature (additives, pesticide residues, etc.). The essentials of this framework are summarised in the tables that accompany this contribution.

  11. Night-time restricted feeding normalises clock genes and Pai-1 gene expression in the db/db mouse liver.

    Science.gov (United States)

    Kudo, T; Akiyama, M; Kuriyama, K; Sudo, M; Moriya, T; Shibata, S

    2004-08-01

    An increase in PAI-1 activity is thought to be a key factor underlying myocardial infarction. Mouse Pai-1 (mPai-1) activity shows a daily rhythm in vivo, and its transcription seems to be controlled not only by clock genes but also by humoral factors such as insulin and triglycerides. Thus, we investigated daily clock gene and mPai-1 mRNA expression in the liver of db/db mice exhibiting high levels of glucose, insulin and triglycerides. Locomotor activity was measured using an infrared detection system. RT-PCR or in situ hybridisation methods were applied to measure gene expression. Humoral factors were measured using measurement kits. The db/db mice showed attenuated locomotor activity rhythms. The rhythmic expression of mPer2 mRNA was severely diminished and the phase of mBmal1 oscillation was advanced in the db/db mouse liver, whereas mPai-1 mRNA was highly and constitutively expressed. Night-time restricted feeding led to a recovery not only of the diminished locomotor activity, but also of the diminished mPer2 and advanced mBmal1 mRNA rhythms. Expression of mPai-1 mRNA in db/db mice was reduced to levels far below normal. Pioglitazone treatment slightly normalised glucose and insulin levels, with a slight reduction in mPai-1 gene expression. We demonstrated that Type 2 diabetes impairs the oscillation of the peripheral oscillator. Night-time restricted feeding, rather than pioglitazone injection, led to a recovery of the diminished locomotor activity and of the altered oscillation of the peripheral clock and the mPai-1 mRNA rhythm. Thus, we conclude that scheduled restricted food intake may be a useful form of treatment for diabetes.

  12. An object recognition method based on fuzzy theory and BP networks

    Science.gov (United States)

    Wu, Chuan; Zhu, Ming; Yang, Dong

    2006-01-01

    It is difficult to choose eigenvectors when a neural network recognizes objects. If eigenvectors are not chosen appropriately, the eigenvectors of different objects may be similar, or the eigenvectors of the same object may differ under scaling, shifting and rotation. In order to solve this problem, the image is edge-detected, the membership function is reconstructed, and a new threshold segmentation method based on fuzzy theory is proposed to obtain the binary image. The moment invariants of the binary image are extracted and normalized. Sometimes a moment invariant is too small to be used effectively in computation, so the logarithm of the moment invariants is taken as the input eigenvector of the BP network. The experimental results demonstrate that the proposed approach can recognize objects effectively, correctly and quickly.
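
    As a rough illustration of the feature-extraction step described above (not the authors' exact pipeline), the log-scaled Hu moment invariants of a binary image can be computed with OpenCV and NumPy:

      import cv2
      import numpy as np

      def log_hu_moments(binary_image: np.ndarray) -> np.ndarray:
          """Return sign-preserving, log-scaled Hu moment invariants of a binary image."""
          moments = cv2.moments(binary_image.astype(np.uint8))
          hu = cv2.HuMoments(moments).flatten()
          # Log-scale because the raw invariants span many orders of magnitude;
          # the sign is kept so the mapping stays one-to-one.
          return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

      # Toy example: a filled rectangle as the segmented binary image.
      img = np.zeros((64, 64), dtype=np.uint8)
      img[16:48, 8:56] = 1
      print(log_hu_moments(img))  # 7 values usable as input features for a BP network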

  13. Learning Eigenvectors for Free

    NARCIS (Netherlands)

    W.M. Koolen-Wijkstra (Wouter); W.T. Kotlowski (Wojciech); M.K. Warmuth

    2011-01-01

    We extend the classical problem of predicting a sequence of outcomes from a finite alphabet to the matrix domain. In this extension, the alphabet of n outcomes is replaced by the set of all dyads, i.e. outer products uu^T where u is a vector in R^n of unit length. Whereas in the

  14. Facilitating professional liaison in collaborative care for depression in UK primary care; a qualitative study utilising normalisation process theory.

    Science.gov (United States)

    Coupe, Nia; Anderson, Emma; Gask, Linda; Sykes, Paul; Richards, David A; Chew-Graham, Carolyn

    2014-05-01

    Collaborative care (CC) is an organisational framework which facilitates the delivery of a mental health intervention to patients by case managers in collaboration with more senior health professionals (supervisors and GPs), and is effective for the management of depression in primary care. However, there remains limited evidence on how to successfully implement this collaborative approach in UK primary care. This study aimed to explore to what extent CC impacts on professional working relationships, and whether CC for depression could be implemented as routine in the primary care setting. This qualitative study explored the perspectives of the 6 case managers (CMs), 5 supervisors (trial research team members) and 15 general practitioners (GPs) from practices participating in a randomised controlled trial of CC for depression. Interviews were transcribed verbatim and data were analysed using a two-step approach: an initial thematic analysis, followed by a secondary analysis using the Normalisation Process Theory concepts of coherence, cognitive participation, collective action and reflexive monitoring with respect to the implementation of CC in primary care. Supervisors and CMs demonstrated coherence in their understanding of CC, and consequently reported good levels of cognitive participation and collective action regarding delivering and supervising the intervention. GPs interviewed showed limited understanding of the CC framework, and reported limited collaboration with CMs; barriers to collaboration were identified. All participants identified the potential or experienced benefits of a collaborative approach to depression management and were able to discuss ways in which collaboration can be facilitated. Primary care professionals in this study valued the potential for collaboration, but GPs' understanding of CC and organisational barriers hindered opportunities for communication. Further work is needed to address these organisational barriers in order to facilitate

  15. Assessing the facilitators and barriers of interdisciplinary team working in primary care using normalisation process theory: An integrative review.

    Science.gov (United States)

    O'Reilly, Pauline; Lee, Siew Hwa; O'Sullivan, Madeleine; Cullen, Walter; Kennedy, Catriona; MacFarlane, Anne

    2017-01-01

    Interdisciplinary team working is of paramount importance in the reform of primary care in order to provide cost-effective and comprehensive care. However, international research shows that it is not routine practice in many healthcare jurisdictions. It is imperative to understand levers and barriers to the implementation process. This review examines interdisciplinary team working in practice, in primary care, from the perspective of service providers and analyses (1) barriers and facilitators to the implementation of interdisciplinary teams in primary care and (2) the main research gaps. An integrative review following the PRISMA guidelines was conducted. Following a search of 10 international databases, 8,827 titles were screened for relevance and 49 met the criteria. Quality of evidence was appraised using predetermined criteria. Data were analysed following the principles of framework analysis using Normalisation Process Theory (NPT), which has four constructs: sense making, enrolment, enactment, and appraisal. The literature is dominated by a focus on interdisciplinary working between physicians and nurses. There is a dearth of evidence about all NPT constructs apart from enactment. Physicians play a key role in encouraging the enrolment of others in primary care team working and in enabling effective divisions of labour in the team. The experience of interdisciplinary working emerged as a lever for its implementation, particularly where communication and respect were strong between professionals. A key lever for interdisciplinary team working in primary care is to get professionals working together and to learn from each other in practice. However, the evidence base is limited as it does not reflect the experiences of all primary care professionals and it is primarily about the enactment of team working. We need to know much more about the experiences of the full network of primary care professionals regarding all aspects of implementation work. International

  16. Resonances, scattering theory and rigged Hilbert spaces

    International Nuclear Information System (INIS)

    Parravicini, G.; Gorini, V.; Sudarshan, E.C.G.

    1979-01-01

    The problem of decaying states and resonances is examined within the framework of scattering theory in a rigged Hilbert space formalism. The stationary free, in, and out eigenvectors of formal scattering theory, which have a rigorous setting in rigged Hilbert space, are considered to be analytic functions of the energy eigenvalue. The value of these analytic functions at any point of regularity, real or complex, is an eigenvector with eigenvalue equal to the position of the point. The poles of the eigenvector families give rise to other eigenvectors of the Hamiltonian; the singularities of the out eigenvector family are the same as those of the continued S matrix, so that resonances are seen as eigenvectors of the Hamiltonian with eigenvalue equal to their location in the complex energy plane. Cauchy's theorem then provides for expansions in terms of complete sets of eigenvectors with complex eigenvalues of the Hamiltonian. Applying such expansions to the survival amplitude of a decaying state, one finds that resonances give discrete contributions with purely exponential time behavior; the background is of course present, but explicitly separated. The resolvent of the Hamiltonian, restricted to the nuclear space appearing in the rigged Hilbert space, can be continued across the absolutely continuous spectrum; the singularities of the continuation are the same as those of the out eigenvectors. The free, in and out eigenvectors with complex eigenvalues and those corresponding to resonances can be approximated by physical vectors in the Hilbert space, as plane waves can. The need for having some further physical information in addition to the specification of the total Hamiltonian is apparent in the proposed framework. The formalism is applied to the Lee-Friedrichs model. 48 references

  17. Repeated lysergic acid diethylamide in an animal model of depression: Normalisation of learning behaviour and hippocampal serotonin 5-HT2 signalling.

    Science.gov (United States)

    Buchborn, Tobias; Schröder, Helmut; Höllt, Volker; Grecksch, Gisela

    2014-06-01

    A re-balance of postsynaptic serotonin (5-HT) receptor signalling, with an increase in 5-HT1A and a decrease in 5-HT2A signalling, is a final common pathway multiple antidepressants share. Given that the 5-HT1A/2A agonist lysergic acid diethylamide (LSD), when repeatedly applied, selectively downregulates 5-HT2A, but not 5-HT1A receptors, one might expect LSD to similarly re-balance the postsynaptic 5-HT signalling. Challenging this idea, we use an animal model of depression specifically responding to repeated antidepressant treatment (olfactory bulbectomy), and test the antidepressant-like properties of repeated LSD treatment (0.13 mg/kg/d, 11 d). In line with former findings, we observe that bulbectomised rats show marked deficits in active avoidance learning. These deficits, similarly as we earlier noted with imipramine, are largely reversed by repeated LSD administration. Additionally, bulbectomised rats exhibit distinct anomalies of monoamine receptor signalling in hippocampus and/or frontal cortex; from these, only the hippocampal decrease in 5-HT2 related [(35)S]-GTP-gamma-S binding is normalised by LSD. Importantly, the sham-operated rats do not profit from LSD, and exhibit reduced hippocampal 5-HT2 signalling. As behavioural deficits after bulbectomy respond to agents classified as antidepressants only, we conclude that the effect of LSD in this model can be considered antidepressant-like, and discuss it in terms of a re-balance of hippocampal 5-HT2/5-HT1A signalling. © The Author(s) 2014.

  18. Effect of food matrix and thermal processing on the performance of a normalised quantitative real-time PCR approach for lupine (Lupinus albus) detection as a potential allergenic food.

    Science.gov (United States)

    Villa, Caterina; Costa, Joana; Gondar, Cristina; Oliveira, M Beatriz P P; Mafra, Isabel

    2018-10-01

    Lupine is widely used as an ingredient in diverse food products, but it is also a source of allergens. This work aimed at proposing a method to detect/quantify lupine as an allergen in processed foods, based on a normalised real-time PCR assay targeting the Lup a 4 allergen-encoding gene of Lupinus albus. Sensitivities down to 0.0005%, 0.01% and 0.05% (w/w) of lupine in rice flour, wheat flour and bread, respectively, and 1 pg of L. albus DNA were obtained, with adequate real-time PCR performance parameters using the ΔCt method. Both the food matrix and processing negatively affected the quantitative performance of the assay. The method was successfully validated with blind samples and applied to processed foods. Lupine content was estimated at between 4.12% and 22.9% in foods, with some results suggesting the common practice of precautionary labelling. In this work, useful and effective tools were proposed for the detection/quantification of lupine in food products. Copyright © 2018 Elsevier Ltd. All rights reserved.
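
    The calibration details are in the paper itself; a generic ΔCt-based quantification sketch (hypothetical calibration values, illustrative only) could look like this:

      import numpy as np

      # Hypothetical calibration data: % lupine (w/w) in flour versus measured
      # delta-Ct (Ct of the Lup a 4 target minus Ct of a reference gene).
      percent_lupine = np.array([10, 1, 0.1, 0.01, 0.001, 0.0005])
      delta_ct = np.array([2.1, 5.4, 8.8, 12.1, 15.5, 16.6])

      # Linear calibration in log10 space: delta_ct = a * log10(percent) + b.
      a, b = np.polyfit(np.log10(percent_lupine), delta_ct, 1)

      def estimate_percent_lupine(sample_delta_ct: float) -> float:
          """Invert the calibration curve for an unknown sample."""
          return 10 ** ((sample_delta_ct - b) / a)

      print(round(a, 2))                   # slope per decade of target amount
      print(estimate_percent_lupine(7.0))  # estimated % lupine for delta-Ct = 7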

  19. Implementing online consultations in primary care: a mixed-method evaluation extending normalisation process theory through service co-production.

    Science.gov (United States)

    Farr, Michelle; Banks, Jonathan; Edwards, Hannah B; Northstone, Kate; Bernard, Elly; Salisbury, Chris; Horwood, Jeremy

    2018-03-19

    To examine patient and staff views, experiences and acceptability of a UK primary care online consultation system and to ask how the system and its implementation may be improved. Mixed-method evaluation of a primary care e-consultation system. Primary care practices in South West England. Qualitative interviews with 23 practice staff in six practices. Patient survey data for 756 e-consultations from 36 practices, with free-text survey comments from 512 patients, were analysed thematically. Anonymised patient records were abstracted for 485 e-consultations from eight practices, including consultation types and outcomes. Descriptive statistics were used to analyse quantitative data. Analysis of the implementation and usage of the e-consultation system was informed by: (1) normalisation process theory, (2) a framework that illustrates how e-consultations were co-produced and (3) patients' and staff's touchpoints. We found different expectations between patients and staff on how to use e-consultations 'appropriately'. While some patients used the system to try and save time for themselves and their general practitioners (GPs), some used e-consultations when they could not get a timely face-to-face appointment. Most e-consultations resulted in either follow-on phone (32%) or face-to-face appointments (38%), and GPs felt that this duplicated their workload. Patient satisfaction with the system was high, but a minority were dissatisfied with practice communication about their e-consultation. Where both patients and staff interact with technology, it is in effect 'co-implemented'. How patients used e-consultations impacted on practice staff's experiences and appraisal of the system. Overall, the e-consultation system studied could improve access for some patients, but in its current form it was not perceived by practices as creating sufficient efficiencies to warrant financial investment. We illustrate how this e-consultation system and its implementation can be improved

  20. Understanding clinician attitudes towards implementation of guided self-help cognitive behaviour therapy for those who hear distressing voices: using factor analysis to test normalisation process theory.

    Science.gov (United States)

    Hazell, Cassie M; Strauss, Clara; Hayward, Mark; Cavanagh, Kate

    2017-07-24

    The Normalisation Process Theory (NPT) has been used to understand the implementation of physical health care interventions. The current study aims to apply the NPT model to a secondary mental health context, and test the model using exploratory factor analysis. This study will consider the implementation of a brief cognitive behaviour therapy for psychosis (CBTp) intervention. Mental health clinicians were asked to complete a NPT-based questionnaire on the implementation of a brief CBTp intervention. All clinicians had experience of either working with the target client group or were able to deliver psychological therapies. In total, 201 clinicians completed the questionnaire. The results of the exploratory factor analysis found partial support for the NPT model, as three of the NPT factors were extracted: (1) coherence, (2) cognitive participation, and (3) reflexive monitoring. We did not find support for the fourth NPT factor (collective action). All scales showed strong internal consistency. Secondary analysis of these factors showed clinicians to generally support the implementation of the brief CBTp intervention. This study provides strong evidence for the validity of the three NPT factors extracted. Further research is needed to determine whether participants' level of seniority moderates factor extraction, whether this factor structure can be generalised to other healthcare settings, and whether pre-implementation attitudes predict actual implementation outcomes.

  1. Assessing the facilitators and barriers of interdisciplinary team working in primary care using normalisation process theory: An integrative review

    Science.gov (United States)

    O’Reilly, Pauline; Lee, Siew Hwa; O’Sullivan, Madeleine; Cullen, Walter; Kennedy, Catriona; MacFarlane, Anne

    2017-01-01

    Background Interdisciplinary team working is of paramount importance in the reform of primary care in order to provide cost-effective and comprehensive care. However, international research shows that it is not routine practice in many healthcare jurisdictions. It is imperative to understand levers and barriers to the implementation process. This review examines interdisciplinary team working in practice, in primary care, from the perspective of service providers and analyses (1) barriers and facilitators to the implementation of interdisciplinary teams in primary care and (2) the main research gaps. Methods and findings An integrative review following the PRISMA guidelines was conducted. Following a search of 10 international databases, 8,827 titles were screened for relevance and 49 met the criteria. Quality of evidence was appraised using predetermined criteria. Data were analysed following the principles of framework analysis using Normalisation Process Theory (NPT), which has four constructs: sense making, enrolment, enactment, and appraisal. The literature is dominated by a focus on interdisciplinary working between physicians and nurses. There is a dearth of evidence about all NPT constructs apart from enactment. Physicians play a key role in encouraging the enrolment of others in primary care team working and in enabling effective divisions of labour in the team. The experience of interdisciplinary working emerged as a lever for its implementation, particularly where communication and respect were strong between professionals. Conclusion A key lever for interdisciplinary team working in primary care is to get professionals working together and to learn from each other in practice. However, the evidence base is limited as it does not reflect the experiences of all primary care professionals and it is primarily about the enactment of team working. We need to know much more about the experiences of the full network of primary care professionals regarding all aspects

  2. Symmorphosis through dietary regulation: a combinatorial role for proteolysis, autophagy and protein synthesis in normalising muscle metabolism and function of hypertrophic mice after acute starvation.

    Directory of Open Access Journals (Sweden)

    Henry Collins-Hooper

    Full Text Available Animals are imbued with adaptive mechanisms, spanning from the tissue/organ to the cellular scale, which ensure that the processes of homeostasis are preserved in the landscape of size change. However, we and others have postulated that the degree of adaptation is limited and that, once outside the normal range of size fluctuations, cells and tissues function in an aberrant manner. In this study we examine the function of muscle in the myostatin null mouse, which is an excellent model for hypertrophy beyond levels of normal growth, and the consequences of acute starvation to restore mass. We show that muscle growth is sustained through protein synthesis driven by Serum/Glucocorticoid Kinase 1 (SGK1) rather than Akt1. Furthermore, our metabonomic profiling of hypertrophic muscle shows that carbon from nutrient sources is being channelled into the production of biomass rather than ATP production. However, the muscle displays elevated levels of autophagy and decreased levels of muscle tension. We demonstrate that the myostatin null muscle is acutely sensitive to changes in diet and activates both the proteolytic and autophagy programmes, shutting down protein synthesis more extensively than is the case for wild-types. Poignantly, we show that acute starvation, which is detrimental to wild-type animals, is beneficial in terms of metabolism and muscle function in the myostatin null mice by normalising tension production.

  3. Acceleration techniques for the discrete ordinate method

    International Nuclear Information System (INIS)

    Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego; Trautmann, Thomas

    2013-01-01

    In this paper we analyze several acceleration techniques for the discrete ordinate method with matrix exponential and the small-angle modification of the radiative transfer equation. These techniques include the left eigenvector matrix approach for computing the inverse of the right eigenvector matrix, the telescoping technique, and the method of false discrete ordinate. The numerical simulations have shown that, on average, the relative speedups of the left eigenvector matrix approach and the telescoping technique are about 15% and 30%, respectively. -- Highlights: ► We presented the left eigenvector matrix approach. ► We analyzed the method of false discrete ordinate. ► The telescoping technique is applied to the matrix operator method. ► The considered techniques accelerate the computations by 20% on average.
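
    A minimal numerical sketch of the left-eigenvector idea, independent of the radiative transfer context: for a diagonalizable matrix, suitably scaled left eigenvectors give the inverse of the right eigenvector matrix without calling a matrix-inversion routine.

      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.normal(size=(5, 5))

      # Right eigenvectors of A (columns of R); left eigenvectors of A are the
      # eigenvectors of A^T (stored here as rows of L).
      wr, R = np.linalg.eig(A)
      wl, Lcols = np.linalg.eig(A.T)

      # Align the two decompositions by sorting the eigenvalues the same way.
      R = R[:, np.argsort(wr)]
      L = Lcols[:, np.argsort(wl)].T

      # Scale each left eigenvector so that l_i . r_i = 1; then L equals R^{-1}.
      L = L / np.einsum('ij,ji->i', L, R)[:, None]

      print(np.allclose(L @ R, np.eye(5)))  # True: inverse obtained without np.linalg.inv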

  4. Spectral Bisection with Two Eigenvectors

    Czech Academy of Sciences Publication Activity Database

    Rocha, Israel

    2017-01-01

    Roč. 61, August (2017), s. 1019-1025 ISSN 1571-0653 Institutional support: RVO:67985807 Keywords : graph partitioning * Laplacian matrix * Fiedler vector Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics

  5. SU-C-BRA-07: Variability of Patient-Specific Motion Models Derived Using Different Deformable Image Registration Algorithms for Lung Cancer Stereotactic Body Radiotherapy (SBRT) Patients

    Energy Technology Data Exchange (ETDEWEB)

    Dhou, S; Williams, C [Brigham and Women’s Hospital / Harvard Medical School, Boston, MA (United States); Ionascu, D [William Beaumont Hospital, Royal Oak, MI (United States); Lewis, J [University of California at Los Angeles, Los Angeles, CA (United States)

    2016-06-15

    Purpose: To study the variability of patient-specific motion models derived from 4-dimensional CT (4DCT) images using different deformable image registration (DIR) algorithms for lung cancer stereotactic body radiotherapy (SBRT) patients. Methods: Motion models are derived by 1) applying DIR between each 4DCT image and a reference image, resulting in a set of displacement vector fields (DVFs), and 2) performing principal component analysis (PCA) on the DVFs, resulting in a motion model (a set of eigenvectors capturing the variations in the DVFs). Three DIR algorithms were used: 1) Demons, 2) Horn-Schunck, and 3) iterative optical flow. The motion models derived were compared using patient 4DCT scans. Results: Motion models were derived and the variations were evaluated according to three criteria: 1) the average root mean square (RMS) difference, which measures the absolute difference between the components of the eigenvectors, 2) the dot product between the eigenvectors, which measures the angular difference between the eigenvectors in space, and 3) the Euclidean Model Norm (EMN), which is calculated by summing the dot products of an eigenvector with the first three eigenvectors from the reference motion model in quadrature. EMN measures how well an eigenvector can be reconstructed using another motion model derived using a different DIR algorithm. Results showed that, compared to a reference motion model (derived using the Demons algorithm), the eigenvectors of the motion model derived using the iterative optical flow algorithm have smaller RMS, larger dot product, and larger EMN values than those of the motion model derived using the Horn-Schunck algorithm. Conclusion: The study showed that motion models vary depending on which DIR algorithm was used to derive them. The choice of a DIR algorithm may affect the accuracy of the resulting model, and it is important to assess the suitability of the algorithm chosen for a particular application. This project was supported
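
    A hedged sketch of the three comparison criteria for two sets of PCA eigenvectors (function and variable names are illustrative; this is not the authors' code):

      import numpy as np

      def compare_motion_models(E_ref: np.ndarray, E_test: np.ndarray):
          """Compare two motion models given as matrices of unit-length column eigenvectors.

          E_ref, E_test: shape (n_voxels * 3, n_modes). Returns the per-mode RMS
          difference, absolute dot product, and EMN computed against the first
          three reference eigenvectors.
          """
          rms = np.sqrt(np.mean((E_test - E_ref) ** 2, axis=0))
          dot = np.abs(np.sum(E_test * E_ref, axis=0))
          # EMN: dot products with the first three reference eigenvectors, summed in quadrature.
          proj = E_ref[:, :3].T @ E_test
          emn = np.sqrt(np.sum(proj ** 2, axis=0))
          return rms, dot, emn

      # Toy example with random orthonormal "eigenvectors" and a slightly perturbed copy.
      rng = np.random.default_rng(0)
      Q, _ = np.linalg.qr(rng.normal(size=(300, 3)))
      Q2, _ = np.linalg.qr(Q + 0.05 * rng.normal(size=(300, 3)))
      print(compare_motion_models(Q, Q2))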

  6. SU-C-BRA-07: Variability of Patient-Specific Motion Models Derived Using Different Deformable Image Registration Algorithms for Lung Cancer Stereotactic Body Radiotherapy (SBRT) Patients

    International Nuclear Information System (INIS)

    Dhou, S; Williams, C; Ionascu, D; Lewis, J

    2016-01-01

    Purpose: To study the variability of patient-specific motion models derived from 4-dimensional CT (4DCT) images using different deformable image registration (DIR) algorithms for lung cancer stereotactic body radiotherapy (SBRT) patients. Methods: Motion models are derived by 1) applying DIR between each 4DCT image and a reference image, resulting in a set of displacement vector fields (DVFs), and 2) performing principal component analysis (PCA) on the DVFs, resulting in a motion model (a set of eigenvectors capturing the variations in the DVFs). Three DIR algorithms were used: 1) Demons, 2) Horn-Schunck, and 3) iterative optical flow. The motion models derived were compared using patient 4DCT scans. Results: Motion models were derived and the variations were evaluated according to three criteria: 1) the average root mean square (RMS) difference, which measures the absolute difference between the components of the eigenvectors, 2) the dot product between the eigenvectors, which measures the angular difference between the eigenvectors in space, and 3) the Euclidean Model Norm (EMN), which is calculated by summing the dot products of an eigenvector with the first three eigenvectors from the reference motion model in quadrature. EMN measures how well an eigenvector can be reconstructed using another motion model derived using a different DIR algorithm. Results showed that, compared to a reference motion model (derived using the Demons algorithm), the eigenvectors of the motion model derived using the iterative optical flow algorithm have smaller RMS, larger dot product, and larger EMN values than those of the motion model derived using the Horn-Schunck algorithm. Conclusion: The study showed that motion models vary depending on which DIR algorithm was used to derive them. The choice of a DIR algorithm may affect the accuracy of the resulting model, and it is important to assess the suitability of the algorithm chosen for a particular application. This project was supported

  7. Morphology of the pancreas in type 2 diabetes: effect of weight loss with or without normalisation of insulin secretory capacity.

    Science.gov (United States)

    Al-Mrabeh, Ahmad; Hollingsworth, Kieren G; Steven, Sarah; Taylor, Roy

    2016-08-01

    This study was designed to establish whether the low volume and irregular border of the pancreas in type 2 diabetes would be normalised after reversal of diabetes. A total of 29 individuals with type 2 diabetes undertook a very low energy (very low calorie) diet for 8 weeks followed by weight maintenance for 6 months. Methods were established to quantify the pancreas volume and degree of irregularity of the pancreas border. Three-dimensional volume-rendering and fractal dimension (FD) analysis of the MRI-acquired images were employed, as was three-point Dixon imaging to quantify the fat content. There was no change in pancreas volume 6 months after reversal of diabetes compared with baseline (52.0 ± 4.9 cm³ and 51.4 ± 4.5 cm³, respectively; p = 0.69), nor was any volumetric change observed in the non-responders. There was an inverse relationship between the volume and fat content of the pancreas in the total study population (r = -0.50, p = 0.006). Reversal of diabetes was associated with an increase in irregularity of the pancreas borders between baseline and 8 weeks (FD 1.143 ± 0.013 and 1.169 ± 0.006, respectively; p = 0.05), followed by a decrease at 6 months (1.130 ± 0.012, p = 0.006). On the other hand, no changes in FD were seen in the non-reversed group. Restoration of normal insulin secretion did not increase the subnormal pancreas volume over 6 months in the study population. A significant change in irregularity of the pancreas borders occurred after acute weight loss only after reversal of diabetes. Pancreas morphology in type 2 diabetes may be prognostically important, and its relationship to change in beta cell function requires further study.

  8. Computational analysis of chain flexibility and fluctuations in Rhizomucor miehei lipase

    DEFF Research Database (Denmark)

    Peters, Günther H.J.; Bywater, R. P.

    1999-01-01

    We have performed a molecular dynamics simulation of Rhizomucor miehei lipase (Rml) with explicit water molecules present. The simulation was carried out in periodic boundary conditions and conducted for 1.2 ns in order to determine the concerted protein dynamics and to examine how well the essential motions are preserved along the trajectory. Protein motions are extracted by means of the essential dynamics analysis method for different lengths of the trajectory. Motions described by eigenvector 1 converge after approximately 200 ps and only small changes are observed with increasing simulation time. Protein dynamics along eigenvectors with larger indices, however, change with simulation time and generally, with increasing eigenvector index, longer simulation times are required for observing similar protein motions (along a particular eigenvector). Several regions in the protein show relatively large
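
    Essential dynamics analysis is, at its core, a PCA of the atomic-coordinate covariance matrix; a minimal sketch (synthetic trajectory array, not the Rml data) is:

      import numpy as np

      def essential_dynamics(traj: np.ndarray, n_modes: int = 5):
          """PCA of an MD trajectory.

          traj: array of shape (n_frames, n_atoms * 3); frames are assumed to be
          already superimposed on a reference structure (rotation/translation removed).
          Returns the leading eigenvalues and eigenvectors of the covariance matrix.
          """
          cov = np.cov(traj - traj.mean(axis=0), rowvar=False)
          evals, evecs = np.linalg.eigh(cov)               # ascending order
          order = np.argsort(evals)[::-1][:n_modes]
          return evals[order], evecs[:, order]

      # Toy trajectory: 1000 frames of 50 "atoms" fluctuating about a mean structure.
      rng = np.random.default_rng(2)
      traj = rng.normal(size=(1000, 150)) @ rng.normal(size=(150, 150)) * 0.01
      evals, evecs = essential_dynamics(traj)
      print(evals)  # eigenvector 1 carries the largest concerted motion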

  9. Assessment of the efficacy of a novel tailored vitamin K dosing regimen in lowering the International Normalised Ratio in over-anticoagulated patients: a randomised clinical trial.

    Science.gov (United States)

    Kampouraki, Emmanouela; Avery, Peter J; Wynne, Hilary; Biss, Tina; Hanley, John; Talks, Kate; Kamali, Farhad

    2017-09-01

    Current guidelines advocate using fixed doses of oral vitamin K to reverse excessive anticoagulation in warfarinised patients who are either asymptomatic or have minor bleeds. Over-anticoagulated patients present with a wide range of International Normalised Ratio (INR) values, and the response to fixed doses of vitamin K varies. Consequently, a significant proportion of patients remain outside their target INR after vitamin K administration, making them prone to either haemorrhage or thromboembolism. We compared the performance of a novel tailored vitamin K dosing regimen to that of a fixed-dose regimen, with the primary measure being the proportion of over-anticoagulated patients returning to their target INR within 24 h. One hundred and eighty-one patients with an index INR > 6·0 (asymptomatic or with minor bleeding) were randomly allocated to receive oral administration of either a tailored dose (based upon index INR and body surface area) or a fixed dose (1 or 2 mg) of vitamin K. A greater proportion of patients treated with the tailored dose returned to within the target INR range compared to the fixed-dose regimen (68·9% vs. 52·8%; P = 0·026), whilst a smaller proportion of patients remained above the target INR range (12·2% vs. 34·0%). Tailored vitamin K dosing is more accurate than a fixed-dose regimen in lowering the INR to within the target range in excessively anticoagulated patients. © 2017 John Wiley & Sons Ltd.

  10. Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery

    Science.gov (United States)

    Ochilov, S.; Alam, M. S.; Bal, A.

    2006-05-01

    The Fukunaga-Koontz Transform (FKT) based technique offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space where the feature classes have complementary eigenvectors. The dimensionality reduction technique based on this complementary eigenvector analysis can be described in terms of two classes, desired class and background clutter, such that each basis function best represents one class while carrying the least amount of information from the other class. By selecting the few eigenvectors which are most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces data size, it provides significant advantages for near real time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden of the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
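
    A compact sketch of the Fukunaga-Koontz construction for two classes (synthetic data, illustrative only): after whitening the summed covariance, the two class covariances share eigenvectors whose eigenvalues sum to one, so eigenvectors that best represent the desired class carry the least information about the clutter class.

      import numpy as np

      rng = np.random.default_rng(3)
      X1 = rng.normal(size=(500, 10)) @ np.diag(np.linspace(2.0, 0.2, 10))  # "desired" class
      X2 = rng.normal(size=(500, 10)) @ np.diag(np.linspace(0.2, 2.0, 10))  # "clutter" class

      S1 = np.cov(X1, rowvar=False)
      S2 = np.cov(X2, rowvar=False)

      # Whitening transform of the summed covariance S1 + S2.
      d, E = np.linalg.eigh(S1 + S2)
      P = E @ np.diag(d ** -0.5)

      # In the whitened space the two class covariances share eigenvectors, and
      # their eigenvalues sum to 1 (S1_w + S2_w = I).
      S1_w = P.T @ S1 @ P
      lam, V = np.linalg.eigh(S1_w)

      # Keep the few eigenvectors with eigenvalues closest to 1 (most "desired-class-like").
      k = 3
      basis = (P @ V)[:, np.argsort(lam)[::-1][:k]]
      features = X1 @ basis   # reduced-dimension representation
      print(features.shape)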

  11. Deflation of Eigenvalues for GMRES in Lattice QCD

    International Nuclear Information System (INIS)

    Morgan, Ronald B.; Wilcox, Walter

    2002-01-01

    Versions of GMRES with deflation of eigenvalues are applied to lattice QCD problems. Approximate eigenvectors corresponding to the smallest eigenvalues are generated at the same time that linear equations are solved. The eigenvectors improve convergence for the linear equations, and they help solve other right-hand sides
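
    The lattice-QCD implementation is considerably more involved; the toy sketch below only illustrates the underlying idea, assuming a symmetric positive definite matrix and a few known smallest eigenpairs that are removed ("deflated") with a spectral preconditioner before calling GMRES:

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      rng = np.random.default_rng(4)
      n = 200
      Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
      eigs = np.concatenate([[1e-4, 5e-4, 1e-3], rng.uniform(0.5, 2.0, n - 3)])
      A = (Q * eigs) @ Q.T                      # SPD matrix with three tiny eigenvalues
      b = rng.normal(size=n)

      # Approximate eigenvectors of the smallest eigenvalues (here taken exactly).
      V, lam = Q[:, :3], eigs[:3]

      # Spectral deflation preconditioner: M^{-1} = I + V (Lambda^{-1} - I) V^T,
      # which maps the deflated eigenvalues to ~1 and leaves the rest untouched.
      def apply_Minv(x):
          return x + V @ ((1.0 / lam - 1.0) * (V.T @ x))

      M = LinearOperator((n, n), matvec=apply_Minv)

      def solve(precond=None):
          count = {"it": 0}
          def cb(_):
              count["it"] += 1
          _, info = gmres(A, b, M=precond, restart=30, maxiter=2000, callback=cb)
          return count["it"], info

      print(solve())    # plain GMRES: many iterations on the ill-conditioned system
      print(solve(M))   # deflated GMRES: converges in far fewer iterations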

  12. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    Science.gov (United States)

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
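
    A small Monte Carlo sketch of the phenomenon (single-spike model, illustrative sizes): with the sample size held fixed, the angle between the leading sample eigenvector and its population counterpart grows as the dimension increases.

      import numpy as np

      def sample_angle(d: int, n: int, spike: float, rng) -> float:
          """Angle (degrees) between leading sample and population eigenvectors
          under a single-spike model Sigma = I + spike * e1 e1^T."""
          u = np.zeros(d); u[0] = 1.0                 # population eigenvector
          X = rng.normal(size=(n, d))
          X[:, 0] *= np.sqrt(1.0 + spike)             # inject the spike along e1
          # Leading sample eigenvector via the n x n Gram matrix (n << d).
          G = (X @ X.T) / n
          w = np.linalg.eigh(G)[1][:, -1]
          v = X.T @ w
          v /= np.linalg.norm(v)
          return np.degrees(np.arccos(min(abs(u @ v), 1.0)))

      rng = np.random.default_rng(5)
      for d in (100, 1000, 10000):
          angles = [sample_angle(d, n=20, spike=50.0, rng=rng) for _ in range(20)]
          print(d, round(float(np.mean(angles)), 1))  # mean angle increases with d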

  13. Supporting the use of theory in cross-country health services research: a participatory qualitative approach using Normalisation Process Theory as an example.

    Science.gov (United States)

    O'Donnell, Catherine A; Mair, Frances S; Dowrick, Christopher; Brún, Mary O'Reilly-de; Brún, Tomas de; Burns, Nicola; Lionis, Christos; Saridaki, Aristoula; Papadakaki, Maria; Muijsenbergh, Maria van den; Weel-Baumgarten, Evelyn van; Gravenhorst, Katja; Cooper, Lucy; Princz, Christine; Teunissen, Erik; Mareeuw, Francine van den Driessen; Vlahadi, Maria; Spiegel, Wolfgang; MacFarlane, Anne

    2017-08-21

    To describe and reflect on the process of designing and delivering a training programme supporting the use of theory, in this case Normalisation Process Theory (NPT), in a multisite cross-country health services research study. Participatory research approach using qualitative methods. Six European primary care settings involving research teams from Austria, England, Greece, Ireland, The Netherlands and Scotland. RESTORE research team consisting of 8 project applicants, all senior primary care academics, and 10 researchers. Professional backgrounds included general practitioners/family doctors, social/cultural anthropologists, sociologists and health services/primary care researchers. Views of all research team members (n=18) were assessed using qualitative evaluation methods, analysed qualitatively by the trainers after each session. Most of the team had no experience of using NPT and many had not applied theory to prospective, qualitative research projects. Early training proved didactic and overloaded participants with information. Drawing on RESTORE's methodological approach of Participatory Learning and Action, workshops using role play, experiential interactive exercises and light-hearted examples not directly related to the study subject matter were developed. Evaluation showed the study team quickly grew in knowledge and confidence in applying theory to fieldwork. Recommendations applicable to other studies include: accepting that theory application is not a linear process, that time is needed to address researcher concerns with the process, and that experiential, interactive learning is a key device in building conceptual and practical knowledge. An unanticipated benefit was the smooth transition to cross-country qualitative coding of study data. A structured programme of training enhanced and supported the prospective application of a theory, NPT, to our work but raised challenges. These were not unique to NPT but could arise with the application of any

  14. A robust multilevel simultaneous eigenvalue solver

    Science.gov (United States)

    Costiner, Sorin; Taasan, Shlomo

    1993-01-01

    Multilevel (ML) algorithms for eigenvalue problems are often faced with several types of difficulties such as: the mixing of approximated eigenvectors by the solution process, the approximation of incomplete clusters of eigenvectors, the poor representation of the solution on coarse levels, and the existence of close or equal eigenvalues. Algorithms that do not treat appropriately these difficulties usually fail, or their performance degrades when facing them. These issues motivated the development of a robust adaptive ML algorithm which treats these difficulties, for the calculation of a few eigenvectors and their corresponding eigenvalues. The main techniques used in the new algorithm include: the adaptive completion and separation of the relevant clusters on different levels, the simultaneous treatment of solutions within each cluster, and the robustness tests which monitor the algorithm's efficiency and convergence. The eigenvectors' separation efficiency is based on a new ML projection technique generalizing the Rayleigh-Ritz projection, combined with a technique, the backrotations. These separation techniques, when combined with an FMG formulation, in many cases lead to algorithms of O(qN) complexity, for q eigenvectors of size N on the finest level. Previously developed ML algorithms are less focused on the mentioned difficulties. Moreover, algorithms which employ fine level separation techniques are of O(q²N) complexity and usually do not overcome all these difficulties. Computational examples are presented where Schrodinger type eigenvalue problems in 2-D and 3-D, having equal and closely clustered eigenvalues, are solved with the efficiency of the Poisson multigrid solver. A second order approximation is obtained in O(qN) work, where the total computational work is equivalent to only a few fine level relaxations per eigenvector.
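
    The cluster-handling machinery is the contribution of the paper; the basic building block, a Rayleigh-Ritz projection applied simultaneously to a block of approximate eigenvectors, can be sketched as follows (dense toy problem, not a multilevel code):

      import numpy as np

      def rayleigh_ritz(A: np.ndarray, V: np.ndarray):
          """Project A onto span(V) and return the Ritz values and Ritz vectors."""
          Q, _ = np.linalg.qr(V)             # orthonormal basis of the subspace
          theta, S = np.linalg.eigh(Q.T @ A @ Q)
          return theta, Q @ S                # Ritz values, Ritz vectors

      # Toy symmetric problem with a cluster of three nearly equal small eigenvalues.
      rng = np.random.default_rng(6)
      n = 100
      Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
      eigs = np.concatenate([[1.0, 1.0001, 1.0002], rng.uniform(5.0, 50.0, n - 3)])
      A = (Q * eigs) @ Q.T

      # Simultaneous (block) inverse iteration, separating the cluster with Rayleigh-Ritz.
      V = rng.normal(size=(n, 3))
      for _ in range(20):
          V = np.linalg.solve(A, V)          # inverse iteration applied to the whole block
          theta, V = rayleigh_ritz(A, V)
      print(theta)                           # approx. [1.0, 1.0001, 1.0002]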

  15. Low complexity non-iterative coordinated beamforming in 2-user broadcast channels

    KAUST Repository

    Park, Kihong

    2010-10-01

    We propose a new non-iterative coordinated beamforming scheme to obtain full multiplexing gain in 2-user MIMO systems. In order to find the beamforming and combining matrices, we solve a generalized eigenvector problem and describe how to find generalized eigenvectors according to the Gaussian broadcast channels. Selected simulation results show that the proposed method yields the same sum-rate performance as the iterative coordinated beamforming method, while maintaining lower complexity by non-iterative computation of the beamforming and combining matrices. We also show that the proposed method can easily exploit selective gain by choosing the best combination of generalized eigenvectors. © 2006 IEEE.
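
    A minimal numerical sketch of the generalized-eigenvector step (synthetic channel matrices; this is not the paper's full beamforming construction): scipy.linalg.eig solves A v = lambda B v directly.

      import numpy as np
      from scipy.linalg import eig

      rng = np.random.default_rng(7)

      # Illustrative 4x4 complex "channel" matrices for a two-user toy setup.
      H1 = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
      H2 = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

      # Generalized eigenproblem (H1^H H1) v = lambda (H2^H H2) v: directions that
      # trade off the two users' effective channel gains.
      A = H1.conj().T @ H1
      B = H2.conj().T @ H2
      lam, V = eig(A, B)

      # Pick the generalized eigenvector with the largest |lambda| as a beam direction.
      v = V[:, np.argmax(np.abs(lam))]
      v /= np.linalg.norm(v)
      print(np.abs(lam).round(2))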

  16. Low complexity non-iterative coordinated beamforming in 2-user broadcast channels

    KAUST Repository

    Park, Kihong; Ko, Youngchai; Alouini, Mohamed-Slim

    2010-01-01

    We propose a new non-iterative coordinated beamforming scheme to obtain full multiplexing gain in 2-user MIMO systems. In order to find the beamforming and combining matrices, we solve a generalized eigenvector problem and describe how to find generalized eigenvectors according to the Gaussian broadcast channels. Selected simulation results show that the proposed method yields the same sum-rate performance as the iterative coordinated beamforming method, while maintaining lower complexity by non-iterative computation of the beamforming and combining matrices. We also show that the proposed method can easily exploit selective gain by choosing the best combination of generalized eigenvectors. © 2006 IEEE.

  17. Riesz basis for strongly continuous groups.

    NARCIS (Netherlands)

    Zwart, Heiko J.

    Given a Hilbert space and the generator of a strongly continuous group on this Hilbert space, if the eigenvalues of the generator have a uniform gap, and if the span of the corresponding eigenvectors is dense, then these eigenvectors form a Riesz basis (or unconditional basis) of the Hilbert space.

  18. Multiscale finite element methods for high-contrast problems using local spectral basis functions

    KAUST Repository

    Efendiev, Yalchin

    2011-02-01

    In this paper we study multiscale finite element methods (MsFEMs) using spectral multiscale basis functions that are designed for high-contrast problems. Multiscale basis functions are constructed using eigenvectors of a carefully selected local spectral problem. This local spectral problem strongly depends on the choice of initial partition of unity functions. The resulting space enriches the initial multiscale space using eigenvectors of the local spectral problem. The eigenvectors corresponding to small, asymptotically vanishing, eigenvalues detect important features of the solutions that are not captured by the initial multiscale basis functions. Multiscale basis functions are constructed such that they span the eigenfunctions that correspond to these small, asymptotically vanishing, eigenvalues. We present a convergence study showing that the convergence rate (in energy norm) is proportional to (H/Λ*)^(1/2), where Λ* is proportional to the smallest eigenvalue whose corresponding eigenvector is not included in the coarse space. Thus, we would like to reach a larger eigenvalue with a smaller coarse space. This is accomplished with a careful choice of initial multiscale basis functions and the setup of the eigenvalue problems. Numerical results are presented to back up our theoretical results and to show the higher accuracy of MsFEMs with spectral multiscale basis functions. We also present a hierarchical construction of the eigenvectors that provides CPU savings. © 2010.
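
    A much-reduced sketch of the local spectral construction (a 1D toy patch with illustrative weights; not the paper's exact formulation): eigenvectors of a local generalized eigenproblem with small eigenvalues are kept and multiplied by a partition of unity function to form the enriched coarse basis.

      import numpy as np
      from scipy.linalg import eigh

      # 1D local patch with a high-contrast coefficient kappa.
      m = 50                                        # interior nodes in the patch
      h = 1.0 / (m + 1)
      kappa = np.ones(m + 1)
      kappa[20:30] = 1e4                            # high-contrast inclusion

      # Local stiffness matrix A and a (lumped, illustrative) weighted mass matrix M.
      A = np.zeros((m, m))
      for i in range(m):
          A[i, i] = (kappa[i] + kappa[i + 1]) / h
          if i + 1 < m:
              A[i, i + 1] = A[i + 1, i] = -kappa[i + 1] / h
      M = np.diag(kappa[:m] * h)

      # Local spectral problem A v = lambda M v; in practice one keeps the eigenvectors
      # whose eigenvalues fall below a threshold Lambda*; here we simply keep four.
      lam, V = eigh(A, M)
      n_keep = 4
      chi = np.sin(np.pi * np.arange(1, m + 1) / (m + 1))   # partition of unity (illustrative)
      basis = chi[:, None] * V[:, :n_keep]                  # enriched coarse basis functions
      print(lam[:n_keep], basis.shape)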

  19. Perfect observables for the hierarchical non-linear O(N)-invariant σ-model

    International Nuclear Information System (INIS)

    Wieczerkowski, C.; Xylander, Y.

    1995-05-01

    We compute moving eigenvalues and the eigenvectors of the linear renormalization group transformation for observables along the renormalized trajectory of the hierarchical non-linear O(N)-invariant σ-model by means of perturbation theory in the running coupling constant. Moving eigenvectors are defined as solutions to a Callan-Symanzik type equation. (orig.)

  20. Symmetric normalisation for intuitionistic logic

    DEFF Research Database (Denmark)

    Guenot, Nicolas; Straßburger, Lutz

    2014-01-01

    We present two proof systems for implication-only intuitionistic logic in the calculus of structures. The first is a direct adaptation of the standard sequent calculus to the deep inference setting, and we describe a procedure for cut elimination, similar to the one from the sequent calculus …, but using a non-local rewriting. The second system is the symmetric completion of the first, as normally given in deep inference for logics with a De Morgan duality: all inference rules have duals, as cut is dual to the identity axiom. We prove a generalisation of cut elimination, that we call symmetric …

  1. Forced normalisation precipitated by lamotrigine.

    Science.gov (United States)

    Clemens, Béla

    2005-10-01

    To report two patients with lamotrigine-induced forced normalization (FN). Evaluation of the patient files, EEG, and video-EEG records, with special reference to the parallel clinical and EEG changes before, during, and after FN. This is the first documented report of lamotrigine-induced FN. The two epileptic patients (one of them was a 10-year-old girl) were successfully treated with lamotrigine. Their seizures ceased and interictal epileptiform events disappeared from the EEG record. Simultaneously, the patients displayed de novo occurrence of psychopathologic manifestations and disturbed behaviour. Reduction of the daily dose of LTG led to disappearance of the psychopathological symptoms and reappearance of the spikes but not the seizures. Lamotrigine may precipitate FN in adults and children. Analysis of the cases showed that lamotrigine-induced FN is a dose-dependent phenomenon and can be treated by reduction of the daily dose of the drug.

  2. A formative evaluation of the implementation of a medication safety data collection tool in English healthcare settings: A qualitative interview study using normalisation process theory.

    Science.gov (United States)

    Rostami, Paryaneh; Ashcroft, Darren M; Tully, Mary P

    2018-01-01

    Reducing medication-related harm is a global priority; however, impetus for improvement is impeded as routine medication safety data are seldom available. Therefore, the Medication Safety Thermometer was developed within England's National Health Service. This study aimed to explore the implementation of the tool into routine practice from users' perspectives. Fifteen semi-structured interviews were conducted with purposely sampled National Health Service staff from primary and secondary care settings. Interview data were analysed using an initial thematic analysis, and subsequent analysis using Normalisation Process Theory. Secondary care staff understood that the Medication Safety Thermometer's purpose was to measure medication safety and improvement. However, other uses were reported, such as pinpointing poor practice. Confusion about its purpose existed in primary care, despite further training, suggesting unsuitability of the tool. Decreased engagement was displayed by staff less involved with medication use, who displayed less ownership. Nonetheless, these advocates often lacked support from management and frontline levels, leading to an overall lack of engagement. Many participants reported efforts to drive scale-up of the use of the tool, for example, by securing funding, despite uncertainty around how to use data. Successful improvement was often at ward-level and went unrecognised within the wider organisation. There was mixed feedback regarding the value of the tool, often due to a perceived lack of "capacity". However, participants demonstrated interest in learning how to use their data and unexpected applications of data were reported. Routine medication safety data collection is complex, but achievable and facilitates improvements. However, collected data must be analysed, understood and used for further work to achieve improvement, which often does not happen. The national roll-out of the tool has accelerated shared learning; however, a number of

  3. A formative evaluation of the implementation of a medication safety data collection tool in English healthcare settings: A qualitative interview study using normalisation process theory.

    Directory of Open Access Journals (Sweden)

    Paryaneh Rostami

    Reducing medication-related harm is a global priority; however, impetus for improvement is impeded as routine medication safety data are seldom available. Therefore, the Medication Safety Thermometer was developed within England's National Health Service. This study aimed to explore the implementation of the tool into routine practice from users' perspectives. Fifteen semi-structured interviews were conducted with purposely sampled National Health Service staff from primary and secondary care settings. Interview data were analysed using an initial thematic analysis, and subsequent analysis using Normalisation Process Theory. Secondary care staff understood that the Medication Safety Thermometer's purpose was to measure medication safety and improvement. However, other uses were reported, such as pinpointing poor practice. Confusion about its purpose existed in primary care, despite further training, suggesting unsuitability of the tool. Decreased engagement was displayed by staff less involved with medication use, who displayed less ownership. Nonetheless, these advocates often lacked support from management and frontline levels, leading to an overall lack of engagement. Many participants reported efforts to drive scale-up of the use of the tool, for example, by securing funding, despite uncertainty around how to use data. Successful improvement was often at ward-level and went unrecognised within the wider organisation. There was mixed feedback regarding the value of the tool, often due to a perceived lack of "capacity". However, participants demonstrated interest in learning how to use their data and unexpected applications of data were reported. Routine medication safety data collection is complex, but achievable and facilitates improvements. However, collected data must be analysed, understood and used for further work to achieve improvement, which often does not happen. The national roll-out of the tool has accelerated shared learning; however

  4. Estimation and calibration of observation impact signals using the Lanczos method in NOAA/NCEP data assimilation system

    Directory of Open Access Journals (Sweden)

    M. Wei

    2012-09-01

    Despite the tremendous progress that has been made in data assimilation (DA) methodology, observing systems that reduce observation errors, and model improvements that reduce background errors, the analyses produced by the best available DA systems are still different from the truth. Analysis error and error covariance are important since they describe the accuracy of the analyses, and are directly related to the future forecast errors, i.e., the forecast quality. In addition, analysis error covariance is critically important in building an efficient ensemble forecast system (EFS).

    Estimating analysis error covariance in an ensemble-based Kalman filter DA is straightforward, but it is challenging in variational DA systems, which have been in operation at most NWP (Numerical Weather Prediction) centers. In this study, we use the Lanczos method in the NCEP (National Centers for Environmental Prediction) Gridpoint Statistical Interpolation (GSI) DA system to look into other important aspects and properties of this method that were not exploited before. We apply this method to estimate the observation impact signals (OIS), which are directly related to the analysis error variances. It is found that the smallest eigenvalue of the transformed Hessian matrix converges to one as the number of minimization iterations increases. When more observations are assimilated, the convergence becomes slower and more eigenvectors are needed to retrieve the observation impacts. It is also found that the OIS over data-rich regions can be represented by the eigenvectors with dominant eigenvalues.

    Since only a limited number of eigenvectors can be computed due to computational expense, the OIS is severely underestimated, and the analysis error variance is consequently overestimated. It is found that the mean OIS values for temperature and wind components at typical model levels are increased by about 1.5 times when the number of eigenvectors is doubled.
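
    The Lanczos-based partial eigendecomposition underlying this estimate can be sketched with a standard sparse eigensolver: only k eigenpairs are computed, so a quantity reconstructed from them grows towards its full value as k increases. The matrix, its size and the "recovered fraction" diagnostic below are illustrative assumptions, not the GSI system's actual Hessian.

        import numpy as np
        from scipy.sparse.linalg import eigsh

        rng = np.random.default_rng(1)
        n = 500
        X = rng.standard_normal((n, 60))
        H = np.eye(n) + X @ X.T / 60        # symmetric, eigenvalues >= 1, standing in for
                                            # a preconditioned "transformed Hessian"

        for k in (10, 20, 40):              # number of Lanczos eigenpairs retained
            vals, vecs = eigsh(H, k=k, which="LM")      # Lanczos-based partial eigensolver
            # Truncated-eigenvector reconstruction of (H - I); more eigenpairs recover
            # more of it, mirroring the underestimation discussed above.
            recovered = np.sum(vals - 1.0)
            print(k, round(recovered / np.trace(H - np.eye(n)), 3))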

  5. Homogenization of the critically spectral equation in neutron transport

    International Nuclear Information System (INIS)

    Allaire, G.; Paris-6 Univ., 75; Bal, G.

    1998-01-01

    We address the homogenization of an eigenvalue problem for the neutron transport equation in a periodic heterogeneous domain, modeling the criticality study of nuclear reactor cores. We prove that the neutron flux, corresponding to the first and unique positive eigenvector, can be factorized into the product of two terms, up to a remainder which goes strongly to zero with the period. One term is the first eigenvector of the transport equation in the periodicity cell. The other term is the first eigenvector of a diffusion equation in the homogenized domain. Furthermore, the corresponding eigenvalue gives a second order corrector for the eigenvalue of the heterogeneous transport problem. This result justifies and improves the engineering procedure used in practice for nuclear reactor core computations. (author)

  6. Homogenization of the critically spectral equation in neutron transport

    Energy Technology Data Exchange (ETDEWEB)

    Allaire, G. [CEA Saclay, 91 - Gif-sur-Yvette (France). Dept. de Mecanique et de Technologie; Paris-6 Univ., 75 (France). Lab. d'Analyse Numerique]; Bal, G. [Electricite de France (EDF), 92 - Clamart (France). Direction des Etudes et Recherches]

    1998-07-01

    We address the homogenization of an eigenvalue problem for the neutron transport equation in a periodic heterogeneous domain, modeling the criticality study of nuclear reactor cores. We prove that the neutron flux, corresponding to the first and unique positive eigenvector, can be factorized into the product of two terms, up to a remainder which goes strongly to zero with the period. One term is the first eigenvector of the transport equation in the periodicity cell. The other term is the first eigenvector of a diffusion equation in the homogenized domain. Furthermore, the corresponding eigenvalue gives a second order corrector for the eigenvalue of the heterogeneous transport problem. This result justifies and improves the engineering procedure used in practice for nuclear reactor core computations. (author)

  7. Technological Forum

    CERN Multimedia

    Thievent; Zürrer; Hekimi; Cortesy; Reymond; Lecomte

    1988-01-01

    Part 1: Mr Thievent of the Swiss standardisation association and Mr Alleyn, head of technical education at CERN, take the floor, followed by a discussion (questions not audible, whistling...). Part 2: Talk by Mr Zürrer, president of the European standardisation committee, followed by a discussion. Part 3: Working groups (round table) with three moderators: Mr Hekimi, secretary general of the European Computer Manufacturing Association, Mr Corthesy, head of the standardisation office in Lausanne, and Mr Reymond, head of the EBC Secheron standardisation office in Geneva, followed by a discussion.

  8. Estimation of genetic parameters for test day records of dairy traits in the first three lactations

    Directory of Open Access Journals (Sweden)

    Ducrocq Vincent

    2005-05-01

    Application of test-day models for the genetic evaluation of dairy populations requires the solution of large mixed model equations. The size of the (co)variance matrices required with such models can be reduced through the use of their first eigenvectors. Here, the first two eigenvectors of (co)variance matrices estimated for dairy traits in first lactation were used as covariables to jointly estimate genetic parameters of the first three lactations. These eigenvectors appear to be similar across traits and have a biological interpretation, one being related to the level of production and the other to persistency. Furthermore, they explain more than 95% of the total genetic variation. Variances and heritabilities obtained with this model were consistent with previous studies. High correlations were found among production levels in different lactations. Persistency measures were less correlated. Genetic correlations between second and third lactations were close to one, indicating that these can be considered as the same trait. Genetic correlations within lactation were high except between extreme parts of the lactation. This study shows that the use of eigenvectors can reduce the rank of (co)variance matrices for the test-day model and can provide consistent genetic parameters.
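
    A compact illustration of the reduced-rank idea in this record: eigen-decompose a (co)variance matrix, keep its first two eigenvectors as covariables, and check how much of the variation they carry. The 5x5 matrix below is a hypothetical AR(1)-type stand-in, not estimated dairy data.

        import numpy as np

        # Hypothetical (co)variance matrix across five points of the lactation.
        idx = np.arange(5)
        G = 4.0 * 0.9 ** np.abs(idx[:, None] - idx[None, :])

        lam, vec = np.linalg.eigh(G)              # ascending eigenvalues
        lam, vec = lam[::-1], vec[:, ::-1]        # sort descending
        explained = lam.cumsum() / lam.sum()
        print("share of variation carried by the first two eigenvectors:",
              round(float(explained[1]), 3))

        # Reduced-rank covariables: the two leading eigenvectors (interpretable
        # as "level" and "persistency" directions in the record's terminology).
        covariables = vec[:, :2]
        print(np.round(covariables, 3))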

  9. The Perron-Frobenius Theorem for Markov Semigroups

    OpenAIRE

    Hijab, Omar

    2014-01-01

    Let $P^V_t$, $t \ge 0$, be the Schrödinger semigroup associated to a potential $V$ and a Markov semigroup $P_t$, $t \ge 0$, on $C(X)$. Existence is established of a left eigenvector and right eigenvector corresponding to the spectral radius $e^{\lambda_0 t}$ of $P^V_t$, simultaneously for all $t \ge 0$. This is derived with no compactness assumption on the semigroup operators.

  10. Factors associated with failure to correct the international normalised ratio following fresh frozen plasma administration among patients treated for warfarin-related major bleeding. An analysis of electronic health records.

    Science.gov (United States)

    Menzin, J; White, L A; Friedman, M; Nichols, C; Menzin, J; Hoesche, J; Bergman, G E; Jones, C

    2012-04-01

    This study assessed the frequency and factors associated with failure to correct international normalised ratio (INR) in patients administered fresh frozen plasma (FFP) for warfarin-related major bleeding. This retrospective database analysis used electronic health records from an integrated health system. Patients who received FFP between 01/01/2004 and 01/31/2010, and who met the following criteria were selected: major haemorrhage diagnosis the day before to the day after initial FFP administration; INR ≥2 on the day before or the day of FFP and another INR result available; warfarin prescription within 90 days. INR correction (defined as INR ≤1.3) was evaluated at the last available test up to one day following FFP. A total of 414 patients met selection criteria (mean age 75 years, 53% male, mean Charlson score 2.5). Patients presented with gastrointestinal bleeding (58%), intracranial haemorrhage (38%) and other bleed types (4%). The INR of 67% of patients remained uncorrected at the last available test up to one day following receipt of FFP. In logistic regression analysis, the INRs of patients who were older, those with a Charlson score of 4 or greater, and those with non-ICH bleeds (odds ratio vs. intracranial bleeding 0.48; 95% confidence interval 0.31-0.76) were more likely to remain uncorrected within one day following FFP administration. Under an alternative definition of correction (INR ≤1.5), 39% of patients' INRs remained uncorrected. For a substantial proportion of patients, the INR remains inadequately corrected or uncorrected following FFP administration, with estimates varying depending on the INR threshold used.

  11. Comparison of the CoaguChek XS handheld coagulation analyzer and conventional laboratory methods measuring international normalised ratio (INR) values during the time to therapeutic range after mechanical valve surgery.

    Science.gov (United States)

    Bardakci, Hasmet; Altıntaş, Garip; Çiçek, Omer Faruk; Kervan, Umit; Yilmaz, Sevinc; Kaplan, Sadi; Birincioglu, Cemal Levent

    2013-05-01

    To compare the international normalised ratio (INR) value of patients evaluated using the CoaguChek XS versus conventional laboratory methods, in the period after open-heart surgery for mechanical valve replacement until a therapeutic range is achieved using vitamin K antagonists (VKA) together with low molecular weight heparin (LMWH). One hundred and five patients undergoing open-heart surgery for mechanical valve replacement were enrolled. Blood samples were collected from patients before surgery, and on the second and fifth postoperative days, simultaneously for both the point of care device and conventional laboratory techniques. Patients were administered VKA together with LMWH at therapeutic doses (enoxaparin 100 IU/kg twice daily) subcutaneously, until an effective range was achieved on approximately the fifth day after surgery. The mean INR values using the CoaguChek XS preoperatively and on the second and fifth days postoperatively were 1.20 (SD ± 0.09), 1.82 (SD ± 0.45), and 2.55 (SD ± 0.55), respectively. Corresponding results obtained using conventional laboratory techniques were 1.18 (SD ± 0.1), 1.81 (SD ± 0.43), and 2.51 (SD ± 0.58). The correlation coefficient was r = 0.77 preoperatively, r = 0.981 on postoperative day 2, and r = 0.983 on postoperative day 5. Results using the CoaguChek XS Handheld Coagulation Analyzer correlated strongly with conventional laboratory methods, in the bridging period between open-heart surgery for mechanical valve replacement and the achievement of a therapeutic range on warfarin and LMWH. © 2013 Wiley Periodicals, Inc.

  12. A Spectral Algorithm for Envelope Reduction of Sparse Matrices

    Science.gov (United States)

    Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.

    1993-01-01

    The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
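
    A minimal sketch of the spectral reordering step described above: build the Laplacian of the matrix's sparsity pattern, take the eigenvector of the second-smallest eigenvalue (the Fiedler vector), and sort its components. The scrambled tridiagonal test matrix is an illustrative assumption, and a dense eigensolver is used for brevity where the record targets large sparse problems.

        import numpy as np
        import scipy.sparse as sp

        def spectral_ordering(A):
            # Envelope-reducing reordering: sort vertices by the Fiedler vector of the
            # Laplacian built from the matrix's sparsity pattern.
            pattern = (np.abs(A.toarray()) > 0).astype(float)
            np.fill_diagonal(pattern, 0.0)
            L = np.diag(pattern.sum(axis=1)) - pattern       # graph Laplacian
            vals, vecs = np.linalg.eigh(L)
            fiedler = vecs[:, 1]                             # second-smallest eigenvalue
            return np.argsort(fiedler)

        # Hypothetical test case: a tridiagonal (banded) matrix scrambled by a random
        # permutation; the spectral ordering should roughly recover a small envelope.
        rng = np.random.default_rng(2)
        n = 30
        band = sp.diags([np.ones(n - 1), 2 * np.ones(n), np.ones(n - 1)], [-1, 0, 1]).tocsr()
        perm = rng.permutation(n)
        A = band[perm][:, perm]
        order = spectral_ordering(A)
        B = A.toarray()[np.ix_(order, order)]
        bandwidth = max(abs(i - j) for i, j in zip(*np.nonzero(B)))
        print("bandwidth after spectral reordering:", bandwidth)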

  13. Semi-supervised Eigenvectors for Locally-biased Learning

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Mahoney, Michael W.

    2012-01-01

    In many applications, one has side information, e.g., labels that are provided in a semi-supervised manner, about a specific target region of a large data set, and one wants to perform machine learning and data analysis tasks "nearby" that pre-specified target region. Locally-biased problems of t...

  14. Drawing Space: Mathematicians' Kinetic Conceptions of Eigenvectors

    Science.gov (United States)

    Sinclair, Nathalie; Gol Tabaghi, Shiva

    2010-01-01

    This paper explores how mathematicians build meaning through communicative activity involving talk, gesture and diagram. In the course of describing mathematical concepts, mathematicians use these semiotic resources in ways that blur the distinction between the mathematical and physical world. We shall argue that mathematical meaning of…

  15. Random matrix approach to cross correlations in financial data

    Science.gov (United States)

    Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene

    2002-06-01

    We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices C of returns constructed from (i) 30-min returns of 1000 US stocks for the 2-yr period 1994-1995, (ii) 30-min returns of 881 US stocks for the 2-yr period 1996-1997, and (iii) 1-day returns of 422 US stocks for the 35-yr period 1962-1996. We test the statistics of the eigenvalues λ_i of C against a "null hypothesis" - a random correlation matrix constructed from mutually uncorrelated time series. We find that a majority of the eigenvalues of C fall within the RMT bounds [λ−, λ+] for the eigenvalues of random correlation matrices. We test the eigenvalues of C within the RMT bound for universal properties of random matrices and find good agreement with the results for the Gaussian orthogonal ensemble of random matrices, implying a large degree of randomness in the measured cross-correlation coefficients. Further, we find that the distribution of eigenvector components for the eigenvectors corresponding to the eigenvalues outside the RMT bound display systematic deviations from the RMT prediction. In addition, we find that these "deviating eigenvectors" are stable in time. We analyze the components of the deviating eigenvectors and find that the largest eigenvalue corresponds to an influence common to all stocks. Our analysis of the remaining deviating eigenvectors shows distinct groups, whose identities correspond to conventionally identified business sectors. Finally, we discuss applications to the construction of portfolios of stocks that have a stable ratio of risk to return.
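
    The "null hypothesis" test described above can be sketched in a few lines: eigenvalues of the correlation matrix of mutually uncorrelated series are compared with the random-matrix bounds λ± = (1 ± √(N/T))². The sample sizes here are illustrative assumptions, far smaller than the databases used in the record.

        import numpy as np

        rng = np.random.default_rng(3)
        N, T = 400, 2000                       # stocks, time points (hypothetical sizes)
        returns = rng.standard_normal((T, N))  # mutually uncorrelated "returns"
        C = np.corrcoef(returns, rowvar=False)
        eigvals = np.linalg.eigvalsh(C)

        Q = T / N
        lam_minus = (1 - np.sqrt(1 / Q)) ** 2  # lower RMT bound for correlation matrices
        lam_plus = (1 + np.sqrt(1 / Q)) ** 2   # upper RMT bound
        outside = np.sum((eigvals < lam_minus) | (eigvals > lam_plus))
        print(f"bounds [{lam_minus:.3f}, {lam_plus:.3f}], eigenvalues outside: {outside}")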

  16. An additional bolus of rapid-acting insulin to normalise postprandial cardiovascular risk factors following a high-carbohydrate high-fat meal in patients with type 1 diabetes: A randomised controlled trial.

    Science.gov (United States)

    Campbell, Matthew D; Walker, Mark; Ajjan, Ramzi A; Birch, Karen M; Gonzalez, Javier T; West, Daniel J

    2017-07-01

    To evaluate an additional rapid-acting insulin bolus on postprandial lipaemia, inflammation and pro-coagulation following high-carbohydrate high-fat feeding in people with type 1 diabetes. A total of 10 males with type 1 diabetes [HbA1c 52.5 ± 5.9 mmol/mol (7.0% ± 0.5%)] underwent three conditions: (1) a low-fat (LF) meal with normal bolus insulin, (2) a high-fat (HF) meal with normal bolus insulin and (3) a high-fat meal with normal bolus insulin plus an additional 30% insulin bolus administered 3 h post-meal (HFA). Meals had identical carbohydrate and protein content and bolus insulin dose determined by carbohydrate-counting. Blood was sampled periodically for 6 h post-meal and analysed for triglyceride, non-esterified fatty acids, apolipoprotein B48, glucagon, tumour necrosis factor alpha, fibrinogen, human tissue factor activity and plasminogen activator inhibitor-1. Continuous glucose monitoring captured interstitial glucose responses. Triglyceride concentrations following LF remained similar to baseline, whereas triglyceride levels following HF were significantly greater throughout the 6-h observation period. The additional insulin bolus (HFA) normalised triglyceride similarly to low fat 3-6 h following the meal. HF was associated with late postprandial elevations in tumour necrosis factor alpha, whereas LF and HFA were not. Fibrinogen, plasminogen activator inhibitor-1 and tissue factor pathway levels were similar between conditions. Additional bolus insulin 3 h following a high-carbohydrate high-fat meal prevents late rises in postprandial triglycerides and tumour necrosis factor alpha, thus improving cardiovascular risk profile.

  17. Molecular Mechanics and Quantum Chemistry Based Study of Nickel-N-Allyl Urea and N-Allyl Thiourea Complexes

    Directory of Open Access Journals (Sweden)

    P. D. Sharma

    2009-01-01

    Eigenvalue, eigenvector and overlap matrices of the nickel halide complexes of N-allyl urea and N-allyl thiourea have been evaluated. Our results indicate that the ligand field parameters (Dq, B' and β) evaluated earlier from electronic spectra are very close to the values evaluated with the help of eigenvalues and eigenvectors. Eigenvector analysis and population analysis show that the 4s, 4p, 3d(x²-y²) and 3d(yz) orbitals of nickel are involved in bonding, but the coefficient values differ in different complexes. Out of 4px, 4py and 4pz, the involvement of either 4pz or 4py is noticeable. The theoretically evaluated positions of the infrared bands indicate that N-allyl urea is coordinated to nickel through its oxygen and N-allyl thiourea is coordinated to nickel through its sulphur, which is in conformity with the experimental results.

  18. Collective Correlations of Brodmann Areas fMRI Study with RMT-Denoising

    Science.gov (United States)

    Burda, Z.; Kornelsen, J.; Nowak, M. A.; Porebski, B.; Sboto-Frankenstein, U.; Tomanek, B.; Tyburczyk, J.

    We study the collective behavior of Brodmann regions of the human cerebral cortex using functional Magnetic Resonance Imaging (fMRI) and Random Matrix Theory (RMT). The raw fMRI data is mapped onto the cortex regions corresponding to the Brodmann areas with the aid of the Talairach coordinates. Principal Component Analysis (PCA) of the Pearson correlation matrix for 41 different Brodmann regions is carried out to determine their collective activity in the idle state and in the active state stimulated by tapping. The collective brain activity is identified through the statistical analysis of the eigenvectors corresponding to the largest eigenvalues of the Pearson correlation matrix. The leading eigenvectors have a large participation ratio. This indicates that several Brodmann regions collectively give rise to the brain activity associated with these eigenvectors. We apply random matrix theory to interpret the underlying multivariate data.
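
    A toy version of the analysis pipeline sketched above: a Pearson correlation matrix of region time series is diagonalized and the participation ratio of the leading eigenvector is used to gauge how collective the dominant mode is. The synthetic 41-region data with one common component is an illustrative assumption, not fMRI data.

        import numpy as np

        def participation_ratio(v):
            # How many components of an eigenvector contribute appreciably
            # (large values indicate a "collective" mode).
            p = v ** 2 / np.sum(v ** 2)
            return 1.0 / np.sum(p ** 2)

        rng = np.random.default_rng(4)
        regions, samples = 41, 600
        common = rng.standard_normal(samples)                 # shared "activity" component
        data = 0.6 * common[None, :] + rng.standard_normal((regions, samples))

        C = np.corrcoef(data)                                 # Pearson correlation matrix
        vals, vecs = np.linalg.eigh(C)
        leading = vecs[:, -1]                                 # eigenvector of the largest eigenvalue
        print("largest eigenvalue:", round(float(vals[-1]), 2))
        print("participation ratio of leading eigenvector:",
              round(participation_ratio(leading), 1), "of", regions, "regions")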

  19. Analysis of experimental data: The average shape of extreme wave forces on monopile foundations and the NewForce model

    DEFF Research Database (Denmark)

    Schløer, Signe; Bredmose, Henrik; Ghadirian, Amin

    2017-01-01

    Experiments with a stiff pile subjected to extreme wave forces typical of offshore wind farm storm conditions are considered. The exceedance probability curves of the nondimensional force peaks and crest heights are analysed. The average force time histories, normalised with their peak values, are compared across the sea states. It is found that the force shapes show a clear similarity when grouped after the values of the normalised peak force F/(ρghR²) and normalised depth h/(gT_p²), and presented in a normalised time scale t/T_a. For the largest force events, slamming can be seen as a distinct 'hat' … For more nonlinear wave shapes, higher-order terms have to be considered in order for the NewForce model to be able to predict the expected shapes.

  20. A Quantum Implementation Model for Artificial Neural Networks

    OpenAIRE

    Ammar Daskin

    2018-01-01

    The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow–Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas result in terms formed by the principal components of the weight matrix, namely, the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the pha...

  1. Maternal supplementation with conjugated linoleic acid in the setting of diet-induced obesity normalises the inflammatory phenotype in mothers and reverses metabolic dysfunction and impaired insulin sensitivity in offspring.

    Science.gov (United States)

    Segovia, Stephanie A; Vickers, Mark H; Zhang, Xiaoyuan D; Gray, Clint; Reynolds, Clare M

    2015-12-01

    Maternal consumption of a high-fat diet significantly impacts the fetal environment and predisposes offspring to obesity and metabolic dysfunction during adulthood. We examined the effects of a high-fat diet during pregnancy and lactation on metabolic and inflammatory profiles and whether maternal supplementation with the anti-inflammatory lipid conjugated linoleic acid (CLA) could have beneficial effects on mothers and offspring. Sprague-Dawley rats were fed a control (CD; 10% kcal from fat), CLA (CLA; 10% kcal from fat, 1% total fat as CLA), high-fat (HF; 45% kcal from fat) or high fat with CLA (HFCLA; 45% kcal from fat, 1% total fat as CLA) diet ad libitum 10 days prior to and throughout gestation and lactation. Dams and offspring were culled at either late gestation (fetal day 20, F20) or early postweaning (postnatal day 24, P24). CLA, HF and HFCLA dams were heavier than CD throughout gestation. Plasma concentrations of proinflammatory cytokines interleukin-1β and tumour necrosis factor-α were elevated in HF dams, with restoration in HFCLA dams. Male and female fetuses from HF dams were smaller at F20 but displayed catch-up growth and impaired insulin sensitivity at P24, which was reversed in HFCLA offspring. HFCLA dams at P24 were protected from impaired insulin sensitivity as compared to HF dams. Maternal CLA supplementation normalised inflammation associated with consumption of a high-fat diet and reversed associated programming of metabolic dysfunction in offspring. This demonstrates that there are critical windows of developmental plasticity in which the effects of an adverse early-life environment can be reversed by maternal dietary interventions. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. β-glucuronidase use as a single internal control gene may confound analysis in FMR1 mRNA toxicity studies.

    Science.gov (United States)

    Kraan, Claudine M; Cornish, Kim M; Bui, Quang M; Li, Xin; Slater, Howard R; Godler, David E

    2018-01-01

    Relationships between Fragile X Mental Retardation 1 (FMR1) mRNA levels in blood and intragenic FMR1 CGG triplet expansions support the pathogenic role of RNA gain of function toxicity in premutation (PM: 55-199 CGGs) related disorders. Real-time PCR (RT-PCR) studies reporting these findings normalised FMR1 mRNA level to a single internal control gene called β-glucuronidase (GUS). This study evaluated FMR1 mRNA-CGG correlations in 33 PM and 33 age- and IQ-matched control females using three normalisation strategies in peripheral blood mononuclear cells (PBMCs): (i) GUS as a single internal control; (ii) the mean of GUS, Eukaryotic Translation Initiation Factor 4A2 (EIF4A2) and succinate dehydrogenase complex flavoprotein subunit A (SDHA); and (iii) the mean of EIF4A2 and SDHA (with no contribution from GUS). GUS mRNA levels normalised to the mean of EIF4A2 and SDHA mRNA levels and EIF4A2/SDHA ratio were also evaluated. FMR1 mRNA level normalised to the mean of EIF4A2 and SDHA mRNA levels, with no contribution from GUS, showed the most significant correlation with CGG size and the greatest difference between PM and control groups (p = 10⁻¹¹). Only 15% of FMR1 mRNA PM results exceeded the maximum control value when normalised to GUS, compared with over 42% when normalised to the mean of EIF4A2 and SDHA mRNA levels. Neither GUS mRNA level normalised to the mean RNA levels of EIF4A2 and SDHA, nor to the EIF4A2/SDHA ratio were correlated with CGG size. However, greater variability in GUS mRNA levels was observed for both PM and control females across the full range of CGG repeat as compared to the EIF4A2/SDHA ratio. In conclusion, normalisation with multiple control genes, excluding GUS, can improve assessment of the biological significance of FMR1 mRNA-CGG size relationships.

  3. Optimisation of hardness and tensile strength of friction stir welded ...

    African Journals Online (AJOL)

    DR OKE

    adopted to develop a mathematical model between the response and process parameters. ... Table 3: Normalised values and deviational sequence ... If the expectancy is smaller-the-better, then the original sequence should be normalised ...

  4. Alternative psychosis (forced normalisation) in epilepsy

    African Journals Online (AJOL)

    changed, this should always be considered as a potential cause of a new or ... psychosis with thought disorder, delusions, hallucinations. • significant .... On mental status examination, the patient's behaviour was .... appeared for the first time.

  5. Alternative psychosis (forced normalisation in epilepsy

    Directory of Open Access Journals (Sweden)

    Vongani Titi Raymond Ntsanwisi

    2011-06-01

    Forced normalization is a paradoxical relationship between seizure activity and behavioural problems. A 20-year-old male with recurrent refractory tonic-clonic epilepsy experienced forced normalization whilst on medication with multiple anti-epileptic drugs (AEDs): valproate sodium, carbamazepine and topiramate. A reduction in the seizure burden correlated with sudden behavioural changes manifesting with aggressive outbursts and violence. The present case may help clarify the mechanism of forced normalization whilst providing some helpful hints regarding the diagnosis and treatment of symptoms observed in recurrent refractory seizures.

  6. Hints on the Broad Line Region Structure of Quasars at High and Low Luminosities

    Directory of Open Access Journals (Sweden)

    Marziani Paola

    2011-09-01

    Quasars show a considerable spectroscopic diversity. However, the variety of quasar spectra at low redshifts is non-random: a principal component analysis applied to large samples customarily identifies two main eigenvectors. In this contribution we show that the range of quasar optical spectral properties observed at low-z and associated with the first eigenvector is preserved up to z ≈ 2 in a sample of high luminosity quasars. We also describe two major luminosity effects.

  7. Mathematical methods linear algebra normed spaces distributions integration

    CERN Document Server

    Korevaar, Jacob

    1968-01-01

    Mathematical Methods, Volume I: Linear Algebra, Normed Spaces, Distributions, Integration focuses on advanced mathematical tools used in applications and the basic concepts of algebra, normed spaces, integration, and distributions.The publication first offers information on algebraic theory of vector spaces and introduction to functional analysis. Discussions focus on linear transformations and functionals, rectangular matrices, systems of linear equations, eigenvalue problems, use of eigenvectors and generalized eigenvectors in the representation of linear operators, metric and normed vector

  8. Quantum damped oscillator I: Dissipation and resonances

    International Nuclear Information System (INIS)

    Chruscinski, Dariusz; Jurkowski, Jacek

    2006-01-01

    Quantization of a damped harmonic oscillator leads to so called Bateman's dual system. The corresponding Bateman's Hamiltonian, being a self-adjoint operator, displays the discrete family of complex eigenvalues. We show that they correspond to the poles of energy eigenvectors and the corresponding resolvent operator when continued to the complex energy plane. Therefore, the corresponding generalized eigenvectors may be interpreted as resonant states which are responsible for the irreversible quantum dynamics of a damped harmonic oscillator

  9. A Quantum Implementation Model for Artificial Neural Networks

    OpenAIRE

    Daskin, Ammar

    2016-01-01

    The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow-Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas result in terms formed by the principal components of the weight matrix: i.e., the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase...

  10. Algebraic structure of general electromagnetic fields and energy flow

    International Nuclear Information System (INIS)

    Hacyan, Shahen

    2011-01-01

    Highlights: Algebraic structure of general electromagnetic fields in stationary spacetime; eigenvalues and eigenvectors of the electromagnetic field tensor; energy-momentum in terms of eigenvectors and Killing vector; explicit form of the reference frame with vanishing Poynting vector; application of the formalism to Bessel beams. Abstract: The algebraic structures of a general electromagnetic field and its energy-momentum tensor in a stationary space-time are analyzed. The explicit form of the reference frame in which the energy of the field appears at rest is obtained in terms of the eigenvectors of the electromagnetic tensor and the existing Killing vector. The case of a stationary electromagnetic field is also studied and a comparison is made with the standard short-wave approximation. The results can be applied to the general case of structured light beams, in flat or curved spaces. Bessel beams are worked out as an example.

  11. Seismic network based detection, classification and location of volcanic tremors

    Science.gov (United States)

    Nikolai, S.; Soubestre, J.; Seydoux, L.; de Rosny, J.; Droznin, D.; Droznina, S.; Senyukov, S.; Gordeev, E.

    2017-12-01

    Volcanic tremors constitute an important attribute of volcanic unrest in many volcanoes, and their detection and characterization is a challenging issue of volcano monitoring. The main goal of the present work is to develop a network-based method to automatically classify volcanic tremors, to locate their sources and to estimate the associated wave speed. The method is applied to four and a half years of seismic data continuously recorded by 19 permanent seismic stations in the vicinity of the Klyuchevskoy volcanic group (KVG) in Kamchatka (Russia), where five volcanoes were erupting during the considered time period. The method is based on the analysis of eigenvalues and eigenvectors of the daily array covariance matrix. As a first step, following Seydoux et al. (2016), most coherent signals corresponding to dominating tremor sources are detected based on the width of the covariance matrix eigenvalues distribution. With this approach, the volcanic tremors of the two volcanoes known as most active during the considered period, Klyuchevskoy and Tolbachik, are efficiently detected. As a next step, we consider the array covariance matrix's first eigenvectors computed every day. The main hypothesis of our analysis is that these eigenvectors represent the principal component of the daily seismic wavefield and, for days with tremor activity, characterize the dominant tremor sources. Those first eigenvectors can therefore be used as network-based fingerprints of tremor sources. A clustering process is developed to analyze this collection of first eigenvectors, using correlation coefficient as a measure of their similarity. Then, we locate tremor sources based on cross-correlations amplitudes. We characterize seven tremor sources associated with different periods of activity of four volcanoes: Tolbachik, Klyuchevskoy, Shiveluch, and Kizimen. The developed method does not require a priori knowledge, is fully automatic and the database of network-based tremor fingerprints
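
    The "first eigenvector as tremor fingerprint" step can be sketched as follows: for each day, take the leading eigenvector of the array covariance matrix and compare fingerprints across days through their correlation coefficients. The 19-channel synthetic data, the mixing vectors and the noise level below are illustrative assumptions, not Kamchatka recordings.

        import numpy as np

        def first_eigenvector(day_records):
            # channels x samples -> unit-norm fingerprint of the dominant source.
            cov = np.cov(day_records)
            vals, vecs = np.linalg.eigh(cov)
            return vecs[:, -1]

        rng = np.random.default_rng(5)
        stations, samples = 19, 2000
        mix_a = rng.standard_normal(stations)      # "source A" footprint on the network
        mix_b = rng.standard_normal(stations)      # "source B" footprint on the network
        days = []
        for mix in (mix_a, mix_a, mix_b, mix_b, mix_a):
            src = rng.standard_normal(samples)
            days.append(np.outer(mix, src) + 0.3 * rng.standard_normal((stations, samples)))

        fingerprints = np.array([first_eigenvector(d) for d in days])
        # Similarity matrix used for clustering: absolute correlation between the
        # daily fingerprints (the sign of an eigenvector is arbitrary).
        similarity = np.abs(np.corrcoef(fingerprints))
        print(np.round(similarity, 2))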

  12. Influence of N-butylscopolamine on SUV in FDG PET of the bowel

    International Nuclear Information System (INIS)

    Sanghera, B.; Emmott, J.; Chambers, J.; Wong, W.L.; Wellsted, D.

    2009-01-01

    Peristalsis can lead to confusing fluorodeoxyglucose (FDG) positron emission tomography (PET) bowel uptake artefacts and potential for recording inaccurate mean standardised uptake value (SUV) measurements in PET-CT scans. Accordingly, we investigate the influence of different SUV normalisations on FDG PET uptake of the bowel and assess which one(s) have least dependence on body size factors in patients with and without the introduction of the anti-peristalsis agent N-butylscopolamine (Buscopan). This study consisted of 92 prospective oncology patients, each having a whole body 18F-FDG PET scan. Correlations were investigated between height, weight, glucose, body mass index (bmi), lean body mass (lbm) and body surface area (bsa) with maximum and mean SUV recorded for bowel normalised to weight (SUVw), lbm (SUVlbm), bsa (SUVbsa) and blood glucose corrected versions (SUVwg, SUVlbmg, SUVbsag). Standardised uptake value normalisations were significantly different between control and Buscopan groups, with less variability experienced within individual SUV normalisations by the administration of Buscopan. Mean SUV normalisations accounted for 80% of correlations in the control group and 100% in the Buscopan group. Further, >86% of all correlations across both groups were dominated by mean SUV normalisations, of which about 69% were accounted for by SUVbsa and SUVbsag. We recommend avoiding mean SUVbsa and individual glucose normalisations, especially mean SUVbsag, as these dominated albeit relatively weak correlations with body size factors in control and Buscopan groups. Mean and maximum SUVw and SUVlbm were shown to be independent of any body size parameters investigated in both groups and therefore considered suitable for monitoring FDG PET uptake in the normal bowel for our patient cohort. (author)

  13. J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV

    OpenAIRE

    Adamová, D.; Aggarwal, Madan Mohan; Alam, Sk Noor; Biswas, Rathijit; Zardoshti, Nima; Zarochentsev, Andrey; Zavada, Petr; Zavyalov, Nikolay; Zbroszczyk, Hanna Paulina; Zhalov, Mikhail; Zhang, Haitao; Zhang, Xiaoming; Zhang, Yonghong; Chunhui, Zhang; Biswas, Saikat

    2018-01-01

    We report measurements of the inclusive J/ψ yield and average transverse momentum as a function of charged-particle pseudorapidity density dNch/dη in p–Pb collisions at √sNN = 5.02 TeV with ALICE at the LHC. The observables are normalised to their corresponding averages in non-single diffractive events. An increase of the normalised J/ψ yield with normalised dNch/dη, measured at mid-rapidity, is observed at mid-rapidity and backward rapidity. At forward rapidity, a saturation of the relative y...

  14. Fresh frozen plasma versus prothrombin complex concentrate in patients with intracranial haemorrhage related to vitamin K antagonists (INCH)

    DEFF Research Database (Denmark)

    Steiner, Thorsten; Poli, Sven; Griebe, Martin

    2016-01-01

    BACKGROUND: Haematoma expansion is a major cause of mortality in intracranial haemorrhage related to vitamin K antagonists (VKA-ICH). Normalisation of the international normalised ratio (INR) is recommended, but optimum haemostatic management is controversial. We assessed the safety and efficacy ...

  15. Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares

    Science.gov (United States)

    Orr, Jeb S.

    2012-01-01

    A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.

  16. Inverse Problem for Two-Dimensional Discrete Schrödinger Equation

    CERN Document Server

    Serdyukova, S I

    2000-01-01

    For the two-dimensional discrete Schrödinger equation the boundary-value problem in a rectangle M times N with zero boundary conditions is solved. It is stated in this work that the inverse problem reduces to the reconstruction of a symmetric five-diagonal matrix C with given spectrum and given first k(M,N), 1 ≤ k, eigenvectors. The C matrix has a lacuna between the second and (N+1)-th diagonals. As a result the first N components of the basic eigenvectors must satisfy (N-1)^2 (M-1) additional conditions and N conditions of compatibility. The elements of C together with the "lacking" (N-k) components can be determined by solving the system of the additional conditions, the compatibility conditions and the orthonormality conditions, coupled with relations determining the elements of C by the eigenvalues and components of the basic eigenvectors. We succeeded in clarifying the statement of the problem completely in the process of concrete calculations. Deriving and solving the huge polynomial systems had been perfor...

  17. Functional brain connectivity is predictable from anatomic network's Laplacian eigen-structure.

    Science.gov (United States)

    Abdelnour, Farras; Dayan, Michael; Devinsky, Orrin; Thesen, Thomas; Raj, Ashish

    2018-05-15

    How structural connectivity (SC) gives rise to functional connectivity (FC) is not fully understood. Here we mathematically derive a simple relationship between SC measured from diffusion tensor imaging, and FC from resting state fMRI. We establish that SC and FC are related via (structural) Laplacian spectra, whereby FC and SC share eigenvectors and their eigenvalues are exponentially related. This gives, for the first time, a simple and analytical relationship between the graph spectra of structural and functional networks. Laplacian eigenvectors are shown to be good predictors of functional eigenvectors and networks based on independent component analysis of functional time series. A small number of Laplacian eigenmodes are shown to be sufficient to reconstruct FC matrices, serving as basis functions. This approach is fast, and requires no time-consuming simulations. It was tested on two empirical SC/FC datasets, and was found to significantly outperform generative model simulations of coupled neural masses. Copyright © 2018. Published by Elsevier Inc.
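
    A minimal sketch of the eigen-relationship described above: keep the eigenvectors of the structural Laplacian and map its eigenvalues through a decaying exponential to obtain a functional-connectivity-like matrix. The small random weighted graph and the decay parameter beta are illustrative assumptions; the record's exact analytic relation is not reproduced here.

        import numpy as np

        def predict_fc(structural, beta=1.0):
            # Shared eigenvectors, exponentially mapped eigenvalues: the core idea
            # of predicting FC from the structural Laplacian's eigen-structure.
            degree = structural.sum(axis=1)
            L = np.diag(degree) - structural                  # structural Laplacian
            lam, U = np.linalg.eigh(L)
            return U @ np.diag(np.exp(-beta * lam)) @ U.T

        # Hypothetical small structural network (symmetric, non-negative weights).
        rng = np.random.default_rng(6)
        n = 8
        W = rng.random((n, n)); W = np.triu(W, 1); W = W + W.T
        fc_pred = predict_fc(W, beta=0.5)
        print(np.round(fc_pred, 2))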

  18. On the convex closed set-valued operators in Banach spaces and their applications in control problems

    International Nuclear Information System (INIS)

    Vu Ngoc Phat; Jong Yeoul Park

    1995-10-01

    The paper studies a class of set-valued operators with emphasis on properties of their adjoints and the existence of eigenvalues and eigenvectors of infinite-dimensional convex closed set-valued operators. Sufficient conditions for the existence of eigenvalues and eigenvectors of set-valued convex closed operators are derived. These conditions specify possible features of control problems. The results are applied to some constrained control problems of infinite-dimensional systems described by discrete-time inclusions whose right-hand sides are convex closed set-valued functions. (author). 8 refs

  19. The Rabi Oscillation in Subdynamic System for Quantum Computing

    Directory of Open Access Journals (Sweden)

    Bi Qiao

    2015-01-01

    A quantum computation for the Rabi oscillation based on quantum dots in the subdynamic system is presented. The working states of the original Rabi oscillation are transformed to the eigenvectors of the subdynamic system. Then the dissipation and decoherence of the system are only shown in the change of the eigenvalues, as phase errors, since the eigenvectors are fixed. This makes controlling both dissipation and decoherence easier, as only the relevant phase errors need to be corrected. This method can be extended to general quantum computation systems.

  20. A Note on the Eigensystem of the Covariance Matrix of Dichotomous Guttman Items.

    Science.gov (United States)

    Davis-Stober, Clintin P; Doignon, Jean-Paul; Suck, Reinhard

    2015-01-01

    We consider the covariance matrix for dichotomous Guttman items under a set of uniformity conditions, and obtain closed-form expressions for the eigenvalues and eigenvectors of the matrix. In particular, we describe the eigenvalues and eigenvectors of the matrix in terms of trigonometric functions of the number of items. Our results parallel those of Zwick (1987) for the correlation matrix under the same uniformity conditions. We provide an explanation for certain properties of principal components under Guttman scalability which were first reported by Guttman (1950).

  1. Noise Reduction in the Time Domain using Joint Diagonalization

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Benesty, Jacob; Jensen, Jesper Rindom

    2014-01-01

    … an estimate of the desired signal is found by subtraction of the noise estimate from the observed signal. The filter can be designed to obtain a desired trade-off between noise reduction and signal distortion, depending on the number of eigenvectors included in the filter design. This is explored through simulations using a speech signal corrupted by car noise, and the results confirm that the output signal-to-noise ratio and speech distortion index both increase when more eigenvectors are included in the filter design.
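
    A rough sketch of the joint-diagonalization idea in this record: the noisy-signal and noise covariance matrices are jointly diagonalized by a generalized eigendecomposition, a noise estimate built from a chosen number of eigenvectors is subtracted, and residual noise and signal distortion move in opposite directions as that number grows. The toy signals, frame length and filter construction are illustrative assumptions, not the record's actual design.

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(7)
        L, T = 16, 4000
        t = np.arange(T)
        clean = np.sin(0.2 * t) + 0.5 * np.sin(0.07 * t + 1.0)   # toy "desired" signal
        noise = 0.8 * rng.standard_normal(T)
        noisy = clean + noise

        def frame_cov(x, L):
            frames = np.lib.stride_tricks.sliding_window_view(x, L)
            return frames.T @ frames / frames.shape[0]

        R_x, R_v = frame_cov(noisy, L), frame_cov(noise, L)
        R_s = R_x - R_v                                  # desired-signal covariance estimate
        lam, B = eigh(R_x, R_v)                          # ascending; B.T @ R_v @ B = I

        for q in (2, 6, 12):                             # eigenvectors used for the noise estimate
            K = np.diag((np.arange(L) < q).astype(float))         # q most noise-dominated coords
            noise_estimator = np.linalg.inv(B.T) @ K @ B.T
            G = np.eye(L) - noise_estimator              # subtract the noise estimate
            residual_noise = np.trace(G @ R_v @ G.T)
            distortion = np.trace((G - np.eye(L)) @ R_s @ (G - np.eye(L)).T)
            print(f"q={q:2d}  residual noise {residual_noise:7.3f}  signal distortion {distortion:7.3f}")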

  2. Low-lying eigenmodes of the Wilson-Dirac operator and correlations with topological objects

    International Nuclear Information System (INIS)

    Kusterer, Daniel-Jens; Hedditch, John; Kamleh, Waseem; Leinweber, D.B.; Williams, Anthony G.

    2002-01-01

    The probability density of low-lying eigenvectors of the hermitian Wilson-Dirac operator H(κ) = γ_5 D_W(κ) is examined. Comparisons in position and size between eigenvectors, topological charge and action density are made. We do this for standard Monte-Carlo generated SU(3) background fields and for single instanton background fields. Both hot and cooled SU(3) background fields are considered. An instanton model is fitted to eigenmodes and topological charge density and the sizes and positions of these are compared.

  3. On the discrete Frobenius-Perron operator of the Bernoulli map

    International Nuclear Information System (INIS)

    Bai Zaiqiao

    2006-01-01

    We study the spectra of a finite-dimensional Frobenius-Perron operator (matrix) of the Bernoulli map derived from phase space discretization. The eigenvalues and (right and left) eigenvectors are analytically calculated, which are closely related to periodic orbits on the partition points. In the degenerate case, Jordan decomposition of the matrix is explicitly constructed. Except for the isolated eigenvalue 1, there is no definite limit with respect to eigenvalues when n → ∞. The behaviour of the eigenvectors is discussed in the limit of large n
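
    The discretized Frobenius-Perron operator of the Bernoulli (doubling) map can be written down directly with an Ulam-type partition: each cell spreads its mass equally over the two cells covered by its image. The partition sizes below are illustrative; the isolated eigenvalue 1 is always present, while the rest of the spectrum varies with n, in line with the record's observation.

        import numpy as np

        def ulam_matrix(n):
            # Discretized Frobenius-Perron (Ulam) matrix of T(x) = 2x mod 1 on n equal
            # subintervals (n even): cell j sends half its mass to each image cell.
            P = np.zeros((n, n))
            for j in range(n):
                P[(2 * j) % n, j] += 0.5
                P[(2 * j + 1) % n, j] += 0.5
            return P

        for n in (8, 16, 32):
            P = ulam_matrix(n)
            vals = np.linalg.eigvals(P)
            vals = vals[np.argsort(-np.abs(vals))]
            print(f"n={n:3d}  leading eigenvalues:", np.round(vals[:4], 3))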

  4. A comparison of working in small-scale and large-scale nursing homes: A systematic review of quantitative and qualitative evidence.

    Science.gov (United States)

    Vermeerbergen, Lander; Van Hootegem, Geert; Benders, Jos

    2017-02-01

    Ongoing shortages of care workers, together with an ageing population, make it of utmost importance to increase the quality of working life in nursing homes. Since the 1970s, normalised and small-scale nursing homes have been increasingly introduced to provide care in a family and homelike environment, potentially providing a richer work life for care workers as well as improved living conditions for residents. 'Normalised' refers to the opportunities given to residents to live in a manner as close as possible to the everyday life of persons not needing care. The study purpose is to provide a synthesis and overview of empirical research comparing the quality of working life - together with related work and health outcomes - of professional care workers in normalised small-scale nursing homes as compared to conventional large-scale ones. A systematic review of qualitative and quantitative studies. A systematic literature search (April 2015) was performed using the electronic databases Pubmed, Embase, PsycInfo, CINAHL and Web of Science. References and citations were tracked to identify additional, relevant studies. We identified 825 studies in the selected databases. After checking the inclusion and exclusion criteria, nine studies were selected for review. Two additional studies were selected after reference and citation tracking. Three studies were excluded after requesting more information on the research setting. The findings from the individual studies suggest that levels of job control and job demands (all but "time pressure") are higher in normalised small-scale homes than in conventional large-scale nursing homes. Additionally, some studies suggested that social support and work motivation are higher, while risks of burnout and mental strain are lower, in normalised small-scale nursing homes. Other studies found no differences or even opposing findings. The studies reviewed showed that these inconclusive findings can be attributed to care workers in some

  5. J/ψ production as a function of charged-particle pseudorapidity density in p–Pb collisions at √sNN = 5.02 TeV

    Directory of Open Access Journals (Sweden)

    D. Adamová

    2018-01-01

    We report measurements of the inclusive J/ψ yield and average transverse momentum as a function of charged-particle pseudorapidity density dNch/dη in p–Pb collisions at √sNN = 5.02 TeV with ALICE at the LHC. The observables are normalised to their corresponding averages in non-single diffractive events. An increase of the normalised J/ψ yield with normalised dNch/dη, measured at mid-rapidity, is observed at mid-rapidity and backward rapidity. At forward rapidity, a saturation of the relative yield is observed for high charged-particle multiplicities. The normalised average transverse momentum at forward and backward rapidities increases with multiplicity at low multiplicities and saturates beyond moderate multiplicities. In addition, the forward-to-backward nuclear modification factor ratio is also reported, showing an increasing suppression of J/ψ production at forward rapidity with respect to backward rapidity for increasing charged-particle multiplicity.

  6. J/ψ production as a function of charged-particle pseudorapidity density in p-Pb collisions at √sNN = 5.02 TeV

    Science.gov (United States)

    Adamová, D.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmad, N.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Alam, S. N.; Albuquerque, D. S. D.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Alme, J.; Alt, T.; Altsybeev, I.; Alves Garcia Prado, C.; An, M.; Andrei, C.; Andrews, H. A.; Andronic, A.; Anguelov, V.; Anson, C.; Antičić, T.; Antinori, F.; Antonioli, P.; Anwar, R.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Baldisseri, A.; Ball, M.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barioglio, L.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Batista Camejo, A.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Beltran, L. G. E.; Belyaev, V.; Bencedi, G.; Beole, S.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biro, G.; Biswas, R.; Biswas, S.; Blair, J. T.; Blau, D.; Blume, C.; Boca, G.; Bock, F.; Bogdanov, A.; Boldizsár, L.; Bombara, M.; Bonomi, G.; Bonora, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Botta, E.; Bourjau, C.; Braun-Munzinger, P.; Bregant, M.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buhler, P.; Buitron, S. A. I.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Cabala, J.; Caffarri, D.; Caines, H.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Capon, A. A.; Carena, F.; Carena, W.; Carnesecchi, F.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Ceballos Sanchez, C.; Cerello, P.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chauvin, A.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Cho, S.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Conesa Balbastre, G.; Conesa Del Valle, Z.; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Costanza, S.; Crkovská, J.; Crochet, P.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danisch, M. C.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; de, S.; de Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; de Falco, A.; de Gruttola, D.; De Marco, N.; de Pasquale, S.; de Souza, R. D.; Degenhardt, H. F.; Deisting, A.; Deloff, A.; Deplano, C.; Dhankher, P.; di Bari, D.; di Mauro, A.; di Nezza, P.; di Ruzza, B.; Diaz Corchero, M. A.; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Drozhzhova, T.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Duggal, A. K.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Endress, E.; Engel, H.; Epple, E.; Erazmus, B.; Erhardt, F.; Espagnon, B.; Esumi, S.; Eulisse, G.; Eum, J.; Evans, D.; Evdokimov, S.; Fabbietti, L.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernández Téllez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Francisco, A.; Frankenfeld, U.; Fronze, G. G.; Fuchs, U.; Furget, C.; Furs, A.; Fusco Girard, M.; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gajdosova, K.; Gallio, M.; Galvan, C. D.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Garg, K.; Garg, P.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Gay Ducati, M. B.; Germain, M.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Goméz Coral, D. M.; Gomez Ramirez, A.; Gonzalez, A. S.; Gonzalez, V.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Graczykowski, L. K.; Graham, K. L.; Greiner, L.; Grelli, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grion, N.; Gronefeld, J. M.; Grosa, F.; Grosse-Oetringhaus, J. F.; Grosso, R.; Gruber, L.; Grull, F. R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Gupta, R.; Guzman, I. B.; Haake, R.; Hadjidakis, C.; Hamagaki, H.; Hamar, G.; Hamon, J. C.; Harris, J. W.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Hellbär, E.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Herrmann, F.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hippolyte, B.; Hladky, J.; Hohlweger, B.; Horak, D.; Hosokawa, R.; Hristov, P.; Hughes, C.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Inaba, M.; Ippolitov, M.; Irfan, M.; Isakov, V.; Islam, M. S.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacak, B.; Jacazio, N.; Jacobs, P. M.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jaelani, S.; Jahnke, C.; Jakubowska, M. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jercic, M.; Jimenez Bustamante, R. T.; Jones, P. G.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kang, J. H.; Kaplin, V.; Kar, S.; Karasu Uysal, A.; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Ketzer, B.; Mohisin Khan, M.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Khatun, A.; Khuntia, A.; Kielbowicz, M. M.; Kileng, B.; Kim, D. W.; Kim, D. J.; Kim, D.; Kim, H.; Kim, J. S.; Kim, J.; Kim, M.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Koyithatta Meethaleveedu, G.; Králik, I.; Kravčáková, A.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kuhn, C.; Kuijer, P. G.; Kumar, A.; Kumar, J.; Kumar, L.; Kumar, S.; Kundu, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kushpil, S.; Kweon, M. J.; Kwon, Y.; La Pointe, S. 
L.; La Rocca, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lapidus, K.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lavicka, R.; Lazaridis, L.; Lea, R.; Leardini, L.; Lee, S.; Lehas, F.; Lehner, S.; Lehrbach, J.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; León Monzón, I.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Litichevskyi, V.; Ljunggren, H. M.; Llope, W. J.; Lodato, D. F.; Loenne, P. I.; Loginov, V.; Loizides, C.; Loncar, P.; Lopez, X.; López Torres, E.; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Lupi, M.; Lutz, T. H.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Mao, Y.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martin, N. A.; Martin Blanco, J.; Martinengo, P.; Martinez, J. A. L.; Martínez, M. I.; Martínez García, G.; Martinez Pedreira, M.; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Mastroserio, A.; Mathis, A. M.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzilli, M.; Mazzoni, M. A.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Mhlanga, S.; Miake, Y.; Mieskolainen, M. M.; Mihaylov, D. L.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Mischke, A.; Mishra, A. N.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Montes, E.; Moreira de Godoy, D. A.; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Münning, K.; Munzer, R. H.; Murakami, H.; Murray, S.; Musa, L.; Musinsky, J.; Myers, C. J.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Natal da Luz, H.; Nattrass, C.; Navarro, S. R.; Nayak, K.; Nayak, R.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Negrao de Oliveira, R. A.; Nellen, L.; Nesbo, S. V.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Ohlson, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Oravec, M.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Pachmayer, Y.; Pacik, V.; Pagano, D.; Pagano, P.; Paić, G.; Palni, P.; Pan, J.; Pandey, A. K.; Panebianco, S.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, J.; Park, W. J.; Parmar, S.; Passfeld, A.; Pathak, S. P.; Paticchio, V.; Patra, R. N.; Paul, B.; Pei, H.; Peitzmann, T.; Peng, X.; Pereira, L. G.; Pereira da Costa, H.; Peresunko, D.; Perez Lezama, E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Pezzi, R. P.; Piano, S.; Pikna, M.; Pillot, P.; Pimentel, L. O. D. L.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Poppenborg, H.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Pozdniakov, V.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Rana, D. B.; Raniwala, R.; Raniwala, S.; Räsänen, S. 
S.; Rascanu, B. T.; Rathee, D.; Ratza, V.; Ravasenga, I.; Read, K. F.; Redlich, K.; Rehman, A.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rodríguez Cahuantzi, M.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Rokita, P. S.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Rotondi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rubio Montero, A. J.; Rui, R.; Russo, R.; Rustamov, A.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Saarinen, S.; Sadhu, S.; Sadovsky, S.; Šafařík, K.; Saha, S. K.; Sahlmuller, B.; Sahoo, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sandoval, A.; Sarkar, D.; Sarkar, N.; Sarma, P.; Sas, M. H. P.; Scapparone, E.; Scarlassara, F.; Scharenberg, R. P.; Scheid, H. S.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schmidt, M. O.; Schmidt, M.; Schuchmann, S.; Schukraft, J.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Šefčík, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sett, P.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shangaraev, A.; Sharma, A.; Sharma, A.; Sharma, M.; Sharma, M.; Sharma, N.; Sheikh, A. I.; Shigaki, K.; Shou, Q.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singhal, V.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Song, J.; Song, M.; Soramel, F.; Sorensen, S.; Sozzi, F.; Spiriti, E.; Sputowska, I.; Srivastava, B. K.; Stachel, J.; Stan, I.; Stankus, P.; Stenlund, E.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Šumbera, M.; Sumowidagdo, S.; Suzuki, K.; Swain, S.; Szabo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Muñoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thakur, D.; Thakur, S.; Thomas, D.; Tieulent, R.; Tikhonov, A.; Timmins, A. R.; Toia, A.; Tripathy, S.; Trogolo, S.; Trombetta, G.; Trubnikov, V.; Trzaska, W. H.; Trzeciak, B. A.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Umaka, E. N.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; van der Maarel, J.; van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vázquez Doce, O.; Vechernin, V.; Veen, A. M.; Velure, A.; Vercellin, E.; Vergara Limón, S.; Vernet, R.; Vértesi, R.; Vickovic, L.; Vigolo, S.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Virgili, T.; Vislavicius, V.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Voscek, D.; Vranic, D.; Vrláková, J.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Watanabe, D.; Watanabe, Y.; Weber, M.; Weber, S. G.; Weiser, D. F.; Wessels, J. P.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Willems, G. A.; Williams, M. C. S.; Windelband, B.; Winn, M.; Witt, W. 
E.; Yalcin, S.; Yang, P.; Yano, S.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yoon, J. H.; Yurchenko, V.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zardoshti, N.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhang, C.; Zhang, Z.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zhu, X.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zimmermann, S.; Zinovjev, G.; Zmeskal, J.; Alice Collaboration<

    2018-01-01

    We report measurements of the inclusive J/ψ yield and average transverse momentum as a function of charged-particle pseudorapidity density dNch / dη in p-Pb collisions at √{sNN } = 5.02TeV with ALICE at the LHC. The observables are normalised to their corresponding averages in non-single diffractive events. An increase of the normalised J/ψ yield with normalised dNch / dη, measured at mid-rapidity, is observed at mid-rapidity and backward rapidity. At forward rapidity, a saturation of the relative yield is observed for high charged-particle multiplicities. The normalised average transverse momentum at forward and backward rapidities increases with multiplicity at low multiplicities and saturates beyond moderate multiplicities. In addition, the forward-to-backward nuclear modification factor ratio is also reported, showing an increasing suppression of J/ψ production at forward rapidity with respect to backward rapidity for increasing charged-particle multiplicity.

  7. A note on the eigensystem of the covariance matrix of dichotomous Guttman items

    Directory of Open Access Journals (Sweden)

    Clintin P Davis-Stober

    2015-12-01

    Full Text Available We consider the sample covariance matrix for dichotomous Guttman items under a set of uniformity conditions, and obtain closed-form expressions for the eigenvalues and eigenvectors of the matrix. In particular, we describe the eigenvalues and eigenvectors of the matrix in terms of trigonometric functions of the number of items. Our results parallel those of Zwick (1987) for the correlation matrix under the same uniformity conditions. We provide an explanation for certain properties of principal components under Guttman scalability which were first reported by Guttman (1950).
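
    As a minimal numerical sketch of the setup described above (assuming a perfect cumulative Guttman response pattern with a uniform score distribution, which is only an illustration and not the paper's derivation), one can build dichotomous item data and inspect the covariance eigensystem directly:

```python
import numpy as np

# Minimal sketch: dichotomous Guttman items under a uniformity assumption.
# Respondents are scored 0..n; item j is endorsed iff the score exceeds j,
# giving a perfect cumulative (Guttman) response pattern.
n_items = 8
scores = np.repeat(np.arange(n_items + 1), 100)           # uniform score distribution
X = (scores[:, None] > np.arange(n_items)[None, :]).astype(float)

C = np.cov(X, rowvar=False)                                # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]                          # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print("eigenvalues:", np.round(eigvals, 4))
print("first eigenvector:", np.round(eigvecs[:, 0], 3))
```

    The record describes the resulting eigenvalues and eigenvector components in closed form as trigonometric functions of the number of items; the sketch only reproduces the numerical eigensystem that those expressions characterise.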

  8. Spectral Analysis Methods of Social Networks

    Directory of Open Access Journals (Sweden)

    P. G. Klyucharev

    2017-01-01

    Full Text Available Online social networks (such as Facebook, Twitter, VKontakte, etc.), being an important channel for disseminating information, are often used to influence social consciousness for various purposes - from advertising products or services to full-scale information warfare - which makes them a very relevant object of research. The paper reviews methods for analysing social networks (primarily online ones) based on the spectral theory of graphs. Such methods use the spectrum of the social graph, i.e. the set of eigenvalues of its adjacency matrix, as well as the eigenvectors of the adjacency matrix. Measures of centrality are described (in particular, eigenvector centrality and PageRank), which reflect the degree of influence of a given user of the social network. The popular PageRank measure uses, as the centrality of a graph vertex, the final probabilities of a Markov chain whose matrix of transition probabilities is computed from the adjacency matrix of the social graph; the vector of final probabilities is an eigenvector of the matrix of transition probabilities. A method of dividing the graph vertices into two groups is presented, based on maximizing the network modularity by computing the eigenvector of the modularity matrix. A method for detecting bots is also considered, based on a non-randomness measure of a graph computed from the spectral coordinates of vertices - the sets of eigenvector components of the adjacency matrix of the social graph. In general, there are a number of algorithms for analysing social networks based on the spectral theory of graphs. These algorithms show very good results, but their disadvantage is the relatively high (albeit polynomial) computational complexity for large graphs. At the same time, the practical potential of spectral graph theory methods is clearly still underestimated, and they may serve as a basis for developing new methods. The work

  9. Download this PDF file

    African Journals Online (AJOL)

    Owner

    mRNA levels were expressed in relative copy number normalised against GAPDH mRNA. This normalisation against the housekeeping gene is possible if both PCRs (for the HO-1 gene and the housekeeping gene) have the same efficiency. All data are expressed as the mean value and its standard deviation. Kolmogorov-Smirnov to ...

  10. Maxwell meets Reeh–Schlieder: The quantum mechanics of neutral bosons

    Energy Technology Data Exchange (ETDEWEB)

    Hawton, Margaret, E-mail: margaret.hawton@lakeheadu.ca [Department of Physics, Lakehead University, Thunder Bay, ON, P7B 5E1 (Canada); Debierre, Vincent, E-mail: debierrev@mpi-hd.mpg.de [Max Planck Institute for Nuclear Physics, Saupfercheckweg 1, 69117, Heidelberg (Germany)

    2017-06-21

    We find that biorthogonal quantum mechanics with a scalar product that counts both absorbed and emitted particles leads to covariant position operators with localized eigenvectors. In this manifestly covariant formulation the probability for a transition from a one-photon state to a position eigenvector is the first order Glauber correlation function, bridging the gap between photon counting and the sensitivity of light detectors to electromagnetic energy density. The position eigenvalues are identified as the spatial parameters in the canonical quantum field operators and the position basis describes an array of localized devices that instantaneously absorb and re-emit bosons. - Highlights: • In biorthogonal quantum mechanics position operators are manifestly covariant and their eigenvectors are localized. • By including negative frequencies to give real fields our formalism escapes the no-go theorems. • Positive definite probability density exists locally but particles should be counted globally. • Relationships amongst photon probability, energy and current densities are local. • Use of the Newton Wigner basis should be limited to the calculation of expectation values.

  11. Efficient block preconditioned eigensolvers for linear response time-dependent density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Vecharynski, Eugene [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Brabec, Jiri [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Shao, Meiyue [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Govind, Niranjan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Environmental Molecular Sciences Lab.; Yang, Chao [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division

    2017-12-01

    We present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from time-dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into a product eigenvalue problem that is self-adjoint with respect to a K-inner product. This product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. However, the other component of the eigenvector can be easily recovered in a postprocessing procedure. Therefore, the algorithms we present here are more efficient than existing algorithms that try to approximate both components of the eigenvectors simultaneously. The efficiency of the new algorithms is demonstrated by numerical examples.

  12. Maxwell meets Reeh–Schlieder: The quantum mechanics of neutral bosons

    International Nuclear Information System (INIS)

    Hawton, Margaret; Debierre, Vincent

    2017-01-01

    We find that biorthogonal quantum mechanics with a scalar product that counts both absorbed and emitted particles leads to covariant position operators with localized eigenvectors. In this manifestly covariant formulation the probability for a transition from a one-photon state to a position eigenvector is the first order Glauber correlation function, bridging the gap between photon counting and the sensitivity of light detectors to electromagnetic energy density. The position eigenvalues are identified as the spatial parameters in the canonical quantum field operators and the position basis describes an array of localized devices that instantaneously absorb and re-emit bosons. - Highlights: • In biorthogonal quantum mechanics position operators are manifestly covariant and their eigenvectors are localized. • By including negative frequencies to give real fields our formalism escapes the no-go theorems. • Positive definite probability density exists locally but particles should be counted globally. • Relationships amongst photon probability, energy and current densities are local. • Use of the Newton Wigner basis should be limited to the calculation of expectation values.

  13. Volatility Determination in an Ambit Process Setting

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole; Graversen, Svend-Erik

    The probability limit behaviour of normalised quadratic variation is studied for a simple tempo-spatial ambit process, with particular regard to the question of volatility memorylessness.

  14. Treatment of dry age-related macular degeneration with dobesilate

    OpenAIRE

    Cuevas, P; Outeiriño, L A; Angulo, J; Giménez-Gallego, G

    2012-01-01

    The authors present anatomical and functional evidence of improvement in dry age-related macular degeneration after intravitreal treatment with dobesilate. The main outcome measures were normalisation of retinal structure and function, assessed by optical coherence tomography, fundus-monitored microperimetry, electrophysiology and visual acuity. The effect might be related to the normalisation of the outer retinal architecture.

  15. Network-Based Detection and Classification of Seismovolcanic Tremors: Example From the Klyuchevskoy Volcanic Group in Kamchatka

    Science.gov (United States)

    Soubestre, Jean; Shapiro, Nikolai M.; Seydoux, Léonard; de Rosny, Julien; Droznin, Dmitry V.; Droznina, Svetlana Ya.; Senyukov, Sergey L.; Gordeev, Evgeniy I.

    2018-01-01

    We develop a network-based method for detecting and classifying seismovolcanic tremors. The proposed approach exploits the coherence of tremor signals across the network that is estimated from the array covariance matrix. The method is applied to four and a half years of continuous seismic data recorded by 19 permanent seismic stations in the vicinity of the Klyuchevskoy volcanic group in Kamchatka (Russia), where five volcanoes were erupting during the considered time period. We compute and analyze daily covariance matrices together with their eigenvalues and eigenvectors. As a first step, most coherent signals corresponding to dominating tremor sources are detected based on the width of the covariance matrix eigenvalues distribution. Thus, volcanic tremors of the two volcanoes known as most active during the considered period, Klyuchevskoy and Tolbachik, are efficiently detected. As a next step, we consider the daily array covariance matrix's first eigenvector. Our main hypothesis is that these eigenvectors represent the principal components of the daily seismic wavefield and, for days with tremor activity, characterize dominant tremor sources. Those daily first eigenvectors, which can be used as network-based fingerprints of tremor sources, are then grouped into clusters using correlation coefficient as a measure of the vector similarity. As a result, we identify seven clusters associated with different periods of activity of four volcanoes: Tolbachik, Klyuchevskoy, Shiveluch, and Kizimen. The developed method does not require a priori knowledge and is fully automatic; and the database of the network-based tremor fingerprints can be continuously enriched with newly available data.
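
    The detection and classification steps summarised above translate into a covariance-matrix computation. The sketch below is only illustrative (hypothetical station data, window lengths and thresholds; the paper's actual processing chain works with spectrally whitened cross-spectral matrices and is more involved): it shows a width-of-the-eigenvalue-distribution detector for a daily array covariance matrix, and a greedy clustering of daily first eigenvectors by correlation coefficient.

```python
import numpy as np

def daily_covariance(waveforms):
    """Array covariance matrix from one day of data.
    waveforms: (n_stations, n_samples) array (hypothetical input)."""
    X = waveforms - waveforms.mean(axis=1, keepdims=True)
    return (X @ X.T) / X.shape[1]

def spectral_width(cov):
    """Normalised width of the eigenvalue distribution:
    a small width means one coherent (tremor-like) source dominates."""
    lam = np.linalg.eigvalsh(cov)[::-1]
    lam = lam / lam.sum()
    return (np.arange(len(lam)) * lam).sum()

def first_eigenvector(cov):
    """Eigenvector associated with the largest eigenvalue."""
    _, vec = np.linalg.eigh(cov)
    return vec[:, -1]

def group_by_correlation(eigvecs, threshold=0.8):
    """Greedy clustering of daily first eigenvectors by |correlation|."""
    clusters = []
    for v in eigvecs:
        for c in clusters:
            if abs(np.corrcoef(v, c[0])[0, 1]) > threshold:
                c.append(v)
                break
        else:
            clusters.append([v])
    return clusters

# Hypothetical example: 19 stations, 10 "days" of random data.
rng = np.random.default_rng(0)
days = [rng.standard_normal((19, 5000)) for _ in range(10)]
covs = [daily_covariance(d) for d in days]
widths = [spectral_width(c) for c in covs]
vectors = [first_eigenvector(c) for c in covs]
clusters = group_by_correlation(vectors)
print(len(clusters), "clusters; widths:", np.round(widths, 2))
```

    On random data every day ends up in its own cluster; on real tremor data, days dominated by the same source share a first eigenvector and are grouped together, which is the network-based "fingerprint" idea described in the record.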

  16. Diffusion Forecasting Model with Basis Functions from QR-Decomposition

    Science.gov (United States)

    Harlim, John; Yang, Haizhao

    2017-12-01

    Diffusion forecasting is a nonparametric approach that provably solves the Fokker-Planck PDE corresponding to an Itô diffusion without knowing the underlying equation. The key idea of this method is to approximate the solution of the Fokker-Planck equation with a discrete representation of the shift (Koopman) operator on a set of basis functions generated via the diffusion maps algorithm. While the choice of these basis functions is provably optimal under appropriate conditions, computing them is quite expensive since it requires the eigendecomposition of an N × N diffusion matrix, where N denotes the data size and can be very large. For large-scale forecasting problems, only a few leading eigenvectors are computationally achievable. To overcome this computational bottleneck, a new set of basis functions is proposed, constructed by orthonormalizing selected columns of the diffusion matrix together with its leading eigenvectors. This computation can be carried out efficiently via the unpivoted Householder QR factorization. The efficiency and effectiveness of the proposed algorithm are shown in both deterministically chaotic and stochastic dynamical systems; in the former case the superiority of the proposed basis functions over eigenvectors alone is significant, while in the latter case the forecasting accuracy is improved relative to using only a small number of eigenvectors. Supporting arguments are provided on three- and six-dimensional chaotic ODEs, a three-dimensional SDE that mimics turbulent systems, and also on the two spatial modes associated with the boreal winter Madden-Julian Oscillation obtained from applying Nonlinear Laplacian Spectral Analysis to the measured Outgoing Longwave Radiation.
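
    A minimal numpy sketch of the basis-construction step described above, under the assumption that a diffusion matrix is already available (here a toy row-normalised Gaussian kernel; the paper's column-selection rule is not specified in the abstract, so a random selection is used purely for illustration): selected columns of the diffusion matrix are concatenated with its leading eigenvectors and orthonormalised with numpy's (unpivoted) Householder QR.

```python
import numpy as np

def qr_basis(D, n_eig=5, n_cols=20, seed=0):
    """Orthonormal basis from leading eigenvectors of a diffusion matrix D
    plus a selection of its columns, via unpivoted Householder QR (numpy's
    default). A sketch only; the actual column-selection rule may differ."""
    N = D.shape[0]
    lam, vec = np.linalg.eigh((D + D.T) / 2)              # leading eigenvectors (symmetrised)
    leading = vec[:, np.argsort(lam)[::-1][:n_eig]]
    rng = np.random.default_rng(seed)
    cols = D[:, rng.choice(N, size=n_cols, replace=False)]  # selected columns (assumption)
    Q, _ = np.linalg.qr(np.hstack([leading, cols]))          # orthonormalisation
    return Q                                                 # N x (n_eig + n_cols) basis

# Hypothetical diffusion matrix built from a Gaussian kernel on random data.
rng = np.random.default_rng(1)
pts = rng.standard_normal((200, 3))
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 2.0)
D = K / K.sum(axis=1, keepdims=True)                       # row-normalised kernel
basis = qr_basis(D)
print(basis.shape, np.allclose(basis.T @ basis, np.eye(basis.shape[1])))
```

    The point of the construction is that the QR step costs far less than a full eigendecomposition of the N × N matrix while still enlarging the span beyond the few computable leading eigenvectors.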

  17. Gait characteristics under different walking conditions: Association with the presence of cognitive impairment in community-dwelling older people.

    Directory of Open Access Journals (Sweden)

    Anne-Marie De Cock

    Full Text Available Gait characteristics measured at usual pace may allow profiling in patients with cognitive problems. The influence of age, gender, leg length, modified speed or dual tasking is unclear. Cross-sectional analysis was performed on a data registry containing demographic, physical and spatial-temporal gait parameters recorded in five walking conditions with a GAITRite® electronic carpet in community-dwelling older persons with memory complaints. Four cognitive stages were studied: cognitively healthy individuals, patients with mild cognitive impairment, patients with mild dementia and patients with advanced dementia. The association between spatial-temporal gait characteristics and cognitive stages was most prominent: in the entire study population using gait speed, steps per meter (a translation of mean step length), swing time variability, normalised gait speed (corrected for leg length) and normalised steps per meter at all five walking conditions; in the 50-to-70-year-old participants using step width at fast pace and steps per meter at usual pace; in the 70-to-80-year-old persons using gait speed and normalised gait speed at usual pace, fast pace, animal walk and counting walk, or steps per meter and normalised steps per meter at all five walking conditions; and in the over-80-year-old participants using gait speed, normalised gait speed, steps per meter and normalised steps per meter at fast pace and animal dual-task walking. Multivariable logistic regression analysis adjusted for gender predicted, in two compiled models, the presence of dementia or cognitive impairment with acceptable accuracy in persons with memory complaints. Gait parameters in multiple walking conditions adjusted for age, gender and leg length showed a significant association with cognitive impairment. This study suggested that multifactorial gait analysis could be more informative than gait analysis with only one test or one variable. Using this type of gait analysis in clinical practice

  18. Compact versus noncompact quantum dynamics of time-dependent su(1,1)-valued Hamiltonians

    International Nuclear Information System (INIS)

    Penna, V.

    1996-01-01

    We consider the Schroedinger problem for time-dependent (TD) Hamiltonians represented by a linear combination of the compact generator and the hyperbolic generator of su(1,1). Several types of transitions, characterized by different time initial conditions on the generator coefficients, are analyzed by resorting to the harmonic oscillator model with a frequency vanishing for t→+∞. We provide examples that point out how the TD states of the transitions can be constructed either by the compact eigenvector basis or by the noncompact eigenvector basis depending on the initial conditions characterizing the frequency time behavior. Copyright © 1996 Academic Press, Inc.

  19. Multibaseline Observations of the Occultation of Crab Nebula by the ...

    Indian Academy of Sciences (India)

    tribpo

    Observations of the radio source Crab Nebula were made at the time of transit during June 1986 and 1987. The fringe amplitude V(S) for a baseline S was calibrated using the corresponding baseline fringe amplitude of radio source 3C123 or 3C134 and normalised to the preoccultation value V(O). Normalised fringe ...

  20. Effective collateral circulation may indicate improved perfusion territory restoration after carotid endarterectomy.

    Science.gov (United States)

    Lin, Tianye; Lai, Zhichao; Lv, Yuelei; Qu, Jianxun; Zuo, Zhentao; You, Hui; Wu, Bing; Hou, Bo; Liu, Changwei; Feng, Feng

    2018-02-01

    To investigate the relationship between the level of collateral circulation and perfusion territory normalisation after carotid endarterectomy (CEA). This study enrolled 22 patients with severe carotid stenosis who underwent CEA and 54 volunteers without significant carotid stenosis. All patients were scanned with ASL and t-ASL within 1 month before and 1 week after CEA. Collateral circulation was assessed on preoperative ASL images based on the presence of ATA. The postoperative flow territories were considered back to normal if they conformed to the perfusion territory map in a healthy population. Neuropsychological tests were performed on patients before and within 7 days after surgery. ATA-based collateral score assessed on preoperative ASL was significantly higher in the flow territory normalisation group (n=11, 50%) after CEA (P mean differences+2SD among control (MMSE=1.35, MOCA=1.02)]. This study demonstrated that effective collateral flow in carotid stenosis patients was associated with normalisation of t-ASL perfusion territory after CEA. The perfusion territory normalisation group tended to have more cognitive improvement after CEA. • Evaluation of collaterals before CEA is helpful for avoiding ischaemia during clamping. • There was good agreement on ATA-based ASL collateral grading. • Perfusion territories in carotid stenosis patients are altered. • Patients have better collateral circulation with perfusion territory back to normal. • MMSE and MOCA test scores improved more in the territory normalisation group.

  1. Relationships between the Definition of the Hyperplane Width to the Fidelity of Principal Component Loading Patterns.

    Science.gov (United States)

    Richman, Michael B.; Gong, Xiaofeng

    1999-06-01

    When applying eigenanalysis, one decision analysts make is the determination of what magnitude an eigenvector coefficient (e.g., principal component (PC) loading) must achieve to be considered physically important. Such coefficients can be displayed on maps, in a time series, or in tables to gain a fuller understanding of a large array of multivariate data. Previously, the decision on what value of loading designates a useful signal (hereafter called the loading 'cutoff') for each eigenvector has been purely subjective. The importance of selecting such a cutoff is apparent since those loading elements in the range of zero to the cutoff are ignored in the interpretation and naming of PCs, as only the absolute values of loadings greater than the cutoff are physically analyzed. This research sets out to objectify the problem of best identifying the cutoff by applying matching between known correlation/covariance structures and their corresponding eigenpatterns as this cutoff point (known as the hyperplane width) is varied. A Monte Carlo framework is used to resample at five sample sizes. Fourteen different hyperplane cutoff widths are tested, bootstrap resampled 50 times to obtain stable results. The key findings are that the location of an optimal hyperplane cutoff width (one which maximizes the information content match between the eigenvector and the parent dispersion matrix from which it was derived) is a well-behaved unimodal function. On an individual eigenvector, this enables the unique determination of a hyperplane cutoff value to be used to separate those loadings that best reflect the relationships from those that do not. The effects of sample size on the matching accuracy are dramatic, as the values for all solutions (i.e., unrotated, rotated) rose steadily from 25 through 250 observations and then only weakly thereafter. The specific matching coefficients are useful to assess the penalties incurred when one analyzes eigenvector coefficients of a

  2. Targeting functional motifs of a protein family

    Science.gov (United States)

    Bhadola, Pradeep; Deo, Nivedita

    2016-10-01

    The structural organization of a protein family is investigated by devising a method based on random matrix theory (RMT), which uses the physiochemical properties of the amino acids together with a multiple sequence alignment. A graphical method to represent protein sequences using physiochemical properties is devised that gives a fast, easy, and informative way of comparing the evolutionary distances between protein sequences. A correlation matrix associated with each property is calculated, where noise reduction and information filtering are done using RMT involving an ensemble of Wishart matrices. The analysis of the eigenvalue statistics of the correlation matrix for the β-lactamase family shows the universal features observed in the Gaussian orthogonal ensemble (GOE). The property-based approach captures the short- as well as the long-range correlations (approximately following GOE) between the eigenvalues, whereas the previous approach (treating amino acids as characters) gives the usual short-range correlations, while the long-range correlations are the same as those of an uncorrelated series. The distribution of the eigenvector components for the eigenvalues outside the bulk (RMT bound) deviates significantly from RMT observations and contains important information about the system. The information content of each eigenvector of the correlation matrix is quantified by introducing an entropic estimate, which shows that for the β-lactamase family the smallest eigenvectors (low eigenmodes) are highly localized as well as informative. These small eigenvectors, when processed, give clusters involving positions that have well-defined biological and structural importance, matching experiments. The approach is crucial for the recognition of structural motifs, as shown for β-lactamase (and other families), and selectively identifies important positions as targets for deactivating (or activating) the enzymatic actions.
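
    A hedged sketch of the eigenvector-entropy idea mentioned above: a Shannon-type entropy of the squared eigenvector components is one common "entropic estimate" of localization (the paper's exact definition may differ); it is low for eigenvectors localized on a few positions and close to ln N for delocalized ones. Random numbers stand in for property-encoded alignment data.

```python
import numpy as np

def eigenvector_entropy(v, eps=1e-12):
    """Shannon entropy of the squared components of a normalised eigenvector.
    Low entropy means the vector is localized on few positions."""
    p = np.abs(v) ** 2
    p = p / p.sum()
    return -np.sum(p * np.log(p + eps))

# Hypothetical correlation matrix from a multiple sequence alignment encoded
# with a physiochemical property (random numbers stand in for real data).
rng = np.random.default_rng(2)
prop = rng.standard_normal((500, 60))            # 500 sequences x 60 positions
C = np.corrcoef(prop, rowvar=False)              # 60 x 60 correlation matrix
lam, vec = np.linalg.eigh(C)                     # eigenvalues in ascending order

entropies = np.array([eigenvector_entropy(vec[:, k]) for k in range(vec.shape[1])])
print("max possible entropy:", np.log(C.shape[0]))
print("entropy of smallest-eigenvalue eigenvector:", entropies[0])
print("entropy of largest-eigenvalue eigenvector:", entropies[-1])
```

    For uncorrelated data all entropies sit near the maximum; the record reports that for real alignments the smallest eigenvectors have markedly lower entropy, i.e. they are localized on a few biologically meaningful positions.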

  3. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    Science.gov (United States)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
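
    As a rough illustration of the compression idea (not the authors' pipeline): the signal-to-noise eigenvector basis can be obtained from the generalized eigenproblem S v = λ N v, and the data vector is then projected onto the modes with the largest eigenvalues. The sketch below assumes both covariance matrices are available and that the noise covariance is positive definite.

```python
import numpy as np
from scipy.linalg import eigh

def sn_compression(S, N, d, n_modes):
    """Compress a data vector d using the signal-to-noise eigenvector basis.
    S, N: signal and noise covariance matrices (N assumed positive definite)."""
    lam, V = eigh(S, N)                     # generalized eigenproblem S v = lam N v
    order = np.argsort(lam)[::-1][:n_modes] # keep highest signal-to-noise modes
    B = V[:, order]                         # compression matrix (n_pix x n_modes)
    return B.T @ d, B

# Hypothetical low-dimensional example.
rng = np.random.default_rng(3)
n_pix = 200
A = rng.standard_normal((n_pix, 20))
S = A @ A.T                                 # rank-20 "signal" covariance
N = np.eye(n_pix)                           # white "noise" covariance
d = rng.multivariate_normal(np.zeros(n_pix), S + N)
d_compressed, B = sn_compression(S, N, d, n_modes=20)
print(d.shape, "->", d_compressed.shape)
```

    In this toy setup the signal lives in a 20-dimensional subspace, so 20 modes retain essentially all of the signal information while discarding most of the noise-dominated directions, which is the mechanism behind the factor-of-five speed-up quoted in the record.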

  4. Exploration of the forbidden regions of the Ramachandran plot (ϕ-ψ) with QTAIM.

    Science.gov (United States)

    Momen, Roya; Azizi, Alireza; Wang, Lingling; Ping, Yang; Xu, Tianlv; Kirk, Steven R; Li, Wenxuan; Manzhos, Sergei; Jenkins, Samantha

    2017-10-04

    A new QTAIM interpretation of the Ramachandran plot is formulated from the most and least facile eigenvectors of the second-derivative matrix of the electron density with a set of 29 magainin-2 peptide conformers. The presence of QTAIM eigenvectors associated with the most and least preferred directions of electronic charge density explained the role of hydrogen bonding, HH contacts and the glycine amino acid monomer in peptide folding. The highest degree of occupation of the QTAIM interpreted Ramachandran plot was found for the glycine amino acid monomer compared with the remaining backbone peptide bonds. The mobility of the QTAIM eigenvectors of the glycine amino acid monomer was higher than for the other amino acids and was comparable to that of the hydrogen bonding, explaining the flexibility of the magainin-2 backbone. We experimented with a variety of hybrid QTAIM-Ramachandran plots to highlight and explain why the glycine amino acid monomer largely occupies the 'forbidden' region on the Ramachandran plot. In addition, the new hybrid QTAIM-Ramachandran plots contained recognizable regions that can be associated with concepts familiar from the conventional Ramachandran plot whilst retaining the character of the QTAIM most and least preferred regions.

  5. Principal component analysis of solar flares in the soft X-ray flux

    International Nuclear Information System (INIS)

    Teuber, D.L.; Reichmann, E.J.; Wilson, R.M.; National Aeronautics and Space Administration, Huntsville, AL

    1979-01-01

    Principal component analysis is a technique for extracting the salient features from a mass of data. It applies, in particular, to the analysis of nonstationary ensembles. Computational schemes for this task require the evaluation of eigenvalues of matrices. We have used EISPACK Matrix Eigen System Routines on an IBM 360-75 to analyze full-disk proportional-counter data from the X-ray event analyzer (X-REA) which was part of the Skylab ATM/S-056 experiment. Empirical orthogonal functions have been derived for events in the soft X-ray spectrum between 2.5 and 20 A during different time frames between June 1973 and January 1974. Results indicate that approximately 90% of the cumulative power of each analyzed flare is contained in the largest eigenvector. The first two largest eigenvectors are sufficient for an empirical curve-fit through the raw data and a characterization of solar flares in the soft X-ray flux. Power spectra of the two largest eigenvectors reveal a previously reported periodicity of approximately 5 min. Similar signatures were also obtained from flares that are synchronized on maximum pulse-height when subjected to a principal component analysis. (orig.)
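
    A schematic numpy version of this kind of empirical orthogonal function analysis (illustrative synthetic light curves only; the original work used EISPACK routines on Skylab X-REA proportional-counter records): the fraction of power carried by the largest eigenvector and a reconstruction from the two largest eigenvectors are computed from the data covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical ensemble: 40 flare light curves sampled at 300 times.
t = np.linspace(0.0, 1.0, 300)
profiles = np.array([a * np.exp(-t / tau)
                     for a, tau in zip(rng.uniform(1, 5, 40), rng.uniform(0.1, 0.3, 40))])
data = profiles + 0.05 * rng.standard_normal(profiles.shape)

mean = data.mean(axis=0)
X = data - mean
C = X.T @ X / (X.shape[0] - 1)              # covariance over time samples
lam, vec = np.linalg.eigh(C)
lam, vec = lam[::-1], vec[:, ::-1]          # descending order

power_fraction = lam[0] / lam.sum()         # power in the largest eigenvector
coeff = X @ vec[:, :2]                      # project onto the two leading EOFs
reconstruction = mean + coeff @ vec[:, :2].T
print(f"fraction of power in largest eigenvector: {power_fraction:.2f}")
print("reconstruction rms error:", np.sqrt(((reconstruction - data) ** 2).mean()))
```

    The record's finding that roughly 90% of the cumulative power sits in the largest eigenvector, and that two eigenvectors suffice for an empirical curve fit, corresponds to a large `power_fraction` and a small two-mode reconstruction error in this kind of analysis.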

  6. The diversity of quasars unified by accretion and orientation.

    Science.gov (United States)

    Shen, Yue; Ho, Luis C

    2014-09-11

    Quasars are rapidly accreting supermassive black holes at the centres of massive galaxies. They display a broad range of properties across all wavelengths, reflecting the diversity in the physical conditions of the regions close to the central engine. These properties, however, are not random, but form well-defined trends. The dominant trend is known as 'Eigenvector 1', in which many properties correlate with the strength of optical iron and [O III] emission. The main physical driver of Eigenvector 1 has long been suspected to be the quasar luminosity normalized by the mass of the hole (the 'Eddington ratio'), which is an important parameter of the black hole accretion process. But a definitive proof has been missing. Here we report an analysis of archival data that reveals that the Eddington ratio indeed drives Eigenvector 1. We also find that orientation plays a significant role in determining the observed kinematics of the gas in the broad-line region, implying a flattened, disk-like geometry for the fast-moving clouds close to the black hole. Our results show that most of the diversity of quasar phenomenology can be unified using two simple quantities: Eddington ratio and orientation.

  7. Unstable quantum states and rigged Hilbert spaces

    International Nuclear Information System (INIS)

    Gorini, V.; Parravicini, G.

    1978-10-01

    Rigged Hilbert space techniques are applied to the quantum mechanical treatment of unstable states in nonrelativistic scattering theory. A method is discussed which is based on representations of decay amplitudes in terms of expansions over complete sets of generalized eigenvectors of the interacting Hamiltonian, corresponding to complex eigenvalues. These expansions contain both a discrete and a continuum contribution. The former corresponds to eigenvalues located at the second sheet poles of the S matrix, and yields the exponential terms in the survival amplitude. The latter arises from generalized eigenvectors associated to complex eigenvalues on background contours in the complex plane, and gives the corrections to the exponential law. 27 references

  8. A Spectral Analysis of Discrete-Time Quantum Walks Related to the Birth and Death Chains

    Science.gov (United States)

    Ho, Choon-Lin; Ide, Yusuke; Konno, Norio; Segawa, Etsuo; Takumi, Kentaro

    2018-04-01

    In this paper, we consider a spectral analysis of discrete-time quantum walks on the path. For isospectral coin cases, we show that the time averaged distribution and stationary distributions of the quantum walks are described by the pair of eigenvalues of the coins as well as the eigenvalues and eigenvectors of the corresponding random walks, which are usually referred to as birth and death chains. As an example of the results, we derive the time averaged distribution of so-called Szegedy's walk, which is related to the Ehrenfest model. It is represented by Krawtchouk polynomials, which are the eigenvectors of the model, and includes the arcsine law.

  9. A Perron–Frobenius theory for block matrices associated to a multiplex network

    International Nuclear Information System (INIS)

    Romance, Miguel; Solá, Luis; Flores, Julio; García, Esther; García del Amo, Alejandro; Criado, Regino

    2015-01-01

    The uniqueness of the Perron vector of a nonnegative block matrix associated to a multiplex network is discussed. The conclusions come from the relationships between the irreducibility of some nonnegative block matrix associated to a multiplex network and the irreducibility of the corresponding matrices of each layer, as well as the irreducibility of the adjacency matrix of the projection network. In addition, the computation of that Perron vector in terms of the Perron vectors of the blocks is also addressed. Finally, we present the precise relations that allow one to express the Perron eigenvector of the multiplex network in terms of the Perron eigenvectors of its layers.

  10. A Perron-Frobenius theory for block matrices associated to a multiplex network

    Science.gov (United States)

    Romance, Miguel; Solá, Luis; Flores, Julio; García, Esther; García del Amo, Alejandro; Criado, Regino

    2015-03-01

    The uniqueness of the Perron vector of a nonnegative block matrix associated to a multiplex network is discussed. The conclusions come from the relationships between the irreducibility of some nonnegative block matrix associated to a multiplex network and the irreducibility of the corresponding matrices of each layer, as well as the irreducibility of the adjacency matrix of the projection network. In addition, the computation of that Perron vector in terms of the Perron vectors of the blocks is also addressed. Finally, we present the precise relations that allow one to express the Perron eigenvector of the multiplex network in terms of the Perron eigenvectors of its layers.
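
    As a small illustration of the object discussed in these two records (a made-up two-layer multiplex with a simple coupling; the paper's precise block structure and the exact relations to the layer Perron vectors are not reproduced here), the Perron vector of a nonnegative supra-adjacency block matrix can be computed by power iteration and compared with the Perron vectors of the individual layers.

```python
import numpy as np

def perron_vector(M, n_iter=2000, tol=1e-12):
    """Power iteration for the Perron vector of a nonnegative matrix M
    (assumed irreducible, so the vector is unique up to scaling)."""
    v = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(n_iter):
        w = M @ v
        w = w / w.sum()
        if np.abs(w - v).max() < tol:
            break
        v = w
    return v

rng = np.random.default_rng(5)
n = 6                                            # nodes per layer
A1 = rng.random((n, n)); A1 = (A1 + A1.T) / 2    # layer 1 (nonnegative, symmetric)
A2 = rng.random((n, n)); A2 = (A2 + A2.T) / 2    # layer 2
c = 0.5                                          # inter-layer coupling strength
# Supra-adjacency block matrix of the two-layer multiplex.
S = np.block([[A1, c * np.eye(n)],
              [c * np.eye(n), A2]])

v_multiplex = perron_vector(S)
v_layer1, v_layer2 = perron_vector(A1), perron_vector(A2)
print("multiplex Perron vector:", np.round(v_multiplex, 3))
print("layer Perron vectors:   ", np.round(v_layer1, 3), np.round(v_layer2, 3))
```

    The records are precisely about when the full block computation is unnecessary, i.e. when and how the multiplex Perron vector can be expressed directly in terms of the layer Perron vectors.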

  11. Violating Bell inequalities maximally for two d-dimensional systems

    International Nuclear Information System (INIS)

    Chen Jingling; Wu Chunfeng; Oh, C. H.; Kwek, L. C.; Ge Molin

    2006-01-01

    We show the maximal violation of Bell inequalities for two d-dimensional systems by using the method of the Bell operator. The maximal violation corresponds to the maximal eigenvalue of the Bell operator matrix. The eigenvectors corresponding to these eigenvalues are described by asymmetric entangled states. We estimate the maximum value of the eigenvalue for large dimension. A family of elegant entangled states |Ψ⟩_app that violate the Bell inequality more strongly than the maximally entangled state but are somewhat close to these eigenvectors is presented. These approximate states can potentially be useful for quantum cryptography as well as many other important fields of quantum information

  12. Instanton dominance of topological charge fluctuations in QCD?

    International Nuclear Information System (INIS)

    Hip, I.; Lippert, Th.; Schilling, K.; Schroers, W.; Neff, H.

    2002-01-01

    We consider the local chirality of near-zero eigenvectors from Wilson-Dirac and clover improved Wilson-Dirac lattice operators as proposed recently by Horvath et al. We study finer lattices and repair for the loss of orthogonality due to the non-normality of the Wilson-Dirac matrix. As a result we do see a clear double peak structure on lattices with resolutions higher than 0.1 fm. We find that the lattice artifacts can be considerably reduced by exploiting the biorthogonal system of left and right eigenvectors. We conclude that the dominance of instantons in topological charge fluctuations is not ruled out by local chirality measurements

  13. Spatio-temporal Eigenvector Filtering: Application on Bioenergy Crop Impacts

    Science.gov (United States)

    Wang, M.; Kamarianakis, Y.; Georgescu, M.

    2017-12-01

    A suite of 10-year ensemble-based simulations was conducted to investigate the hydroclimatic impacts of large-scale deployment of perennial bioenergy crops across the continental United States. Given the large size of the simulated dataset (about 60 TB), traditional hierarchical spatio-temporal statistical modelling cannot be implemented for the evaluation of physics parameterizations and biofuel impacts. In this work, we propose a filtering algorithm that takes into account the spatio-temporal autocorrelation structure of the data while avoiding spatial confounding. This method is used to quantify the robustness of simulated hydroclimatic impacts associated with bioenergy crops to alternative physics parameterizations and observational datasets. Results are evaluated against those obtained from three alternative Bayesian spatio-temporal specifications.

  14. Eigenvector decomposition of full-spectrum x-ray computed tomography.

    Science.gov (United States)

    Gonzales, Brian J; Lalush, David S

    2012-03-07

    Energy-discriminated x-ray computed tomography (CT) data were projected onto a set of basis functions to suppress the noise in filtered back-projection (FBP) reconstructions. The x-ray CT data were acquired using a novel x-ray system which incorporated a single-pixel photon-counting x-ray detector to measure the x-ray spectrum for each projection ray. A matrix of the spectral response of different materials was decomposed using eigenvalue decomposition to form the basis functions. Projection of FBP onto basis functions created a de facto image segmentation of multiple contrast agents. Final reconstructions showed significant noise suppression while preserving important energy-axis data. The noise suppression was demonstrated by a marked improvement in the signal-to-noise ratio (SNR) along the energy axis for multiple regions of interest in the reconstructed images. Basis functions used on a more coarsely sampled energy axis still showed an improved SNR. We conclude that the noise-resolution trade off along the energy axis was significantly improved using the eigenvalue decomposition basis functions.
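
    A toy version of the projection step described above (hypothetical spectra and materials; not the authors' acquisition or reconstruction code, and the exact matrix that was eigendecomposed is assumed here to be the energy-by-energy Gram matrix of the material responses): the spectral response matrix is eigendecomposed, and each measured spectrum is projected onto the few leading basis functions, which suppresses noise along the energy axis.

```python
import numpy as np

rng = np.random.default_rng(6)
n_energy, n_materials = 64, 3
energies = np.linspace(20.0, 80.0, n_energy)        # keV, hypothetical binning

# Hypothetical spectral responses of three materials (smooth curves).
R = np.stack([np.exp(-energies / e0) + 0.1 * np.sin(energies / w)
              for e0, w in [(30, 5), (50, 8), (70, 12)]], axis=1)   # (n_energy, 3)

# Basis functions from an eigenvalue decomposition of the material
# spectral-response matrix (here via its energy-by-energy Gram matrix).
lam, vec = np.linalg.eigh(R @ R.T)
basis = vec[:, np.argsort(lam)[::-1][:n_materials]]  # leading eigenvectors

# Noisy measured spectrum for one projection ray, then projection onto the basis.
true_spectrum = R @ np.array([0.5, 1.0, 0.2])
measured = rng.poisson(100 * true_spectrum) / 100.0
denoised = basis @ (basis.T @ measured)

print("rms error raw:      ", np.sqrt(((measured - true_spectrum) ** 2).mean()))
print("rms error projected:", np.sqrt(((denoised - true_spectrum) ** 2).mean()))
```

    Because the true spectrum lies in the span of the leading eigenvectors while the counting noise spreads over all energy bins, the projection improves the energy-axis SNR, which is the effect reported in the record.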

  15. Fast and robust multi-atlas segmentation of brain magnetic resonance images

    DEFF Research Database (Denmark)

    Lötjönen, Jyrki Mp; Wolz, Robin; Koikkalainen, Juha R

    2010-01-01

    We introduce an optimised pipeline for multi-atlas brain MRI segmentation. Both accuracy and speed of segmentation are considered. We study different similarity measures used in non-rigid registration. We show that intensity differences for intensity normalised images can be used instead of standard normalised mutual information in registration without compromising the accuracy but leading to a threefold decrease in the computation time. We also study and validate different methods for atlas selection. Finally, we propose two new approaches for combining multi-atlas segmentation and intensity...

  16. Digital Particle Image Velocimetry: Partial Image Error (PIE)

    International Nuclear Information System (INIS)

    Anandarajah, K; Hargrave, G K; Halliwell, N A

    2006-01-01

    This paper quantifies the errors due to partial imaging of seeding particles which occur at the edges of interrogation regions in Digital Particle Image Velocimetry (DPIV). Hitherto, the effect of these partial images has been assumed in the scientific literature to be negligible. The results show that the error is significant even at a commonly used interrogation region size of 32 x 32 pixels. If correlation of interrogation region sizes of 16 x 16 pixels and smaller is attempted, the error which occurs can preclude meaningful results being obtained. In order to reduce the error, normalisation of the correlation peak values is necessary. The paper introduces Normalisation by Signal Strength (NSS) as the preferred means of normalisation for optimum accuracy. In addition, it is shown that NSS increases the dynamic range of DPIV

  17. The Topology of Symmetric Tensor Fields

    Science.gov (United States)

    Levin, Yingmei; Batra, Rajesh; Hesselink, Lambertus; Levy, Yuval

    1997-01-01

    Combinatorial topology, also known as "rubber sheet geometry", has extensive applications in geometry and analysis, many of which result from connections with the theory of differential equations. A link between topology and differential equations is vector fields. Recent developments in scientific visualization have shown that vector fields also play an important role in the analysis of second-order tensor fields. A second-order tensor field can be transformed into its eigensystem, namely, eigenvalues and their associated eigenvectors, without loss of information content. Eigenvectors behave in a similar fashion to ordinary vectors, with even simpler topological structures due to their sign indeterminacy. Incorporating information about eigenvectors and eigenvalues in a display technique known as hyperstreamlines reveals the structure of a tensor field. To simplify an often complex tensor field and to capture its important features, the tensor is decomposed into an isotropic tensor and a deviator. A tensor field and its deviator share the same set of eigenvectors, and therefore they have a similar topological structure. The deviator determines the properties of a tensor field, while the isotropic part provides a uniform bias. Degenerate points are basic constituents of tensor fields. In 2-D tensor fields, there are only two types of degenerate points, while in 3-D the degenerate points can be characterized in a Q'-R' plane. Compressible and incompressible flows share similar topological features due to the similarity of their deviators. In the case of the deformation tensor, the singularities of its deviator represent the area of the vortex core in the field. In turbulent flows, the similarities and differences between the topology of the deformation and Reynolds stress tensors reveal that the basic eddy-viscosity assumptions have their validity in turbulence modeling under certain conditions.

  18. Nonlinear signaling on biological networks: The role of stochasticity and spectral clustering

    Science.gov (United States)

    Hernandez-Hernandez, Gonzalo; Myers, Jesse; Alvarez-Lacalle, Enrique; Shiferaw, Yohannes

    2017-03-01

    Signal transduction within biological cells is governed by networks of interacting proteins. Communication between these proteins is mediated by signaling molecules which bind to receptors and induce stochastic transitions between different conformational states. Signaling is typically a cooperative process which requires the occurrence of multiple binding events so that reaction rates have a nonlinear dependence on the amount of signaling molecule. It is this nonlinearity that endows biological signaling networks with robust switchlike properties which are critical to their biological function. In this study we investigate how the properties of these signaling systems depend on the network architecture. Our main result is that these nonlinear networks exhibit bistability where the network activity can switch between states that correspond to a low and high activity level. We show that this bistable regime emerges at a critical coupling strength that is determined by the spectral structure of the network. In particular, the set of nodes that correspond to large components of the leading eigenvector of the adjacency matrix determines the onset of bistability. Above this transition the eigenvectors of the adjacency matrix determine a hierarchy of clusters, defined by its spectral properties, which are activated sequentially with increasing network activity. We argue further that the onset of bistability occurs either continuously or discontinuously depending upon whether the leading eigenvector is localized or delocalized. Finally, we show that at low network coupling stochastic transitions to the active branch are also driven by the set of nodes that contribute more strongly to the leading eigenvector. However, at high coupling, transitions are insensitive to network structure since the network can be activated by stochastic transitions of a few nodes. Thus this work identifies important features of biological signaling networks that may underlie their biological
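
    A small sketch of the spectral quantities invoked above (a random graph stands in for a real signaling network; node counts and thresholds are made up): the leading eigenvector of the adjacency matrix identifies the nodes expected to drive the onset of bistability, and an inverse participation ratio gives a simple measure of whether that eigenvector is localized or delocalized.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
# Hypothetical undirected network: sparse random graph plus a densely wired cluster.
A = (rng.random((n, n)) < 0.05).astype(float)
A[:8, :8] = (rng.random((8, 8)) < 0.6).astype(float)        # dense cluster
A = np.triu(A, 1); A = A + A.T                               # symmetrise, no self-loops

lam, vec = np.linalg.eigh(A)
leading = vec[:, -1]                                         # eigenvector of largest eigenvalue
leading = leading * np.sign(leading.sum())                   # fix the overall sign

ipr = np.sum(leading ** 4) / np.sum(leading ** 2) ** 2       # localization measure
driver_nodes = np.argsort(np.abs(leading))[::-1][:8]         # largest components
print("largest adjacency eigenvalue:", round(lam[-1], 3))
print("inverse participation ratio:", round(ipr, 3))
print("nodes with largest leading-eigenvector components:", driver_nodes)
```

    In this toy graph the densely wired cluster dominates the leading eigenvector (large IPR, cluster nodes at the top of the ranking), which mirrors the paper's statement that the set of nodes with large leading-eigenvector components sets the critical coupling for bistability.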

  19. Matrix product solution to multi-species ASEP with open boundaries

    Science.gov (United States)

    Finn, C.; Ragoucy, E.; Vanicat, M.

    2018-04-01

    We study a class of multi-species ASEP with open boundaries. The boundaries are chosen in such a way that all species of particles interact non-trivially with the boundaries, and are present in the stationary state. We give the exact expression of the stationary state in a matrix product form, and compute its normalisation. Densities and currents for the different species are then computed in terms of this normalisation.

  20. Coupling coefficients for tensor product representations of quantum SU(2)

    International Nuclear Information System (INIS)

    Groenevelt, Wolter

    2014-01-01

    We study tensor products of infinite dimensional irreducible *-representations (not corepresentations) of the SU(2) quantum group. We obtain (generalized) eigenvectors of certain self-adjoint elements using spectral analysis of Jacobi operators associated to well-known q-hypergeometric orthogonal polynomials. We also compute coupling coefficients between different eigenvectors corresponding to the same eigenvalue. Since the continuous spectrum has multiplicity two, the corresponding coupling coefficients can be considered as 2 × 2-matrix-valued orthogonal functions. We compute explicitly the matrix elements of these functions. The coupling coefficients can be considered as q-analogs of Bessel functions. As a result we obtain several q-integral identities involving q-hypergeometric orthogonal polynomials and q-Bessel-type functions

  1. Coupling coefficients for tensor product representations of quantum SU(2)

    Science.gov (United States)

    Groenevelt, Wolter

    2014-10-01

    We study tensor products of infinite dimensional irreducible *-representations (not corepresentations) of the SU(2) quantum group. We obtain (generalized) eigenvectors of certain self-adjoint elements using spectral analysis of Jacobi operators associated to well-known q-hypergeometric orthogonal polynomials. We also compute coupling coefficients between different eigenvectors corresponding to the same eigenvalue. Since the continuous spectrum has multiplicity two, the corresponding coupling coefficients can be considered as 2 × 2-matrix-valued orthogonal functions. We compute explicitly the matrix elements of these functions. The coupling coefficients can be considered as q-analogs of Bessel functions. As a result we obtain several q-integral identities involving q-hypergeometric orthogonal polynomials and q-Bessel-type functions.

  2. Finite-lattice form factors in free-fermion models

    International Nuclear Information System (INIS)

    Iorgov, N; Lisovyy, O

    2011-01-01

    We consider the general Z2-symmetric free-fermion model on the finite periodic lattice, which includes as special cases the Ising model on the square and triangular lattices and the Zn-symmetric BBS τ(2)-model with n = 2. Translating Kaufman's fermionic approach to diagonalization of Ising-like transfer matrices into the language of Grassmann integrals, we determine the transfer matrix eigenvectors and observe that they coincide with the eigenvectors of a square lattice Ising transfer matrix. This allows us to find exact finite-lattice form factors of spin operators for the statistical model and the associated finite-length quantum chains, of which the most general is equivalent to the XY chain in a transverse field.

  3. Calculations of transient fields in the Felix experiments at Argonne using null field integrated techniques

    International Nuclear Information System (INIS)

    Han, H.C.; Davey, K.R.; Turner, L.

    1985-08-01

    The transient eddy current problem is characteristically computationally intensive. The motivation for this research was to realize an efficient, accurate solution technique involving small matrices via an eigenvalue approach. Such a technique is indeed realized and tested using the null field integral technique. Using smart (i.e., efficient, global) basis functions to represent unknowns in terms of a minimum number of unknowns, homogeneous eigenvectors and eigenvalues are first determined. The general excitatory response is then represented in terms of these eigenvalues/eigenvectors. Excellent results are obtained for the Argonne Felix cylinder experiments using a 4 × 4 matrix. Extension to the 3-D problem (short cylinder) is set up in terms of an 8 × 8 matrix.

  4. Optical spectra and lattice dynamics of molecular crystals

    CERN Document Server

    Zhizhin, GN

    1995-01-01

    The current volume is a single topic volume on the optical spectra and lattice dynamics of molecular crystals. The book is divided into two parts. Part I covers both the theoretical and experimental investigations of organic crystals. Part II deals with the investigation of the structure, phase transitions and reorientational motion of molecules in organic crystals. In addition, appendices are given which provide the parameters for the calculation of the lattice dynamics of molecular crystals, procedures for the computer calculation of frequencies and eigenvectors, and the frequencies and eigenvectors of lattice modes for several organic crystals. Quite a large amount of Russian literature is cited, some of which has previously not been available to scientists in the West.

  5. An improved V-Lambda solution of the matrix Riccati equation

    Science.gov (United States)

    Bar-Itzhack, Itzhack Y.; Markley, F. Landis

    1988-01-01

    The authors present an improved algorithm for computing the V-Lambda solution of the matrix Riccati equation. The improvement is a reduction of the computational load that results from the orthogonality of the eigenvector matrix that has to be solved for. The orthogonality constraint reduces the number of independent parameters which define the matrix from n² to n(n - 1)/2. The authors show how to specify the parameters, how to solve for them and how to form from them the needed eigenvector matrix. In the search for suitable parameters, the analogy between the present problem and the problem of attitude determination is exploited, resulting in the choice of Rodrigues parameters.

  6. Method of locating related items in a geometric space for data mining

    Science.gov (United States)

    Hendrickson, Bruce A.

    1999-01-01

    A method for locating related items in a geometric space transforms relationships among items to geometric locations. The method locates items in the geometric space so that the distance between items corresponds to the degree of relatedness. The method facilitates communication of the structure of the relationships among the items. The method is especially beneficial for communicating databases with many items, and with non-regular relationship patterns. Examples of such databases include databases containing items such as scientific papers or patents, related by citations or keywords. A computer system adapted for practice of the present invention can include a processor, a storage subsystem, a display device, and computer software to direct the location and display of the entities. The method comprises assigning numeric values as a measure of similarity between each pairing of items. A matrix is constructed, based on the numeric values. The eigenvectors and eigenvalues of the matrix are determined. Each item is located in the geometric space at coordinates determined from the eigenvectors and eigenvalues. Proper construction of the matrix and proper determination of coordinates from eigenvectors can ensure that distance between items in the geometric space is representative of the numeric value measure of the items' similarity.
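
    The core computation described in this record — turning pairwise similarity values into coordinates via an eigendecomposition — is a standard spectral-embedding idea. The following minimal Python sketch is not the patented method itself; the similarity values and the embedding dimension are illustrative assumptions.

```python
import numpy as np

def spectral_embedding(similarity, dim=2):
    """Place items in a `dim`-dimensional space from a symmetric similarity matrix.

    Coordinates are the leading eigenvectors scaled by the square roots of their
    eigenvalues, so inter-item distance reflects the degree of relatedness.
    """
    S = np.asarray(similarity, dtype=float)
    S = 0.5 * (S + S.T)                        # enforce symmetry
    vals, vecs = np.linalg.eigh(S)             # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:dim]       # keep the `dim` largest
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0.0, None))

# toy example: 4 documents, similarity from shared keywords (made-up numbers)
S = np.array([[1.0, 0.8, 0.1, 0.0],
              [0.8, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.9],
              [0.0, 0.1, 0.9, 1.0]])
print(spectral_embedding(S))
```

    Items with large mutual similarity end up close together because the leading eigenvectors preserve most of the similarity structure of the matrix.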

  7. Imaging Arterial Fibres Using Diffusion Tensor Imaging—Feasibility Study and Preliminary Results

    Directory of Open Access Journals (Sweden)

    Kerskens Christian

    2010-01-01

    Full Text Available Abstract MR diffusion tensor imaging (DTI) was used to analyze the fibrous structure of aortic tissue. A fresh porcine aorta was imaged at 7T using a spin echo sequence with the following parameters: matrix 128 × 128 pixel; slice thickness 0.5 mm; interslice spacing 0.1 mm; number of slices 16; echo time 20.3 s; field of view 28 mm × 28 mm. Eigenvectors from the diffusion tensor images were calculated for the central image slice and the averaged tensors and the eigenvector corresponding to the largest eigenvalue showed two distinct angles corresponding to near 0∘ and 180∘ to the transverse plane of the aorta. Fibre tractography within the aortic volume imaged confirmed that fibre angles were oriented helically with lead angles of 15±2.5∘ and 175±2.5∘. The findings correspond to current histological and microscopy data on the fibrous structure of aortic tissue, and therefore the eigenvector maps and fibre tractography appear to reflect the alignment of the fibers in the aorta. In view of current efforts to develop noninvasive diagnostic tools for cardiovascular diseases, DTI may offer a technique to assess the structural properties of arterial tissue and hence any changes or degradation in arterial tissue.

  8. Volatility of an Indian stock market: A random matrix approach

    International Nuclear Information System (INIS)

    Kulkarni, V.; Deo, N.

    2006-07-01

    We examine volatility of an Indian stock market in terms of aspects like participation, synchronization of stocks and quantification of volatility using the random matrix approach. Volatility pattern of the market is found using the BSE index for the three-year period 2000-2002. Random matrix analysis is carried out using daily returns of 70 stocks for several time windows of 85 days in 2001 to (i) do a brief comparative analysis with statistics of eigenvalues and eigenvectors of the matrix C of correlations between price fluctuations, in time regimes of different volatilities. While a bulk of eigenvalues falls within RMT bounds in all the time periods, we see that the largest (deviating) eigenvalue correlates well with the volatility of the index, and the corresponding eigenvector clearly shows a shift in the distribution of its components from volatile to less volatile periods, verifying the qualitative association between participation and volatility; (ii) observe that the inverse participation ratio for the last eigenvector is sensitive to market fluctuations (the two quantities are observed to anti-correlate significantly); (iii) set up a variability index, V, whose temporal evolution is found to be significantly correlated with the volatility of the overall market index. (author)
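
    The random-matrix diagnostics mentioned above — comparing the eigenvalues of the return correlation matrix with RMT bounds and computing the inverse participation ratio (IPR) of eigenvectors — can be sketched as follows. The synthetic return data and the use of the Marchenko-Pastur limits as the "RMT bounds" are assumptions made for illustration, not the paper's exact procedure.

```python
import numpy as np

def rmt_diagnostics(returns):
    """Eigen-analysis of the equal-time correlation matrix of stock returns.

    returns : array of shape (T, N) -- T days, N stocks.
    Returns eigenvalues, the Marchenko-Pastur bounds expected for pure noise,
    and the inverse participation ratio (IPR) of each eigenvector.
    """
    T, N = returns.shape
    z = (returns - returns.mean(axis=0)) / returns.std(axis=0)
    C = z.T @ z / T                               # correlation matrix
    vals, vecs = np.linalg.eigh(C)
    q = T / N
    lam_minus = (1 - 1 / np.sqrt(q)) ** 2         # noise band lower edge
    lam_plus = (1 + 1 / np.sqrt(q)) ** 2          # noise band upper edge
    ipr = np.sum(vecs ** 4, axis=0)               # large IPR = localized eigenvector
    return vals, (lam_minus, lam_plus), ipr

# synthetic example: 85 days of returns for 70 stocks with a common market factor
rng = np.random.default_rng(0)
market = rng.normal(size=(85, 1))
returns = 0.3 * market + rng.normal(size=(85, 70))
vals, bounds, ipr = rmt_diagnostics(returns)
print("largest eigenvalue:", vals[-1], "RMT upper bound:", bounds[1])
```

    The deviating largest eigenvalue plays the role of the market mode discussed in the abstract, and the IPR of each eigenvector measures how many stocks participate in it.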

  9. Spectral decomposition of single-tone-driven quantum phase modulation

    International Nuclear Information System (INIS)

    Capmany, Jose; Fernandez-Pousa, Carlos R

    2011-01-01

    Electro-optic phase modulators driven by a single radio-frequency tone Ω can be described at the quantum level as scattering devices where input single-mode radiation undergoes energy changes in multiples of ℏΩ. In this paper, we study the spectral representation of the unitary, multimode scattering operator describing these devices. The eigenvalue equation, phase modulation being a process preserving the photon number, is solved at each subspace with definite number of photons. In the one-photon subspace F1, the problem is equivalent to the computation of the continuous spectrum of the Susskind-Glogower cosine operator of the harmonic oscillator. Using this analogy, the spectral decomposition in F1 is constructed and shown to be equivalent to the usual Fock-space representation. The result is then generalized to arbitrary N-photon subspaces, where eigenvectors are symmetrized combinations of N one-photon eigenvectors and the continuous spectrum spans the entire unit circle. Approximate normalizable one-photon eigenstates are constructed in terms of London phase states truncated to optical bands. Finally, we show that synchronous ultrashort pulse trains represent classical field configurations with the same structure as these approximate eigenstates, and that they can be considered as approximate eigenvectors of the classical formulation of phase modulation.

  10. Spectral decomposition of single-tone-driven quantum phase modulation

    Energy Technology Data Exchange (ETDEWEB)

    Capmany, Jose [ITEAM Research Institute, Univ. Politecnica de Valencia, 46022 Valencia (Spain); Fernandez-Pousa, Carlos R, E-mail: c.pousa@umh.es [Signal Theory and Communications, Department of Physics and Computer Science, Univ. Miguel Hernandez, 03202 Elche (Spain)

    2011-02-14

    Electro-optic phase modulators driven by a single radio-frequency tone Ω can be described at the quantum level as scattering devices where input single-mode radiation undergoes energy changes in multiples of ℏΩ. In this paper, we study the spectral representation of the unitary, multimode scattering operator describing these devices. The eigenvalue equation, phase modulation being a process preserving the photon number, is solved at each subspace with definite number of photons. In the one-photon subspace F1, the problem is equivalent to the computation of the continuous spectrum of the Susskind-Glogower cosine operator of the harmonic oscillator. Using this analogy, the spectral decomposition in F1 is constructed and shown to be equivalent to the usual Fock-space representation. The result is then generalized to arbitrary N-photon subspaces, where eigenvectors are symmetrized combinations of N one-photon eigenvectors and the continuous spectrum spans the entire unit circle. Approximate normalizable one-photon eigenstates are constructed in terms of London phase states truncated to optical bands. Finally, we show that synchronous ultrashort pulse trains represent classical field configurations with the same structure as these approximate eigenstates, and that they can be considered as approximate eigenvectors of the classical formulation of phase modulation.

  11. Asymptotic Poisson distribution for the number of system failures of a monotone system

    International Nuclear Information System (INIS)

    Aven, Terje; Haukis, Harald

    1997-01-01

    It is well known that for highly available monotone systems, the time to the first system failure is approximately exponentially distributed. Various normalising factors can be used as the parameter of the exponential distribution to ensure the asymptotic exponentiality. More generally, it can be shown that the number of system failures is asymptotically Poisson distributed. In this paper we study the performance of some of the normalising factors by using Monte Carlo simulation. The results show that the exponential/Poisson distribution gives in general very good approximations for highly available components. The asymptotic failure rate of the system gives best results when the process is in steady state, whereas other normalising factors seem preferable when the process is not in steady state. From a computational point of view the asymptotic system failure rate is most attractive.

  12. The effects of induction hardening on wear properties of AISI 4140 steel in dry sliding conditions

    International Nuclear Information System (INIS)

    Totik, Y.; Sadeler, R.; Altun, H.; Gavgali, M.

    2002-01-01

    Wear behaviour of induction hardened AISI 4140 steel was evaluated under dry sliding conditions. Specimens were induction hardened at 1000 Hz for 6, 10, 14, 18, 27 s, respectively, in the inductor which was a three-turn coil with a coupling distance of 2.8 mm. Normalised and induction hardened specimens were fully characterised before and after the wear testing using hardness, profilometer, scanning electron microscopy and X-ray diffraction. The wear tests using a pin-on-disc machine showed that the induction hardening treatments improved the wear behaviour of AISI 4140 steel specimens compared to normalised AISI 4140 steel as a result of residual stresses and hardened surfaces. The wear coefficients in normalised specimens are greater than those in the induction hardened samples. The lowest coefficient of friction was obtained in specimens induction-hardened at 875 deg. C for 27 s.

  13. The effects of induction hardening on wear properties of AISI 4140 steel in dry sliding conditions

    Energy Technology Data Exchange (ETDEWEB)

    Totik, Y.; Sadeler, R.; Altun, H.; Gavgali, M

    2002-02-15

    Wear behaviour of induction hardened AISI 4140 steel was evaluated under dry sliding conditions. Specimens were induction hardened at 1000 Hz for 6, 10, 14, 18, 27 s, respectively, in the inductor which was a three-turn coil with a coupling distance of 2.8 mm. Normalised and induction hardened specimens were fully characterised before and after the wear testing using hardness, profilometer, scanning electron microscopy and X-ray diffraction. The wear tests using a pin-on-disc machine showed that the induction hardening treatments improved the wear behaviour of AISI 4140 steel specimens compared to normalised AISI 4140 steel as a result of residual stresses and hardened surfaces. The wear coefficients in normalised specimens are greater than those in the induction hardened samples. The lowest coefficient of friction was obtained in specimens induction-hardened at 875 deg. C for 27 s.

  14. Normalised radionuclide measures of left ventricular diastolic function

    International Nuclear Information System (INIS)

    Lee, K.J.; Southee, A.E.; Bautovich, G.J.; Freedman, B.; McLaughlin, A.F.; Rossleigh, M.A.; Hutton, B.F.; Morris, J.G.; Royal Prince Alfred Hospital, Sydney

    1989-01-01

    Abnormal left ventricular diastolic function is being increasingly recognised in patients with clinical heart failure and normal systolic function. A simple routine radionuclide measure of diastolic function would therefore be useful. To this end, the relationship between peak diastolic filling rate (normalized for either end diastolic volume, stroke volume, or peak systolic emptying rate) and heart rate, age, and left ventricular ejection fraction was studied in 64 subjects with normal cardiovascular systems using routine gated heart pool studies. The peak filling rate when normalized to end diastolic volume correlated significantly with heart rate, age and left ventricular ejection fraction, whereas normalization to stroke volume correlated significantly to heart rate and age but not to left ventricular ejection fraction. Peak filling rate normalized for peak systolic emptying rate correlated with age only. Multiple regression equations were determined for each of the normalized peak filling rates in order to establish normal ranges for each parameter. When using peak filling rate normalized for end diastolic volume or stroke volume, appropriate allowance must be made for heart rate, age and ejection fraction. Peak filling rate normalized to peak ejection rate is a heart rate independent parameter which allows the performance of the patient's ventricle in diastole to be compared with its systolic function. It may be used in patients with normal systolic function to serially follow diastolic function, or, if age corrected, to screen for diastolic dysfunction. (orig.)

  15. (NDSI) and Normalised Difference Principal Component Snow Index

    African Journals Online (AJOL)

    Phila Sibandze

    According to Bonan (2002), snow plays a significant role in influencing heat regimes and local, regional ... sensitive indicator to climate change. In South Africa, snow is .... This image was captured on the earliest cloud free day after a snow fall.

  16. Elevated international normalised ratios correlate with severity of ...

    African Journals Online (AJOL)

    Methods. Study design. The study was approved by the local ethics review board (Biomedical ... 1 Department of General Surgery, School of Clinical Medicine, College of Health Sciences, Nelson R ..... identifying optimal overall cut-off values for ... epidemiology, clinical presentations, and therapeutic considerations.

  17. From Being Non-Judgemental to Deconstructing Normalising Judgement

    Science.gov (United States)

    Winslade, John M.

    2013-01-01

    Beginning with Carl Rogers' exhortation for counsellors to be non-judgemental of their clients, this article explores the rationale for withholding judgement in therapy, including diagnostic judgement. It traces Rogers' incipient sociopolitical analysis as a foundation for this ethic and argues that Michel Foucault provides a stronger…

  18. Conduction mechanism studies on electron transfer of disordered system

    Institute of Scientific and Technical Information of China (English)

    徐慧; 宋祎璞; 李新梅

    2002-01-01

    Using the negative eigenvalue theory and the infinite order perturbation theory, a new method was developed to solve the eigenvectors of disordered systems. The result shows that eigenvectors change from the extended state to the localized state with the increase of the site points and the disordered degree of the system. When an electric field is exerted, the electrons transfer from one localized state to another. The conductivity is induced by this electron transfer. The authors derive the formula for the electron conductivity and find that electrons hop between localized states whose energies are close to each other but whose localized positions differ greatly. At low temperature the disordered system exhibits a negative differential dependence of resistivity on temperature.

  19. Symmetries of the second-difference matrix and the finite Fourier transform

    International Nuclear Information System (INIS)

    Aguilar, A.; Wolf, K.B.

    1979-01-01

    The finite Fourier transformation is well known to diagonalize the second-difference matrix and has been thus applied extensively to describe finite crystal lattices and electric networks. In setting out to find all transformations having this property, we obtain a multiparameter class of them. While permutations and unitary scaling of the eigenvectors constitute the trivial freedom of choice common to all diagonalization processes, the second-difference matrix has a larger symmetry group among whose elements we find the dihedral manifest symmetry transformations of the lattice. The latter are nevertheless sufficient for the unique specification of eigenvectors in various symmetry-adapted bases for the constrained lattice. The free symmetry parameters are shown to lead to a complete set of conserved quantities for the physical lattice motion. (author)
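
    The property restated above — that the finite Fourier transform diagonalizes the second-difference matrix — can be verified numerically in a few lines. This is a minimal check for the periodic (cyclic) case; the lattice size N = 8 is an arbitrary choice.

```python
import numpy as np

N = 8
# periodic second-difference matrix: -2 on the diagonal, 1 on the cyclic off-diagonals
D = -2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
D[0, -1] = D[-1, 0] = 1

# columns of the finite Fourier matrix F[j, k] = exp(2*pi*i*j*k/N)/sqrt(N)
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

# check D F = F diag(lambda_k) with lambda_k = -4 sin^2(pi k / N)
lam = -4 * np.sin(np.pi * np.arange(N) / N) ** 2
print(np.allclose(D @ F, F * lam))   # True: the finite Fourier transform diagonalizes D
```

    The larger symmetry group discussed in the abstract corresponds to the freedom left after this diagonalization, since the eigenvalues are degenerate in pairs.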

  20. Calculation of degenerated Eigenmodes with modified power method

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Peng; Lee, Hyun Suk; Lee, Deok Jung [School of Mechanical and Nuclear Engineering, Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2017-02-15

    The modified power method has been studied by many researchers to calculate higher eigenmodes and accelerate the convergence of the fundamental mode. Its application to multidimensional problems may be unstable due to degenerate or near-degenerate eigenmodes. Complex eigenmode solutions are occasionally encountered in such cases, and the shapes of the corresponding eigenvectors may change during the simulation. These issues must be addressed for the successful implementation of the modified power method. Complex components are examined and an approximation method to eliminate the use of complex numbers is provided. A technique to fix the eigenvector shapes is also provided. The performance of the methods for dealing with the aforementioned problems is demonstrated with two-dimensional one-group and three-dimensional one-group homogeneous diffusion problems.
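
    For reference, the baseline that the modified power method extends is the plain power iteration for the fundamental (dominant) eigenmode, sketched below. The modifications for degenerate or complex modes discussed in the abstract are not reproduced here, and the test matrix is an arbitrary example.

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=10_000):
    """Plain power iteration for the dominant eigenpair of A."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.linalg.norm(y) / np.linalg.norm(x)
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x

A = np.array([[4.0, 1.0], [2.0, 3.0]])
lam, x = power_method(A)
print(lam, x)   # dominant eigenvalue ~5, eigenvector ~[1, 1]/sqrt(2)
```

    When two eigenvalues are (nearly) equal, the iterate wanders inside the degenerate subspace instead of converging to a single vector, which is exactly the instability the abstract addresses.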

  1. On the structure of acceleration in turbulence

    DEFF Research Database (Denmark)

    Liberzon, A.; Lüthi, B.; Holzner, M.

    2012-01-01

    Acceleration and spatial velocity gradients are obtained simultaneously in an isotropic turbulent flow via three-dimensional particle tracking velocimetry. We observe two distinct populations of intense acceleration events: one in flow regions of strong strain and another in regions of strong vorticity. Geometrical alignments with respect to the vorticity vector and to the strain eigenvectors, and the curvature of Lagrangian trajectories and of streamlines, are studied in detail for the total acceleration and for its convective part. We discriminate the alignment features of total and convective acceleration statistics which are genuine features of turbulent nature from those of kinematic nature. We find pronounced alignment of acceleration with vorticity. The acceleration, and especially its convective part, are predominantly aligned at 45° with the most stretching and compressing eigenvectors of the rate-of-strain tensor.

  2. Tailoring three-point functions and integrability IV. Θ-morphism

    Energy Technology Data Exchange (ETDEWEB)

    Gromov, Nikolay [Department of Mathematics WC2R 2LS, King’s College London,London (United Kingdom); St. Petersburg INP,St. Petersburg (Russian Federation); Vieira, Pedro [Perimeter Institute for Theoretical Physics,Waterloo, Ontario N2L 2Y5 (Canada)

    2014-04-09

    We compute structure constants in N=4 SYM at one loop using Integrability. This requires having full control over the two loop eigenvectors of the dilatation operator for operators of arbitrary size. To achieve this, we develop an algebraic description called the Θ-morphism. In this approach we introduce impurities at each spin chain site, act with particular differential operators on the standard algebraic Bethe ansatz vectors and generate in this way higher loop eigenvectors. The final results for the structure constants take a surprisingly simple form, recently reported by us in the short note http://arxiv.org/abs/1202.4103. These are based on the tree level and one loop patterns together and also on some higher loop experiments involving simple operators.

  3. A Slater parameter optimisation interface for the CIV3 atomic structure code and its possible use with the R-matrix close coupling collision code

    International Nuclear Information System (INIS)

    Fawcett, B.C.; Hibbert, A.

    1989-11-01

    Details are here provided of amendments to the atomic structure code CIV3 which allow the optional adjustment of Slater parameters and average energies of configurations so that they result in improved energy levels and eigenvectors. It is also indicated how, in principle, the resultant improved eigenvectors can be utilised by the R-matrix collision code, thus providing an optimised target for close coupling collision strength calculations. An analogous computational method was recently reported for distorted wave collision strength calculations and applied to Fe XIII. The general method is suitable for the computation of collision strengths for complex ions and in some cases can then provide a basis for collision strength calculations in ions where ab initio computations break down or result in unnecessarily large errors. (author)

  4. Comparison of (e,2e), photoelectron and conventional spectroscopies for the Ar2 ion

    International Nuclear Information System (INIS)

    McCarthy, I.E.; Uylings, P.; Poppe, R.

    1978-05-01

    States of the Ar2 ion whose eigenvectors contain large components of single-hole configurations are observed in the (e,2e) and (γ,e) reactions on the Ar1 atom. The cross section is regarded as being proportional to the spectroscopic factor, that is the state expectation value of the single-hole configuration in the eigenvector. State expectation values obtained from these reactions for 1/2+ states are compared with ones obtained by diagonalizing an effective Hamiltonian in a model space, with radial matrix elements determined by fitting spectra for bound states. (e,2e) and conventional spectroscopy are compatible and provide complementary information about structure. Simple analysis of present (γ,e) data does not lead to compatible information on spectroscopic factors.

  5. Algorithm 589. SICEDR: a FORTRAN subroutine for improving the accuracy of computed matrix eigenvalues

    International Nuclear Information System (INIS)

    Dongarra, J.J.

    1982-01-01

    SICEDR is a FORTRAN subroutine for improving the accuracy of a computed real eigenvalue and improving or computing the associated eigenvector. It is first used to generate information during the determination of the eigenvalues by the Schur decomposition technique. In particular, the Schur decomposition technique results in an orthogonal matrix Q and an upper quasi-triangular matrix T, such that A = QTQ^T. Matrices A, Q, and T and the approximate eigenvalue, say lambda, are then used in the improvement phase. SICEDR uses an iterative method similar to iterative improvement for linear systems to improve the accuracy of lambda and improve or compute the eigenvector x in O(n²) work, where n is the order of the matrix A.
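
    SICEDR itself works from the Schur factors Q and T produced by the eigenvalue computation. The sketch below only illustrates the general idea of iteratively improving an approximate eigenpair, using a generic inverse-iteration/Rayleigh-quotient scheme rather than the actual SICEDR algorithm; the test matrix and starting guess are made up.

```python
import numpy as np

def refine_eigenpair(A, lam, x, iters=5):
    """Refine an approximate eigenpair (lam, x) of A by Rayleigh-quotient iteration.

    A generic sketch of iterative eigenpair improvement, not the SICEDR
    algorithm itself (which starts from a Schur decomposition A = QTQ^T).
    """
    n = A.shape[0]
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        try:
            y = np.linalg.solve(A - lam * np.eye(n), x)  # inverse-iteration step
        except np.linalg.LinAlgError:
            break                                        # lam is (numerically) exact
        x = y / np.linalg.norm(y)
        lam = x @ A @ x                                  # Rayleigh-quotient update
    return lam, x

# made-up symmetric test matrix and a rough starting guess
A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, x = refine_eigenpair(A, lam=3.5, x=np.array([0.0, 1.0]))
print(lam)   # converges to (5 + sqrt(5))/2 ~ 3.618
```

    Each pass solves one shifted linear system, which is the analogue of the iterative-improvement step for linear systems mentioned in the abstract.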

  6. Tailoring three-point functions and integrability IV. Θ-morphism

    International Nuclear Information System (INIS)

    Gromov, Nikolay; Vieira, Pedro

    2014-01-01

    We compute structure constants in N=4 SYM at one loop using Integrability. This requires having full control over the two loop eigenvectors of the dilatation operator for operators of arbitrary size. To achieve this, we develop an algebraic description called the Θ-morphism. In this approach we introduce impurities at each spin chain site, act with particular differential operators on the standard algebraic Bethe ansatz vectors and generate in this way higher loop eigenvectors. The final results for the structure constants take a surprisingly simple form, recently reported by us in the short note http://arxiv.org/abs/1202.4103. These are based on the tree level and one loop patterns together and also on some higher loop experiments involving simple operators

  7. Linear algebra

    CERN Document Server

    Berberian, Sterling K

    2014-01-01

    Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.

  8. (WRFDA) for WRF non-hydrostatic mesoscale model

    Indian Academy of Sciences (India)

    Sujata Pattanayak

    2018-05-22

    May 22, 2018 ... Keywords. WRF-NMM; WRFDA; single observation test; eigenvalues; eigenvector; correlation; tropical .... The perturbation variables here are defined as deviations ..... Synop, Sound, Metar, Pilot, Buoy, Ships, Airep, Geoamv ...

  9. Spectral segmentation of polygonized images with normalized cuts

    Energy Technology Data Exchange (ETDEWEB)

    Matsekh, Anna [Los Alamos National Laboratory; Skurikhin, Alexei [Los Alamos National Laboratory; Rosten, Edward [UNIV OF CAMBRIDGE

    2009-01-01

    We analyze numerical behavior of the eigenvectors corresponding to the lowest eigenvalues of the generalized graph Laplacians arising in the Normalized Cuts formulations of the image segmentation problem on coarse polygonal grids.
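
    The eigenvectors referred to above come from the generalized eigenproblem of the Normalized Cuts formulation, (D - W)v = λDv. A small sketch of a two-way cut on a toy affinity matrix follows; the affinities are invented, and a real segmentation would build W from the polygonized image rather than by hand.

```python
import numpy as np
from scipy.linalg import eigh

def normalized_cut_partition(W):
    """Two-way Normalized Cuts partition from a symmetric affinity matrix W.

    Solves the generalized eigenproblem (D - W) v = lambda D v and thresholds
    the eigenvector with the second-smallest eigenvalue at zero.
    """
    D = np.diag(W.sum(axis=1))
    L = D - W                      # graph Laplacian
    vals, vecs = eigh(L, D)        # generalized symmetric eigenproblem
    return vecs[:, 1] >= 0         # sign of the second-smallest eigenvector

# toy affinity matrix: two loosely connected cliques of 3 nodes each
W = np.array([[0, 1, 1, 0.05, 0, 0],
              [1, 0, 1, 0,    0, 0],
              [1, 1, 0, 0,    0, 0],
              [0.05, 0, 0, 0, 1, 1],
              [0, 0, 0, 1,    0, 1],
              [0, 0, 0, 1,    1, 0]], dtype=float)
print(normalized_cut_partition(W))   # splits nodes {0,1,2} from {3,4,5}
```

    On coarse polygonal grids the matrix W is small, which is why the numerical behaviour of these lowest eigenvectors is the interesting question studied in the record above.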

  10. Blood glucose control in healthy subject and patients receiving intravenous glucose infusion or total parenteral nutrition using glucagon-like peptide 1

    DEFF Research Database (Denmark)

    Nauck, Michael A; Walberg, Jörg; Vethacke, Arndt

    2004-01-01

    It was the aim of the study to examine whether the insulinotropic gut hormone GLP-1 is able to control or even normalise glycaemia in healthy subjects receiving intravenous glucose infusions and in severely ill patients hyperglycaemic during total parenteral nutrition.

  11. Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga-Koontz Transform

    Science.gov (United States)

    Binol, Hamidullah; Ochilov, Shuhrat; Alam, Mohammad S.; Bal, Abdullah

    2017-02-01

    Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving features belonging to the target is very crucial to the performance of target detection/recognition techniques. Fukunaga-Koontz Transform (FKT) based supervised band reduction technique can be used to provide this requirement. FKT achieves feature selection by transforming into a new space in where feature classes have complimentary eigenvectors. Analysis of these eigenvectors under two classes, target and background clutter, can be utilized for target oriented band reduction since each basis functions best represent target class while carrying least information of the background class. By selecting few eigenvectors which are the most relevant to the target class, dimension of hyperspectral data can be reduced and thus, it presents significant advantages for near real time target detection applications. The nonlinear properties of the data can be extracted by kernel approach which provides better target features. Thus, we propose constructing kernel FKT (KFKT) to present target oriented band reduction. The performance of the proposed KFKT based target oriented dimensionality reduction algorithm has been tested employing two real-world hyperspectral data and results have been reported consequently.
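
    The following sketch shows the linear Fukunaga-Koontz construction that the abstract builds on; the kernelized variant (KFKT) proposed there is not reproduced, and the synthetic "target" and "clutter" samples and the number of retained components are assumptions.

```python
import numpy as np

def fkt_projection(X_target, X_clutter, n_components=3):
    """Fukunaga-Koontz projection for target-oriented band reduction.

    X_target, X_clutter : arrays of shape (n_samples, n_bands).
    Returns a (n_bands, n_components) projection onto directions carrying
    most target energy and least clutter energy.
    """
    S1 = np.cov(X_target, rowvar=False)
    S2 = np.cov(X_clutter, rowvar=False)
    vals, vecs = np.linalg.eigh(S1 + S2)
    P = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12)))  # whiten S1 + S2
    S1_hat = P.T @ S1 @ P                       # now S1_hat + S2_hat = I
    e_vals, e_vecs = np.linalg.eigh(S1_hat)
    order = np.argsort(e_vals)[::-1][:n_components]  # dominant target directions
    return P @ e_vecs[:, order]

# synthetic hyperspectral-like example: 50 bands, different structure per class
rng = np.random.default_rng(1)
target = rng.normal(size=(200, 50)) @ rng.normal(size=(50, 50)) * 0.1
clutter = rng.normal(size=(500, 50))
print(fkt_projection(target, clutter).shape)   # (50, 3)
```

    Because the whitened class covariances share eigenvectors with eigenvalues summing to one, directions that best represent the target class automatically carry the least clutter information, which is the complementarity the abstract exploits.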

  12. Exact solution of corner-modified banded block-Toeplitz eigensystems

    International Nuclear Information System (INIS)

    Cobanera, Emilio; Alase, Abhijeet; Viola, Lorenza; Ortiz, Gerardo

    2017-01-01

    Motivated by the challenge of seeking a rigorous foundation for the bulk-boundary correspondence for free fermions, we introduce an algorithm for determining exactly the spectrum and a generalized-eigenvector basis of a class of banded block quasi-Toeplitz matrices that we call corner-modified . Corner modifications of otherwise arbitrary banded block-Toeplitz matrices capture the effect of boundary conditions and the associated breakdown of translational invariance. Our algorithm leverages the interplay between a non-standard, projector-based method of kernel determination (physically, a bulk-boundary separation) and families of linear representations of the algebra of matrix Laurent polynomials. Thanks to the fact that these representations act on infinite-dimensional carrier spaces in which translation symmetry is restored, it becomes possible to determine the eigensystem of an auxiliary projected block-Laurent matrix. This results in an analytic eigenvector Ansatz , independent of the system size, which we prove is guaranteed to contain the full solution of the original finite-dimensional problem. The actual solution is then obtained by imposing compatibility with a boundary matrix , whose shape is also independent of system size. As an application, we show analytically that eigenvectors of short-ranged fermionic tight-binding models may display power-law corrections to exponential behavior, and demonstrate the phenomenon for the paradigmatic Majorana chain of Kitaev. (paper)

  13. Imaging Arterial Fibres Using Diffusion Tensor Imaging—Feasibility Study and Preliminary Results

    Directory of Open Access Journals (Sweden)

    Ciaran K. Simms

    2010-01-01

    Full Text Available MR diffusion tensor imaging (DTI) was used to analyze the fibrous structure of aortic tissue. A fresh porcine aorta was imaged at 7T using a spin echo sequence with the following parameters: matrix 128 × 128 pixel; slice thickness 0.5 mm; interslice spacing 0.1 mm; number of slices 16; echo time 20.3 s; field of view 28 mm × 28 mm. Eigenvectors from the diffusion tensor images were calculated for the central image slice and the averaged tensors and the eigenvector corresponding to the largest eigenvalue showed two distinct angles corresponding to near 0∘ and 180∘ to the transverse plane of the aorta. Fibre tractography within the aortic volume imaged confirmed that fibre angles were oriented helically with lead angles of 15±2.5∘ and 175±2.5∘. The findings correspond to current histological and microscopy data on the fibrous structure of aortic tissue, and therefore the eigenvector maps and fibre tractography appear to reflect the alignment of the fibers in the aorta. In view of current efforts to develop noninvasive diagnostic tools for cardiovascular diseases, DTI may offer a technique to assess the structural properties of arterial tissue and hence any changes or degradation in arterial tissue.
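
    A minimal sketch of the per-voxel computation underlying such eigenvector maps — the principal eigenvector of a 3 × 3 diffusion tensor and its angle out of the transverse plane — is given below; the tensor values are hypothetical and chosen so the fibre direction is tilted by roughly 15°.

```python
import numpy as np

def principal_fibre_direction(D):
    """Principal eigenvector of a 3x3 symmetric diffusion tensor D and its
    angle (degrees) out of the transverse (x-y) plane."""
    vals, vecs = np.linalg.eigh(D)
    v1 = vecs[:, -1]                               # eigenvector of the largest eigenvalue
    angle = np.degrees(np.arcsin(abs(v1[2])))      # elevation out of the x-y plane
    return v1, angle

# hypothetical tensor: strong diffusion along a direction tilted ~15 degrees
theta = np.radians(15)
u = np.array([np.cos(theta), 0.0, np.sin(theta)])
D = 1.5e-3 * np.outer(u, u) + 0.3e-3 * np.eye(3)   # arbitrary units
v1, angle = principal_fibre_direction(D)
print(angle)   # ~15
```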

  14. Possibility of modifying the growth trajectory in Raeini Cashmere goat.

    Science.gov (United States)

    Ghiasi, Heydar; Mokhtari, M S

    2018-03-27

    The objective of this study was to investigate the possibility of modifying the growth trajectory in the Raeini Cashmere goat breed. In total, 13,193 records on live body weight collected from 4788 Raeini Cashmere goats were used. According to Akaike's information criterion (AIC), the single-trait random regression model that included fourth-order Legendre polynomials for the direct and maternal genetic effects and for the maternal and individual permanent environmental effects was the best model for estimating (co)variance components. The matrices of eigenvectors for (co)variances between random regression coefficients of the direct additive genetic effect were used to calculate eigenfunctions, and different eigenvector indices were also constructed. The results showed that the first eigenvalue explained 79.90% of the total genetic variance. Therefore, changes in body weight applying the first eigenfunction will be obtained rapidly. Selection based on the first eigenvector will cause favorable positive genetic gains for all body weights considered from birth to 12 months of age. For modifying the growth trajectory in the Raeini Cashmere goat, selection should be based on the second eigenfunction. The second eigenvalue accounted for 14.41% of the total genetic variance for body weights, which is low in comparison with the genetic variance explained by the first eigenvalue. Complex patterns of genetic change in the growth trajectory were observed under the third and fourth eigenfunctions, with only a low amount of genetic variance explained by the third and fourth eigenvalues.

  15. Detecting, anticipating, and predicting critical transitions in spatially extended systems.

    Science.gov (United States)

    Kwasniok, Frank

    2018-03-01

    A data-driven linear framework for detecting, anticipating, and predicting incipient bifurcations in spatially extended systems based on principal oscillation pattern (POP) analysis is discussed. The dynamics are assumed to be governed by a system of linear stochastic differential equations which is estimated from the data. The principal modes of the system together with corresponding decay or growth rates and oscillation frequencies are extracted as the eigenvectors and eigenvalues of the system matrix. The method can be applied to stationary datasets to identify the least stable modes and assess the proximity to instability; it can also be applied to nonstationary datasets using a sliding window approach to track the changing eigenvalues and eigenvectors of the system. As a further step, a genuinely nonstationary POP analysis is introduced. Here, the system matrix of the linear stochastic model is time-dependent, allowing for extrapolation and prediction of instabilities beyond the learning data window. The methods are demonstrated and explored using the one-dimensional Swift-Hohenberg equation as an example, focusing on the dynamics of stochastic fluctuations around the homogeneous stable state prior to the first bifurcation. The POP-based techniques are able to extract and track the least stable eigenvalues and eigenvectors of the system; the nonstationary POP analysis successfully predicts the timing of the first instability and the unstable mode well beyond the learning data window.
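
    A compact sketch of stationary POP analysis as described above — estimating a linear propagator from lag-0 and lag-1 covariances and reading decay rates and oscillation frequencies off its eigenvalues — is given below. The two-variable damped-oscillator data are synthetic, and the nonstationary (sliding-window or time-dependent) variants discussed in the abstract are not included.

```python
import numpy as np

def pop_analysis(X, dt=1.0):
    """Principal oscillation pattern (POP) analysis of multivariate data X (T, n).

    Fits the propagator B of x_{t+1} = B x_t + noise from lag-0 and lag-1
    covariances and returns growth rates, frequencies and patterns (POPs).
    """
    X = X - X.mean(axis=0)
    C0 = X[:-1].T @ X[:-1] / (len(X) - 1)       # lag-0 covariance
    C1 = X[1:].T @ X[:-1] / (len(X) - 1)        # lag-1 covariance
    B = C1 @ np.linalg.inv(C0)                  # one-step linear propagator
    lam, patterns = np.linalg.eig(B)
    growth_rates = np.log(np.abs(lam)) / dt     # negative = stable mode
    frequencies = np.angle(lam) / dt
    return growth_rates, frequencies, patterns

# synthetic example: a damped rotation (stable oscillatory mode) plus noise
rng = np.random.default_rng(0)
A = np.array([[0.95, -0.10], [0.10, 0.95]])
x, traj = np.zeros(2), []
for _ in range(2000):
    x = A @ x + 0.1 * rng.normal(size=2)
    traj.append(x.copy())
rates, freqs, patterns = pop_analysis(np.array(traj))
print(rates, freqs)
```

    A growth rate approaching zero from below signals the proximity to instability that the method tracks when applied in a sliding window.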

  16. Identifying the structure of group correlation in the Korean financial market

    Science.gov (United States)

    Ahn, Sanghyun; Choi, Jaewon; Lim, Gyuchang; Cha, Kil Young; Kim, Sooyong; Kim, Kyungsik

    2011-06-01

    We investigate the structure of the cross-correlation in the Korean stock market. We analyze daily cross-correlations between price fluctuations of 586 different Korean stock entities for the 6-year time period from 2003 to 2008. The main purpose is to investigate the structure of group correlation and its stability by undressing the market-wide effect using the Markowitz multi-factor model and the network-based approach. We find the explicit list of significant firms in the few largest eigenvectors from the undressed correlation matrix. We also observe that each contributor is involved in the same business sectors. The structure of group correlation cannot remain constant during each 1-year time period with different starting points, whereas only the two largest eigenvectors are stable over the 6 years; 8-9 eigenvectors remain stable for half a year. The structure of group correlation in the Korean financial market is disturbed during a sufficiently short time period even though the group correlation exists as an ensemble for the 6-year time period in the evolution of the system. We verify the structure of group correlation by applying a network-based approach. In addition, we examine relations between market capitalization and businesses. The Korean stock market shows a different behavior compared to mature markets, implying that the KOSPI is a target for short-positioned investors.

  17. Matrices and transformations

    CERN Document Server

    Pettofrezzo, Anthony J

    1978-01-01

    Elementary, concrete approach: fundamentals of matrix algebra, linear transformation of the plane, application of properties of eigenvalues and eigenvectors to study of conics. Includes proofs of most theorems. Answers to odd-numbered exercises.

  18. Normal levels of total body sodium and chlorine by neutron activation analysis

    International Nuclear Information System (INIS)

    Kennedy, N.S.J.; Eastell, R.; Smith, M.A.; Tothill, P.

    1983-01-01

    In vivo neutron activation analysis was used to measure total body sodium and chlorine in 18 male and 18 female normal adults. Corrections for body size were developed. Normalisation factors were derived which enable the prediction of the normal levels of sodium and chlorine in a subject. The coefficient of variation of normalised sodium was 5.9% in men and 6.9% in women, and of normalised chlorine 9.3% in men and 5.5% in women. In the range examined (40-70 years) no significant age dependence was observed for either element. Total body sodium was correlated with total body chlorine and total body calcium. Sodium excess, defined as the amount of body sodium in excess of that associated with chlorine, also correlated well with total body calcium. In females there was a mean annual loss of sodium excess of 1.2% after the menopause, similar to the loss of calcium. (author)

  19. MicroRNA Expression Profiling to Identify and Validate Reference Genes for the Relative Quantification of microRNA in Rectal Cancer

    DEFF Research Database (Denmark)

    Eriksen, Anne Haahr Mellergaard; Andersen, Rikke Fredslund; Pallisgaard, Niels

    2016-01-01

    the miRNA profiling experiment, miR-645, miR-193a-5p, miR-27a and let-7g were identified as stably expressed, both in malignant and stromal tissue. In addition, NormFinder confirmed high expression stability for the four miRNAs. In the RT-qPCR based validation experiments, no significant difference...... management. Real-time quantitative polymerase chain reaction (RT-qPCR) is commonly used, when measuring miRNA expression. Appropriate normalisation of RT-qPCR data is important to ensure reliable results. The aim of the present study was to identify stably expressed miRNAs applicable as normaliser candidates...... in future studies of miRNA expression in rectal cancer.MATERIALS AND METHODS: We performed high-throughput miRNA profiling (OpenArray®) on ten pairs of laser micro-dissected rectal cancer tissue and adjacent stroma. A global mean expression normalisation strategy was applied to identify the most stably...

  20. A reference frame for blood volume in children and adolescents

    Directory of Open Access Journals (Sweden)

    Donckerwolcke Raymond

    2006-02-01

    Full Text Available Abstract Background: Our primary purpose was to determine the normal range and variability of blood volume (BV) in healthy children, in order to provide reference values during childhood and adolescence. Our secondary aim was to correlate these vascular volumes to body size parameters and pubertal stages, in order to determine the best normalisation parameter. Methods: Plasma volume (PV) and red cell volume (RCV) were measured and the F-cell ratio was calculated in 77 children with idiopathic nephrotic syndrome in drug-free remission (mean age, 9.8 ± 4.6 y). BV was calculated as the sum of PV and RCV. Due to the dependence of these values on age, size and sex, all data were normalised for body size parameters. Results: BV normalised for lean body mass (LBM) did not differ significantly by sex. Conclusion: LBM was the anthropometric index most closely correlated to vascular fluid volumes, independent of age, gender and pubertal stage.

  1. "For me it's just normal" - Strategies of children and young people from rainbow families against de-normalization. The case of Slovenia and Germany.

    Directory of Open Access Journals (Sweden)

    Uli Streib Brzič

    2013-01-01

    Full Text Available The paper presents some findings from an international study called “School is out – Experiences of children from rainbow families in school” which explored how children and young people from rainbow families anticipate, experience and deal with schools as heteronormative spaces. In the research, the term de-normalisation was developed to describe the processes by which children with LGBT-identified parents are perceived and constructed as not normal, as classified beyond the ‘hetero-normative normality’, which is expressed through ‘othering’ by others, for example in interaction. To avoid, prevent or reduce the impacts of de-normalisation processes, the interviewed children and youth have developed different strategies which we present in two frames: one involving disclosure and concealment and the other involving verbalisations and justifications. Based on these insights and findings, the article also outlines ideas on the resilience factors against de-normalisation and emphasises the importance of children and youth not standing alone against it.

  2. Use of eigenvectors in the solution of the flutter equation

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    1993-07-01


  3. Morphological covariance in anatomical MRI scans can identify discrete neural pathways in the brain and their disturbances in persons with neuropsychiatric disorders.

    Science.gov (United States)

    Bansal, Ravi; Hao, Xuejun; Peterson, Bradley S

    2015-05-01

    We hypothesize that coordinated functional activity within discrete neural circuits induces morphological organization and plasticity within those circuits. Identifying regions of morphological covariation that are independent of morphological covariation in other regions may therefore allow us to identify discrete neural systems within the brain. Comparing the magnitude of these variations in individuals who have psychiatric disorders with the magnitude of variations in healthy controls may allow us to identify aberrant neural pathways in psychiatric illnesses. We measured surface morphological features by applying nonlinear, high-dimensional warping algorithms to manually defined brain regions. We transferred those measures onto the surface of a unit sphere via conformal mapping and then used spherical wavelets and their scaling coefficients to simplify the data structure representing these surface morphological features of each brain region. We used principal component analysis (PCA) to calculate covariation in these morphological measures, as represented by their scaling coefficients, across several brain regions. We then assessed whether brain subregions that covaried in morphology, as identified by large eigenvalues in the PCA, identified specific neural pathways of the brain. To do so, we spatially registered the subnuclei for each eigenvector into the coordinate space of a Diffusion Tensor Imaging dataset; we used these subnuclei as seed regions to track and compare fiber pathways with known fiber pathways identified in neuroanatomical atlases. We applied these procedures to anatomical MRI data in a cohort of 82 healthy participants (42 children, 18 males, age 10.5 ± 2.43 years; 40 adults, 22 males, age 32.42 ± 10.7 years) and 107 participants with Tourette's Syndrome (TS) (71 children, 59 males, age 11.19 ± 2.2 years; 36 adults, 21 males, age 37.34 ± 10.9 years). We evaluated the construct validity of the identified covariation in morphology

  4. Complex correlation approach for high frequency financial data

    Science.gov (United States)

    Wilinski, Mateusz; Ikeda, Yuichi; Aoyama, Hideaki

    2018-02-01

    We propose a novel approach that allows the calculation of a Hilbert transform based complex correlation for unevenly spaced data. This method is especially suitable for high frequency trading data, which are of a particular interest in finance. Its most important feature is the ability to take into account lead-lag relations on different scales, without knowing them in advance. We also present results obtained with this approach while working on Tokyo Stock Exchange intraday quotations. We show that individual sectors and subsectors tend to form important market components which may follow each other with small but significant delays. These components may be recognized by analysing eigenvectors of complex correlation matrix for Nikkei 225 stocks. Interestingly, sectorial components are also found in eigenvectors corresponding to the bulk eigenvalues, traditionally treated as noise.
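
    For evenly sampled series, the Hilbert-transform-based complex correlation underlying this approach can be sketched as below. The extension to unevenly spaced tick data, which is the paper's actual contribution, is not reproduced, and the test series are synthetic.

```python
import numpy as np
from scipy.signal import hilbert

def complex_correlation(x, y):
    """Complex correlation of two evenly sampled series.

    Each series is turned into an analytic signal via the Hilbert transform;
    the phase of the resulting complex coefficient indicates the lead-lag
    relation (the paper generalizes this to unevenly spaced data).
    """
    zx = hilbert(x - np.mean(x))
    zy = hilbert(y - np.mean(y))
    c = np.mean(zx * np.conj(zy)) / np.sqrt(
        np.mean(np.abs(zx) ** 2) * np.mean(np.abs(zy) ** 2))
    return np.abs(c), np.angle(c)   # magnitude and lead-lag phase

# y is x delayed by 3 samples plus noise: expect a clearly non-zero phase
rng = np.random.default_rng(2)
x = rng.normal(size=1000)
y = np.roll(x, 3) + 0.1 * rng.normal(size=1000)
print(complex_correlation(x, y))
```

    Applying such pairwise coefficients to all Nikkei 225 stocks yields the complex correlation matrix whose eigenvectors, as described above, reveal sectorial components.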

  5. High values of disorder-generated multifractals and logarithmically correlated processes

    International Nuclear Information System (INIS)

    Fyodorov, Yan V.; Giraud, Olivier

    2015-01-01

    In the introductory section of the article we give a brief account of recent insights into statistics of high and extreme values of disorder-generated multifractals following a recent work by the first author with P. Le Doussal and A. Rosso (FLR) employing a close relation between multifractality and logarithmically correlated random fields. We then substantiate some aspects of the FLR approach analytically for multifractal eigenvectors in the Ruijsenaars–Schneider ensemble (RSE) of random matrices introduced by E. Bogomolny and the second author by providing an ab initio calculation that reveals hidden logarithmic correlations at the background of the disorder-generated multifractality. In the rest we investigate numerically a few representative models of that class, including the study of the highest component of multifractal eigenvectors in the Ruijsenaars–Schneider ensemble

  6. Nature of complex time eigenvalues of the one speed transport equation in a homogeneous sphere

    International Nuclear Information System (INIS)

    Dahl, E.B.; Sahni, D.C.

    1990-01-01

    The complex time eigenvalues of the transport equation have been studied for one speed neutrons, scattered isotropically in a homogeneous sphere with vacuum boundary conditions. It is shown that the complex decay constants vary continuously with the radius of the sphere. Our earlier conjecture (Dahl and Sahni (1983-84)) regarding disjoint arcs is thus shown to be true. We also indicate that complex decay constants exist even for large assemblies, though with rapid oscillations in the corresponding eigenvectors. These modes cannot be predicted by the diffusion equation as this behaviour of the eigenvectors contradicts the assumption of 'slowly varying flux' needed to derive the diffusion approximation from the transport equation. For an infinite system, the existence of complex modes is related to the solution of a homogeneous equation. (author)

  7. Parameter estimation for an expanding universe

    Directory of Open Access Journals (Sweden)

    Jieci Wang

    2015-03-01

    Full Text Available We study the parameter estimation for excitations of Dirac fields in the expanding Robertson–Walker universe. We employ quantum metrology techniques to demonstrate the possibility for high precision estimation for the volume rate of the expanding universe. We show that the optimal precision of the estimation depends sensitively on the dimensionless mass m̃ and dimensionless momentum k̃ of the Dirac particles. The optimal precision for the ratio estimation peaks at some finite dimensionless mass m̃ and momentum k̃. We find that the precision of the estimation can be improved by choosing the probe state as an eigenvector of the Hamiltonian. This occurs because the largest quantum Fisher information is obtained by performing projective measurements implemented by the projectors onto the eigenvectors of specific probe states.

  8. Technical normalization in the geoinformatics branch

    Directory of Open Access Journals (Sweden)

    Bronislava Horáková

    2006-09-01

    Full Text Available A basic principle of technical normalisation is to support market development by creating unified technical rules for all subjects concerned. The information and communication technology industry is characterised by certain specific features that set it apart from traditional industry. These features bring new demands to the normalisation domain, above all the flexibility needed to reflect the rapidly developing ICT market in an elastic way. The goal of the paper is to provide a comprehensive overview of the current process of technical normalisation in the geoinformatics branch.

  9. Mobility in Learning: The Feasibility of Encouraging Language Learning on Smartphones

    Directory of Open Access Journals (Sweden)

    Keith Barrs

    2011-09-01

    Full Text Available With normalised technology in language learning contexts there is an unprecedented opportunity to re-define the nature of learning. Traditional ideas of classroom-based learning are giving way to modern ideas of ‘24/7 anywhere, anytime’ learning which is accessed and managed in part or in whole by the learners themselves, primarily on mobile devices. This is a "work in progress" article detailing the initial stages of a study investigating normalisation of smart phones in a language classroom in Japan.

  10. Accuracy of apparent diffusion coefficient in differentiating pancreatic neuroendocrine tumour from intrapancreatic accessory spleen

    International Nuclear Information System (INIS)

    Pandey, Ankur; Pandey, Pallavi; Ghasabeh, Mounes Aliyari; Varzaneh, Farnaz Najmi; Khoshpouri, Pegah; Shao, Nannan; Pour, Manijeh Zargham; Fouladi, Daniel Fadaei; Kamel, Ihab R.; Hruban, Ralph H.; O'Broin-Lennon, Anne Marie

    2018-01-01

    To evaluate and compare the accuracy of absolute apparent diffusion coefficient (ADC) and normalised ADC (lesion-to-spleen ADC ratio) in differentiating pancreatic neuroendocrine tumour (NET) from intrapancreatic accessory spleen (IPAS). The study included 62 patients with the diagnosis of pancreatic NET (n=51) or IPAS (n=11). Two independent reviewers measured ADC on all lesions and spleen. Receiver operating characteristics (ROC) analysis to differentiate NET from IPAS was performed and compared for absolute and normalised ADC. Inter-reader reliability for the two methods was assessed. Pancreatic NET had significantly higher absolute ADC (1.431 × 10⁻³ vs 0.967 × 10⁻³ mm²/s; P<0.0001) and normalised ADC (1.59 vs 1.09; P<0.0001) compared to IPAS. An ADC value of ≥1.206 × 10⁻³ mm²/s was 70.6% sensitive and 90.9% specific for the diagnosis of NET vs. IPAS. A lesion-to-spleen ADC ratio of ≥1.25 was 80.4% sensitive and 81.8% specific, while a ratio of ≥1.29 was 74.5% sensitive and 100% specific in the differentiation. The areas under the curve (AUC) for the two methods were similar (88.2% vs. 88.8%; P=0.899). Both methods demonstrated excellent inter-reader reliability with ICCs for absolute ADC and ADC ratio being 0.957 and 0.927, respectively. Both absolute and normalised ADC allow clinically relevant differentiation of pancreatic NET and IPAS. (orig.)

  11. Impact of particle density and initial volume on mathematical compression models

    DEFF Research Database (Denmark)

    Sonnergaard, Jørn

    2000-01-01

    In the calculation of the coefficients of compression models for powders either the initial volume or the particle density is introduced as a normalising factor. The influence of these normalising factors is, however, widely different on coefficients derived from the Kawakita, Walker and Heckel equations. The problems are illustrated by investigations on compaction profiles of 17 materials with different molecular structures and particle densities. It is shown that the particle density of materials with covalent bonds in the Heckel model acts as a key parameter with a dominating influence...

  12. Transitional Justice

    DEFF Research Database (Denmark)

    Gissel, Line Engbo

    This presentation builds on an earlier published article, 'Contemporary Transitional Justice: Normalising a Politics of Exception'. It argues that the field of transitional justice has undergone a shift in conceptualisation and hence practice. Transitional justice is presently understood to be the provision of ordinary criminal justice in contexts of exceptional political transition.

  13. COPDIRC - calculation of particle deposition in reactor coolants

    International Nuclear Information System (INIS)

    Reeks, M.W.

    1982-06-01

    A description is given of a computer code, COPDIRC, intended for the calculation of the deposition of particulate onto smooth, perfectly sticky surfaces in a gas-cooled reactor coolant. The deposition is assumed to be limited by transport in the boundary layer adjacent to the depositing surface. This implies that the deposition velocity, normalised with respect to the local friction velocity, is an almost universal function of the normalised particle relaxation time. Deposition is assumed similar to deposition in an equivalent smooth, perfectly absorbing pipe. The deposition is calculated using two models. (author)
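
    The two normalised quantities mentioned here are the deposition velocity scaled by the friction velocity and the particle relaxation time scaled by the wall units of the flow. A small sketch of those normalisations follows; the relaxation-time expression assumes Stokes drag on a spherical particle, and all symbols and numbers are generic stand-ins rather than quantities taken from COPDIRC itself.

    # Hedged sketch of the normalisations used in boundary-layer deposition models:
    # V+ = V_dep / u_tau and tau+ = tau_p * u_tau**2 / nu, assuming Stokes drag.
    def particle_relaxation_time(rho_p, d_p, mu):
        """Stokes relaxation time of a spherical particle (s)."""
        return rho_p * d_p**2 / (18.0 * mu)

    def normalised_groups(v_dep, u_tau, nu, rho_p, d_p, mu):
        tau_p = particle_relaxation_time(rho_p, d_p, mu)
        v_plus = v_dep / u_tau              # dimensionless deposition velocity
        tau_plus = tau_p * u_tau**2 / nu    # dimensionless relaxation time
        return v_plus, tau_plus

    # Example: a 1 micron particle in a gas-like coolant (illustrative numbers only).
    print(normalised_groups(v_dep=1e-3, u_tau=0.5, nu=1.5e-5,
                            rho_p=2500.0, d_p=1e-6, mu=1.8e-5))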

  14. Bibliometric indicators of young authors in astrophysics

    DEFF Research Database (Denmark)

    Havemann, Frank; Larsen, Birger

    2015-01-01

    We test 16 bibliometric indicators with respect to their validity at the level of the individual researcher by estimating their power to predict later successful researchers. We compare the indicators of a sample of astrophysics researchers who later co-authored highly cited papers before...... their first landmark paper with the distributions of these indicators over a random control group of young authors in astronomy and astrophysics. We find that field and citation-window normalisation substantially improves the predicting power of citation indicators. The sum of citation numbers normalised...

  15. Effect of biomass concentration on methane oxidation activity using mature compost and graphite granules as substrata.

    Science.gov (United States)

    Xie, S; O'Dwyer, T; Freguia, S; Pikaar, I; Clarke, W P

    2016-10-01

    Reported methane oxidation activity (MOA) varies widely for common landfill cover materials. Variation is expected due to differences in surface area, the composition of the substratum and culturing conditions. MOA per methanotrophic cell has been calculated in the study of natural systems such as lake sediments to examine the inherent conditions for methanotrophic activity. In this study, biomass normalised MOA (i.e., MOA per methanotrophic cell) was measured on stabilised compost, a commonly used cover in landfills, and on graphite granules, an inert substratum widely used in microbial electrosynthesis studies. After initially enriching methanotrophs on both substrata, biomass normalised MOA was quantified under excess oxygen and limiting methane conditions in 160 ml serum vials on both substrata and blends of the substrata. Biomass concentration was measured using the bicinchoninic acid assay for microbial protein. The biomass normalised MOA was consistent across all compost-to-graphite granule blends, but varied with time, reflecting the growth phase of the microorganisms. The biomass normalised MOA ranged from 0.069±0.006 μmol CH4/mg dry biomass/h during active growth to 0.024±0.001 μmol CH4/mg dry biomass/h for established biofilms, regardless of the substrata employed, indicating the substrata were equally effective in terms of inherent composition. The correlation of MOA with biomass is consistent with studies on methanotrophic activity in natural systems, but biomass normalised MOA varies by over 5 orders of magnitude between studies. This is partially due to different methods being used to quantify biomass, such as pmoA gene quantification and the culture-dependent Most Probable Number method, but also indicates that long-term exposure of materials to a supply of methane in an aerobic environment, as can occur in natural systems, leads to the enrichment and adaptation of types suitable for those conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Scale dependence of the alignment between strain rate and rotation in turbulent shear flow

    KAUST Repository

    Fiscaletti, D.

    2016-10-24

    The scale dependence of the statistical alignment tendencies of the eigenvectors of the strain-rate tensor e_i with the vorticity vector ω is examined in the self-preserving region of a planar turbulent mixing layer. Data from a direct numerical simulation are filtered at various length scales and the probability density functions of the magnitude of the alignment cosines between the two unit vectors, |e_i · ω̂|, are examined. It is observed that the alignment tendencies are insensitive to the concurrent large-scale velocity fluctuations, but are quantitatively affected by the nature of the concurrent large-scale velocity-gradient fluctuations. It is confirmed that the small-scale (local) vorticity vector is preferentially aligned in parallel with the large-scale (background) extensive strain-rate eigenvector e_1, in contrast to the global tendency for ω to be aligned in parallel with the intermediate strain-rate eigenvector [Hamlington et al., Phys. Fluids 20, 111703 (2008)]. When only data from regions of the flow that exhibit strong swirling are included, the so-called high-enstrophy worms, the alignment tendencies are exaggerated with respect to the global picture. These findings support the notion that the production of enstrophy, responsible for a net cascade of turbulent kinetic energy from large scales to small scales, is driven by vorticity stretching due to the preferential parallel alignment between ω and the nonlocal e_1, and that the strongly swirling worms are kinematically significant to this process.
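
    The quantity examined in this record is the alignment cosine |e_i · ω̂| between each unit strain-rate eigenvector and the unit vorticity vector. A brief sketch of that computation for a single velocity-gradient tensor is given below; numpy is assumed, and the gradient tensor is a made-up example, not DNS data.

    # Hedged sketch: alignment cosines between strain-rate eigenvectors and vorticity
    # for one velocity-gradient tensor A = du_i/dx_j (illustrative values only).
    import numpy as np

    A = np.array([[0.1, 0.4, -0.2],
                  [0.0, -0.3, 0.5],
                  [0.2, -0.1, 0.2]])          # velocity-gradient tensor (example)

    S = 0.5 * (A + A.T)                        # strain-rate tensor (symmetric part)
    omega = np.array([A[2, 1] - A[1, 2],       # vorticity vector from the
                      A[0, 2] - A[2, 0],       # antisymmetric part of A
                      A[1, 0] - A[0, 1]])
    omega_hat = omega / np.linalg.norm(omega)

    eigvals, eigvecs = np.linalg.eigh(S)       # orthonormal eigenvectors e_i
    order = np.argsort(eigvals)[::-1]          # e_1 = most extensive eigenvector
    cosines = [abs(np.dot(eigvecs[:, i], omega_hat)) for i in order]
    print(cosines)                             # |e_i . omega_hat| for i = 1, 2, 3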

  17. Scale dependence of the alignment between strain rate and rotation in turbulent shear flow

    KAUST Repository

    Fiscaletti, D.; Elsinga, G. E.; Attili, Antonio; Bisetti, Fabrizio; Buxton, O. R. H.

    2016-01-01

    The scale dependence of the statistical alignment tendencies of the eigenvectors of the strain-rate tensor e_i with the vorticity vector ω is examined in the self-preserving region of a planar turbulent mixing layer. Data from a direct numerical simulation are filtered at various length scales and the probability density functions of the magnitude of the alignment cosines between the two unit vectors, |e_i · ω̂|, are examined. It is observed that the alignment tendencies are insensitive to the concurrent large-scale velocity fluctuations, but are quantitatively affected by the nature of the concurrent large-scale velocity-gradient fluctuations. It is confirmed that the small-scale (local) vorticity vector is preferentially aligned in parallel with the large-scale (background) extensive strain-rate eigenvector e_1, in contrast to the global tendency for ω to be aligned in parallel with the intermediate strain-rate eigenvector [Hamlington et al., Phys. Fluids 20, 111703 (2008)]. When only data from regions of the flow that exhibit strong swirling are included, the so-called high-enstrophy worms, the alignment tendencies are exaggerated with respect to the global picture. These findings support the notion that the production of enstrophy, responsible for a net cascade of turbulent kinetic energy from large scales to small scales, is driven by vorticity stretching due to the preferential parallel alignment between ω and the nonlocal e_1, and that the strongly swirling worms are kinematically significant to this process.

  18. Impact of Bone Marrow Radiation Dose on Acute Hematologic Toxicity in Cervical Cancer: Principal Component Analysis on High Dimensional Data

    International Nuclear Information System (INIS)

    Yun Liang; Messer, Karen; Rose, Brent S.; Lewis, John H.; Jiang, Steve B.; Yashar, Catheryn M.; Mundt, Arno J.; Mell, Loren K.

    2010-01-01

    Purpose: To study the effects of increasing pelvic bone marrow (BM) radiation dose on acute hematologic toxicity in patients undergoing chemoradiotherapy, using a novel modeling approach to preserve the local spatial dose information. Methods and Materials: The study included 37 cervical cancer patients treated with concurrent weekly cisplatin and pelvic radiation therapy. The white blood cell count nadir during treatment was used as the indicator for acute hematologic toxicity. Pelvic BM radiation dose distributions were standardized across patients by registering the pelvic BM volumes to a common template, followed by dose remapping using deformable image registration, resulting in a dose array. Principal component (PC) analysis was applied to the dose array, and the significant eigenvectors were identified by linear regression on the PCs. The coefficients for the PC regression and the significant eigenvectors were represented in three dimensions to identify critical BM subregions where dose accumulation is associated with hematologic toxicity. Results: We identified five PCs associated with acute hematologic toxicity. The PC regression model explained a high proportion of the variation in acute hematologic toxicity (adjusted R², 0.49). Three-dimensional rendering of a linear combination of the significant eigenvectors revealed patterns consistent with anatomical distributions of hematopoietically active BM. Conclusions: We have developed a novel approach that preserves spatial dose information to model effects of radiation dose on toxicity, which may be useful in optimizing radiation techniques to avoid critical subregions of normal tissues. Further validation of this approach in a large cohort is ongoing.
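
    The modelling strategy described above reduces each patient's remapped dose array to a few principal-component scores and regresses the toxicity indicator on those scores. The sketch below illustrates that pipeline on random stand-in data; the array sizes, the use of scikit-learn, and all variable names are assumptions for illustration, not details of the study.

    # Hedged sketch: PCA + linear regression on per-patient dose arrays (synthetic data).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_patients, n_voxels = 37, 5000
    dose = rng.gamma(shape=2.0, scale=10.0, size=(n_patients, n_voxels))  # remapped doses
    wbc_nadir = rng.normal(2.5, 0.8, size=n_patients)                      # toxicity indicator

    pca = PCA(n_components=5)                 # keep a handful of principal components
    scores = pca.fit_transform(dose)          # per-patient PC scores
    model = LinearRegression().fit(scores, wbc_nadir)

    # Map the regression back to voxel space to see which subregions drive toxicity.
    voxel_pattern = pca.components_.T @ model.coef_
    print(model.score(scores, wbc_nadir), voxel_pattern.shape)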

  19. Perron–Frobenius theorem for nonnegative multilinear forms and extensions

    OpenAIRE

    Friedland, S.; Gaubert, S.; Han, L.

    2013-01-01

    We prove an analog of the Perron–Frobenius theorem for multilinear forms with nonnegative coefficients and, more generally, for polynomial maps with nonnegative coefficients. We determine the geometric convergence rate of the power algorithm to the unique normalized eigenvector.
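
    The power algorithm referred to here is, in the matrix case, the familiar iteration x ← Ax / ||Ax|| applied to a nonnegative matrix, which converges to the normalized Perron eigenvector under the usual primitivity conditions. A minimal matrix-case sketch follows; the multilinear and polynomial-map generalisations treated in the paper are analogous but not reproduced, and the test matrix is invented.

    # Hedged sketch: power iteration for the normalised Perron eigenvector of a
    # nonnegative (primitive) matrix; the paper treats the multilinear generalisation.
    import numpy as np

    def perron_vector(A, tol=1e-12, max_iter=10_000):
        x = np.ones(A.shape[0])
        x /= x.sum()
        for _ in range(max_iter):
            y = A @ x
            y /= y.sum()                       # normalise (1-norm) at every step
            if np.linalg.norm(y - x, 1) < tol:
                break
            x = y
        return x

    A = np.array([[0.0, 2.0, 1.0],
                  [1.0, 0.0, 3.0],
                  [2.0, 1.0, 0.0]])            # nonnegative example matrix
    print(perron_vector(A))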

  20. A comparison of Normalised Difference Snow Index (NDSI) and ...

    African Journals Online (AJOL)

    As an alternative, thematic cover-types based on remotely sensed data-sets are becoming popular. In this study we hypothesise that the reduced dimensionality obtained using Principal Components Analysis (PCA), in concert with the Normalized Difference Snow Index (NDSI), is valuable for improving the accuracy of snow cover maps.

  1. Analysis of Heavy-Tailed Time Series

    DEFF Research Database (Denmark)

    Xie, Xiaolei

    This thesis is about analysis of heavy-tailed time series. We discuss tail properties of real-world equity return series and investigate the possibility that a single tail index is shared by all return series of actively traded equities in a market. Conditions for this hypothesis to be true...... are identified. We study the eigenvalues and eigenvectors of sample covariance and sample auto-covariance matrices of multivariate heavy-tailed time series, and particularly for time series with very high dimensions. Asymptotic approximations of the eigenvalues and eigenvectors of such matrices are found...... and expressed in terms of the parameters of the dependence structure, among others. Furthermore, we study an importance sampling method for estimating rare-event probabilities of multivariate heavy-tailed time series generated by matrix recursion. We show that the proposed algorithm is efficient in the sense...

  2. Deflation for inversion with multiple right-hand sides in QCD

    International Nuclear Information System (INIS)

    Stathopoulos, A; Abdel-Rehim, A M; Orginos, K

    2009-01-01

    Most calculations in lattice Quantum Chromodynamics (QCD) involve the solution of a series of linear systems of equations with exceedingly large matrices and a large number of right hand sides. Iterative methods for these problems can be sped up significantly if we deflate approximations of appropriate invariant spaces from the initial guesses. Recently we have developed eigCG, a modification of the Conjugate Gradient (CG) method, which while solving a linear system can reuse a window of the CG vectors to compute eigenvectors almost as accurately as the Lanczos method. The number of approximate eigenvectors can increase as more systems are solved. In this paper we review some of the characteristics of eigCG and show how it helps remove the critical slowdown in QCD calculations. Moreover, we study scaling with lattice volume and an extension of the technique to nonsymmetric problems.
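
    The benefit of deflation comes from projecting the approximate eigenvectors out of each new right hand side before (or while) running CG, for example by building a Galerkin correction from the stored eigenvector block. The snippet below sketches only that initial-guess step, not eigCG itself; the names and the symmetric positive-definite test matrix are placeholders.

    # Hedged sketch: deflated initial guess x0 = V (V^T A V)^{-1} V^T b, where the
    # columns of V approximate eigenvectors of A. This is the projection step only,
    # not the eigCG algorithm described in the paper.
    import numpy as np

    def deflated_initial_guess(A, b, V):
        small = V.T @ A @ V                    # small projected matrix
        return V @ np.linalg.solve(small, V.T @ b)

    rng = np.random.default_rng(1)
    M = rng.standard_normal((200, 200))
    A = M @ M.T + 200 * np.eye(200)            # SPD test matrix (stand-in)
    b = rng.standard_normal(200)
    _, vecs = np.linalg.eigh(A)
    V = vecs[:, :10]                           # pretend these came from eigCG
    x0 = deflated_initial_guess(A, b, V)
    print(np.linalg.norm(b - A @ x0) / np.linalg.norm(b))   # reduced initial residual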

  3. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix pertaining to water quality variables (p = 22) of a small data set comprising 55 samples (stations) from which water samples were collected. Because in ecology and environmental sciences the data sets are invariably small owing to the high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis using Ward's method, which does not require any stringent distributional assumptions.
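
    The experiment described here amounts to drawing repeated bootstrap samples of a fixed size from the data matrix, running PCA on each, and comparing the resulting eigenvalues and leading eigenvectors across replicates. A compact sketch of that loop on synthetic data is shown below; the sample sizes follow the abstract, while the data and all names are illustrative stand-ins.

    # Hedged sketch: bootstrap PCA to study how eigenvalues/eigenvectors vary with
    # sample size (synthetic stand-in for the 55-station x 22-variable data set).
    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.standard_normal((55, 22))        # stations x water-quality variables

    def pca_eig(X):
        Xc = X - X.mean(axis=0)
        vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        order = np.argsort(vals)[::-1]
        return vals[order], vecs[:, order]

    for n in (20, 30, 40, 50):
        top_eigenvalues = []
        for _ in range(100):                     # 100 bootstrap samples per size
            idx = rng.integers(0, data.shape[0], size=n)
            vals, _ = pca_eig(data[idx])
            top_eigenvalues.append(vals[:10])
        spread = np.std(top_eigenvalues, axis=0)
        print(n, np.round(spread[:3], 3))        # variability of leading eigenvalues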

  4. Predictive error dependencies when using pilot points and singular value decomposition in groundwater model calibration

    DEFF Research Database (Denmark)

    Christensen, Steen; Doherty, John

    2008-01-01

    A significant practical problem with the pilot point method is to choose the location of the pilot points. We present a method that is intended to relieve the modeler from much of this responsibility. The basic idea is that a very large number of pilot points are distributed more or less uniformly...... over the model area. Singular value decomposition (SVD) of the (possibly weighted) sensitivity matrix of the pilot point based model produces eigenvectors of which we pick a small number corresponding to significant eigenvalues. Super parameters are defined as factors through which parameter...... combinations corresponding to the chosen eigenvectors are multiplied to obtain the pilot point values. The model can thus be transformed from having many-pilot-point parameters to having a few super parameters that can be estimated by nonlinear regression on the basis of the available observations. (This...
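
    In this scheme the super parameters are coefficients of a few retained singular vectors of the sensitivity matrix, and the many pilot-point values are recovered by multiplying those coefficients back through the retained vectors. The sketch below shows just that linear mapping on random matrices; dimensions, the truncation rule and all names are made up for illustration.

    # Hedged sketch: truncated SVD of a (weighted) sensitivity matrix J, with a few
    # "super parameters" s mapped back to many pilot-point values p = V_k s.
    import numpy as np

    rng = np.random.default_rng(3)
    n_obs, n_pilot_points = 60, 500
    J = rng.standard_normal((n_obs, n_pilot_points))      # sensitivity matrix (stand-in)

    U, sing_vals, Vt = np.linalg.svd(J, full_matrices=False)
    k = int(np.sum(sing_vals > 0.05 * sing_vals[0]))      # keep "significant" directions
    V_k = Vt[:k].T                                        # retained parameter-space vectors

    s = rng.standard_normal(k)                            # super parameters (estimated by
    p = V_k @ s                                           # nonlinear regression in practice)
    print(k, p.shape)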

  5. Cross-correlation matrix analysis of Chinese and American bank stocks in subprime crisis

    International Nuclear Information System (INIS)

    Zhu Shi-Zhao; Li Xin-Li; Zhang Wen-Qing; Wang Bing-Hong; Nie Sen; Yu Gao-Feng; Han Xiao-Pu

    2015-01-01

    In order to study the universality of the interactions among different markets, we analyze the cross-correlation matrix of the prices of Chinese and American bank stocks. We find that the stock prices of the emerging market are more correlated than those of the developed market. Considering that the values of the components of an eigenvector may be positive or negative, we analyze the differences between the two markets in combination with the endogenous and exogenous events which influence the financial markets. We find that the sparse pattern of eigenvector components beyond the threshold value shows no change in American bank stocks before and after the subprime crisis. However, it changes from sparse to dense for Chinese bank stocks. By using the threshold value to exclude the external factors, we simulate the interactions in financial markets. (paper)
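
    Operationally, this kind of analysis builds the cross-correlation matrix of the return (or price-change) series, takes its leading eigenvectors, and inspects which components exceed a threshold in magnitude. A small sketch of that procedure on synthetic series follows; the threshold rule and all data are invented for illustration and are not the paper's choices.

    # Hedged sketch: eigenvector components of a stock correlation matrix, keeping
    # only components whose magnitude exceeds a chosen threshold.
    import numpy as np

    rng = np.random.default_rng(7)
    n_days, n_stocks = 500, 12
    common = rng.standard_normal((n_days, 1))              # shared "market" factor
    returns = 0.6 * common + rng.standard_normal((n_days, n_stocks))

    C = np.corrcoef(returns, rowvar=False)                  # cross-correlation matrix
    vals, vecs = np.linalg.eigh(C)
    leading = vecs[:, np.argmax(vals)]                      # eigenvector of largest eigenvalue

    threshold = 1.0 / np.sqrt(n_stocks)                     # illustrative cut-off
    significant = np.where(np.abs(leading) > threshold)[0]
    print(np.round(leading, 2), significant)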

  6. Introduction to the mathematics of inversion in remote sensing and indirect measurements

    CERN Document Server

    Twomey, S

    2013-01-01

    Developments in Geomathematics, 3: Introduction to the Mathematics of Inversion in Remote Sensing and Indirect Measurements focuses on the application of the mathematics of inversion in remote sensing and indirect measurements, including vectors and matrices, eigenvalues and eigenvectors, and integral equations. The publication first examines simple problems involving inversion, theory of large linear systems, and physical and geometric aspects of vectors and matrices. Discussions focus on geometrical view of matrix operations, eigenvalues and eigenvectors, matrix products, inverse of a matrix, transposition and rules for product inversion, and algebraic elimination. The manuscript then tackles the algebraic and geometric aspects of functions and function space and linear inversion methods, as well as the algebraic and geometric nature of constrained linear inversion, least squares solution, approximation by sums of functions, and integral equations. The text examines information content of indirect sensing m...

  7. Transfer matrix method for dynamics modeling and independent modal space vibration control design of linear hybrid multibody system

    Science.gov (United States)

    Rong, Bao; Rui, Xiaoting; Lu, Kun; Tao, Ling; Wang, Guoping; Ni, Xiaojun

    2018-05-01

    In this paper, an efficient method for the dynamics modeling and vibration control design of a linear hybrid multibody system (MS) is studied based on the transfer matrix method. The natural vibration characteristics of a linear hybrid MS are solved by using low-order transfer equations. Then, by constructing the brand-new body dynamics equation, augmented operator and augmented eigenvector, the orthogonality of the augmented eigenvectors of a linear hybrid MS is satisfied, and its state space model expressed in each independent modal space is obtained easily. According to this dynamics model, a robust independent modal space fuzzy controller is designed for vibration control of a general MS, and the genetic optimization of some critical control parameters of the fuzzy tuners is also presented. Two illustrative examples are performed, and the results show that this method is computationally efficient and offers excellent control performance.

  8. Follicle vascularity coordinates corpus luteum blood flow and progesterone production.

    Science.gov (United States)

    de Tarso, S G S; Gastal, G D A; Bashir, S T; Gastal, M O; Apgar, G A; Gastal, E L

    2017-03-01

    Colour Doppler ultrasonography was used to compare the ability of preovulatory follicle (POF) blood flow and its dimensions to predict the size, blood flow and progesterone production capability of the subsequent corpus luteum (CL). Cows (n=30) were submitted to a synchronisation protocol. Follicles ≥7 mm were measured and follicular wall blood flow evaluated every 12 h for approximately 3.5 days until ovulation. After ovulation, cows were scanned daily for 8 days and similar parameters were evaluated for the CL. Blood samples were collected and plasma progesterone concentrations quantified. All parameters were positively correlated. Correlation values ranged from 0.26 to 0.74 on data normalised to ovulation and from 0.31 to 0.74 on data normalised to maximum values. Correlations between calculated ratios of both POF and CL in data normalised to ovulation and to maximum values ranged from moderate (0.57) to strong (0.87). Significant (P<0.05) relationships were observed between these parameters and the progesterone concentrations of the resultant CL. These findings indicate that follicle vascularity coordinates CL blood flow and progesterone production in synchronised beef cows.

  9. MLPAinter for MLPA interpretation: an integrated approach for the analysis, visualisation and data management of Multiplex Ligation-dependent Probe Amplification

    Directory of Open Access Journals (Sweden)

    Morreau Hans

    2010-01-01

    Full Text Available Abstract Background Multiplex Ligation-Dependent Probe Amplification (MLPA) is an application that can be used for the detection of multiple chromosomal aberrations in a single experiment. In one reaction, up to 50 different genomic sequences can be analysed. For a reliable work-flow, tools are needed for administrative support, data management, normalisation, visualisation, reporting and interpretation. Results Here, we developed a data management system, MLPAinter for MLPA interpretation, which is Windows executable and has a stand-alone database for monitoring and interpreting the MLPA data stream that is generated from the experimental setup to analysis, quality control and visualisation. A statistical approach is applied for the normalisation and analysis of large series of MLPA traces, making use of multiple control samples and internal controls. Conclusions MLPAinter visualises MLPA data in plots with information about sample replicates, normalisation settings, and sample characteristics. This integrated approach helps in the automated handling of large series of MLPA data and guarantees a quick and streamlined dataflow from the beginning of an experiment to an authorised report.

  10. Development and validation of a computational model of the knee joint for the evaluation of surgical treatments for osteoarthritis.

    Science.gov (United States)

    Mootanah, R; Imhauser, C W; Reisse, F; Carpanen, D; Walker, R W; Koff, M F; Lenhoff, M W; Rozbruch, S R; Fragomen, A T; Dewan, Z; Kirane, Y M; Cheah, K; Dowell, J K; Hillstrom, H J

    2014-01-01

    A three-dimensional (3D) knee joint computational model was developed and validated to predict knee joint contact forces and pressures for different degrees of malalignment. A 3D computational knee model was created from high-resolution radiological images to emulate passive sagittal rotation (full extension to 65° flexion) and weight acceptance. A cadaveric knee mounted on a six-degree-of-freedom robot was subjected to matching boundary and loading conditions. A ligament-tuning process minimised kinematic differences between the robotically loaded cadaver specimen and the finite element (FE) model. The model was validated against measured intra-articular forces and pressures. Percent full-scale errors between FE-predicted and in vitro-measured values in the medial and lateral compartments were 6.67% and 5.94%, respectively, for normalised peak pressure values, and 7.56% and 4.48%, respectively, for normalised force values. The knee model can accurately predict normalised intra-articular pressure and forces for different loading conditions and could be further developed for subject-specific surgical planning.

  11. University of Glasgow at WebCLEF 2005

    DEFF Research Database (Denmark)

    Macdonald, C.; Plachouras, V.; He, B.

    2006-01-01

    We participated in the WebCLEF 2005 monolingual task. In this task, a search system aims to retrieve relevant documents from a multilingual corpus of Web documents from Web sites of European governments. Both the documents and the queries are written in a wide range of European languages......, namely content, title, and anchor text of incoming hyperlinks. We use a technique called per-field normalisation, which extends the Divergence From Randomness (DFR) framework, to normalise the term frequencies, and to combine them across the three fields. We also employ the length of the URL path of Web...
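
    Per-field normalisation, as used in this run, rescales the raw term frequency in each field by that field's length before the per-field frequencies are combined with field weights. A sketch of one common form of this idea (a DFR-style "Normalisation 2" applied per field) is given below; the constants, weights and field lengths are invented, and this is not claimed to be the exact formula used by the authors.

    # Hedged sketch: per-field term-frequency normalisation and combination, in the
    # spirit of DFR Normalisation 2 (tfn = tf * log2(1 + c * avg_len / len)).
    import math

    def normalised_tf(tf, field_len, avg_field_len, c):
        return tf * math.log2(1.0 + c * avg_field_len / field_len)

    def combined_tf(field_tfs, field_lens, avg_lens, weights, cs):
        return sum(
            w * normalised_tf(tf, length, avg, c)
            for tf, length, avg, w, c in zip(field_tfs, field_lens, avg_lens, weights, cs)
        )

    # Example document with content, title and anchor-text fields (made-up numbers).
    print(combined_tf(field_tfs=[3, 1, 2], field_lens=[400, 8, 25],
                      avg_lens=[350, 7, 20], weights=[1.0, 2.0, 1.5], cs=[1.0, 1.0, 1.0]))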

  12. Homogenisation of a Wigner-Seitz cell in two group diffusion theory

    International Nuclear Information System (INIS)

    Allen, F.R.

    1968-02-01

    Two group diffusion theory is used to develop a theory for the homogenisation of a Wigner-Seitz cell, neglecting azimuthal flux components of higher order than dipoles. An iterative method of solution is suggested for linkage with reactor calculations. The limiting theory for no cell leakage leads to cell edge flux normalisation of cell parameters, the current design method for SGHW reactor design calculations. Numerical solutions are presented for a cell-plus-environment model with monopoles only. The results demonstrate the exact theory in comparison with the approximate recipes of normalisation to cell edge, moderator average, or cell average flux levels. (author)

  13. Baroclinic wave configurations evolution at European scale in the period 1948-2013

    Science.gov (United States)

    Carbunaru, Daniel; Burcea, Sorin; Carbunaru, Felicia

    2016-04-01

    The main aim of the study was to investigate the dynamic characteristics of synoptic configurations at the European scale, and especially in the south-eastern part of Europe, for the period 1948-2013. Using empirical orthogonal function analysis, applied simultaneously to the daily average geopotential field at different pressure levels (200 hPa, 300 hPa, 500 hPa and 850 hPa) during the warm (April-September) and cold (October-March) seasons, on a synoptic spatial domain centered on Europe (27.5° W to 45° E and 32.5° N to 72.5° N), the main mode of oscillation characterising the vertical shift of mean baroclinic waves was obtained. The analysis, applied independently to each of the 66 years, showed that the first eigenvectors describe about 60% of the data in warm periods and about 40% of the data in cold seasons for each year. In comparison, secondary eigenvectors describe up to 20% and 10% of the data. Thus, the analysis was focused on the complex evolution of the first eigenvector over the 66 years, during the summer period. On average, this eigenvector describes a small vertical phase shift in the west part of the domain and a large one in the eastern part. Because the spatial extent of the considered synoptic domain incorporates in the west the AMO (Atlantic Multidecadal Oscillation) and NAO (North Atlantic Oscillation) oscillations, and in the north is sensitive to the AO (Arctic Oscillation) oscillation, these three oscillations were considered as modulating dynamic factors at the hemispheric scale. The preliminary results show that in the summer seasons the AMO and NAO oscillations modulated the vertical phase shift of the baroclinic wave in the west of the area (Northwestern Europe), and the relationship between the AO and NAO oscillations modulated the vertical phase shift in the southeast area (Southeast Europe). Second, it was shown how this vertical phase shift modulates the overall behavior of cyclonic activity, particularly in Southeastern Europe. This work has been developed

  14. Shock and Rarefaction Waves in a Heterogeneous Mantle

    Science.gov (United States)

    Jordan, J.; Hesse, M. A.

    2012-12-01

    We explore the effect of heterogeneities on partial melting and melt migration during active upwelling in the Earth's mantle. We have constructed simple, explicit nonlinear models in one dimension to examine heterogeneity and its dynamic effects on porosity, temperature and the magnesium number in a partially molten, porous medium composed of olivine. The compositions of the melt and solid are defined by a closed, binary phase diagram for a simplified, two-component olivine system. The two-component solid solution is represented by a phase loop where concentrations 0 and 1 correspond to fayalite and forsterite, respectively. For analysis, we examine an advective system with a Riemann initial condition. Chromatographic tools and theory have primarily been used to track large rare earth elements as tracers. In our case, we employ these theoretical tools to highlight the importance of the magnesium number, enthalpy and overall heterogeneity in the dynamics of melt migration. We calculate the eigenvectors and eigenvalues in the concentration-enthalpy space in order to glean the characteristics of the waves emerging from the Riemann step. Analysis of Riemann problems of this nature shows that the composition-enthalpy waves can be represented by self-similar solutions. The eigenvalues of the composition-enthalpy system represent the characteristic wave propagation speeds of the compositions and enthalpy through the domain. Furthermore, the corresponding eigenvectors are the directions of variation, or 'pathways', in concentration-enthalpy space that the characteristic waves follow. In the two-component system, the Riemann problem yields two waves connected by an intermediate concentration-enthalpy state determined by the intersections of the integral curves of the eigenvectors emanating from both the initial and boundary states. The first wave, 'slow path', and second wave, 'fast path', follow the aforementioned pathways set by the eigenvectors. The slow path wave

  15. A classification system for one Killing vector solutions of Einstein's equations

    International Nuclear Information System (INIS)

    Hoenselaers, C.

    1978-01-01

    A double classification system for one Killing vector solutions in terms of the eigenvectors and eigenvalues of the Ricci and Bach tensor of the associated three manifold is proposed. The calculations of the Bach tensor are carried out for special cases. (author)

  16. Fiber crossing in human brain depicted with diffusion tensor MR imaging

    DEFF Research Database (Denmark)

    Wiegell, M.R.; Larsson, H.B.; Wedeen, V.J.

    2000-01-01

    Human white matter fiber crossings were investigated with use of the full eigenstructure of the magnetic resonance diffusion tensor. Intravoxel fiber dispersions were characterized by the plane spanned by the major and medium eigenvectors and depicted with three-dimensional graphics. This method...
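
    In practice the plane spanned by the major and medium eigenvectors is obtained by diagonalising the 3x3 diffusion tensor in each voxel and sorting its eigenvalues. A tiny single-voxel sketch is given below; the tensor values are illustrative, not measured data.

    # Hedged sketch: eigenstructure of a single-voxel diffusion tensor; the plane of
    # fibre dispersion is spanned by the major and medium eigenvectors, with the
    # minor eigenvector as its normal.
    import numpy as np

    D = np.array([[1.7, 0.2, 0.1],
                  [0.2, 1.1, 0.0],
                  [0.1, 0.0, 0.4]]) * 1e-3     # diffusion tensor (mm^2/s, example)

    vals, vecs = np.linalg.eigh(D)
    order = np.argsort(vals)[::-1]              # major, medium, minor
    major, medium, minor = (vecs[:, i] for i in order)
    plane_normal = np.cross(major, medium)      # parallel (up to sign) to the minor eigenvector
    print(np.round(vals[order], 5), np.round(plane_normal, 3))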

  17. Feature extraction for classification in the data mining process

    NARCIS (Netherlands)

    Pechenizkiy, M.; Puuronen, S.; Tsymbal, A.

    2003-01-01

    Dimensionality reduction is a very important step in the data mining process. In this paper, we consider feature extraction for classification tasks as a technique to overcome problems occurring because of "the curse of dimensionality". Three different eigenvector-based feature extraction approaches

  18. Performance Analysis of the Decentralized Eigendecomposition and ESPRIT Algorithm

    Science.gov (United States)

    Suleiman, Wassim; Pesavento, Marius; Zoubir, Abdelhak M.

    2016-05-01

    In this paper, we consider performance analysis of the decentralized power method for the eigendecomposition of the sample covariance matrix based on the averaging consensus protocol. An analytical expression of the second order statistics of the eigenvectors obtained from the decentralized power method which is required for computing the mean square error (MSE) of subspace-based estimators is presented. We show that the decentralized power method is not an asymptotically consistent estimator of the eigenvectors of the true measurement covariance matrix unless the averaging consensus protocol is carried out over an infinitely large number of iterations. Moreover, we introduce the decentralized ESPRIT algorithm which yields fully decentralized direction-of-arrival (DOA) estimates. Based on the performance analysis of the decentralized power method, we derive an analytical expression of the MSE of DOA estimators using the decentralized ESPRIT algorithm. The validity of our asymptotic results is demonstrated by simulations.

  19. A Quantum Implementation Model for Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ammar Daskin

    2018-02-01

    Full Text Available The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow–Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas result in terms formed by the principal components of the weight matrix, namely, the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase estimation algorithm is known to provide speedups over the conventional algorithms for the eigenvalue-related problems. Combining the quantum amplitude amplification with the phase estimation algorithm, a quantum implementation model for artificial neural networks using the Widrow–Hoff learning rule is presented. The complexity of the model is found to be linear in the size of the weight matrix. This provides a quadratic improvement over the classical algorithms. Quanta 2018; 7: 7–18.

  20. Pore Fluid Effects on Shear Modulus for Sandstones with Soft Anisotropy

    International Nuclear Information System (INIS)

    Berryman, J G

    2004-01-01

    A general analysis of poroelasticity for vertical transverse isotropy (VTI) shows that four eigenvectors are pure shear modes with no coupling to the pore-fluid mechanics. The remaining two eigenvectors are linear combinations of pure compression and uniaxial shear, both of which are coupled to the fluid mechanics. After reducing the problem to a 2x2 system, the analysis shows in a relatively elementary fashion how a poroelastic system with an isotropic solid elastic frame, but with anisotropy introduced through the poroelastic coefficients, interacts with the mechanics of the pore fluid and produces shear dependence on fluid properties in the overall mechanical system. The analysis shows, for example, that this effect is always present (though sometimes small in magnitude) in the systems studied, and can be quite large (up to a definite maximum increase of 20 per cent) in some rocks, including Spirit River sandstone and Schuler-Cotton Valley sandstone.

  1. Efficient GW calculations using eigenvalue-eigenvector decomposition of the dielectric matrix

    Science.gov (United States)

    Nguyen, Huy-Viet; Pham, T. Anh; Rocca, Dario; Galli, Giulia

    2011-03-01

    During the past 25 years, the GW method has been successfully used to compute electronic quasi-particle excitation spectra of a variety of materials. It is, however, a computationally intensive technique, as it involves summations over occupied and empty electronic states to evaluate both the Green function (G) and the dielectric matrix (DM) entering the expression of the screened Coulomb interaction (W). Recent developments have shown that eigenpotentials of DMs can be efficiently calculated without any explicit evaluation of empty states. In this work, we will present a computationally efficient approach to the calculation of GW spectra by combining a representation of DMs in terms of their eigenpotentials and a recently developed iterative algorithm. As a demonstration of the efficiency of the method, we will present calculations of the vertical ionization potentials of several systems. Work was funded by SciDAC-e DE-FC02-06ER25777.

  2. Using Solar Radiation Pressure to Control L2 Orbits

    Science.gov (United States)

    Tene, Noam; Richon, Karen; Folta, David

    1998-01-01

    The main perturbations at the Sun-Earth Lagrange points L1 and L2 are from solar radiation pressure (SRP), the Moon and the planets. Traditional approaches to trajectory design for Lagrange-point orbits use maneuvers every few months to correct for these perturbations. The gravitational effects of the Moon and the planets are small and periodic. However, they cannot be neglected because small perturbations in the direction of the unstable eigenvector are enough to cause exponential growth within a few months. The main effect of a constant SRP is to shift the center of the orbit by a small distance. For spacecraft with large sun-shields like the Microwave Anisotropy Probe (MAP) and the Next Generation Space Telescope (NGST), the SRP effect is larger than all other perturbations and depends mostly on spacecraft attitude. Small variations in the spacecraft attitude are large enough to excite or control the exponential eigenvector. A closed-loop linear controller based on the SRP variations would eliminate one of the largest errors to the orbit and provide a continuous acceleration for use in controlling other disturbances. It is possible to design reference trajectories that account for the periodic lunar and planetary perturbations and still satisfy mission requirements. When such trajectories are used the acceleration required to control the unstable eigenvector is well within the capabilities of a continuous linear controller. Initial estimates show that by using attitude control it should be possible to minimize and even eliminate thruster maneuvers for station keeping.

  3. Linear algebra of the permutation invariant Crow-Kimura model of prebiotic evolution.

    Science.gov (United States)

    Bratus, Alexander S; Novozhilov, Artem S; Semenov, Yuri S

    2014-10-01

    A particular case of the famous quasispecies model - the Crow-Kimura model with a permutation invariant fitness landscape - is investigated. Using the fact that the mutation matrix in the case of a permutation invariant fitness landscape has a special tridiagonal form, a change of the basis is suggested such that in the new coordinates a number of analytical results can be obtained. In particular, using the eigenvectors of the mutation matrix as the new basis, we show that the quasispecies distribution approaches a binomial one and give simple estimates for the speed of convergence. Another consequence of the suggested approach is a parametric solution to the system of equations determining the quasispecies. Using this parametric solution we show that our approach leads to exact asymptotic results in some cases, which are not covered by the existing methods. In particular, we are able to present not only the limit behavior of the leading eigenvalue (mean population fitness), but also the exact formulas for the limit quasispecies eigenvector for special cases. For instance, this eigenvector has a geometric distribution in the case of the classical single peaked fitness landscape. On the biological side, we propose a mathematical definition, based on the closeness of the quasispecies to the binomial distribution, which can be used as an operational definition of the notorious error threshold. Using this definition, we suggest two approximate formulas to estimate the critical mutation rate after which the quasispecies delocalization occurs. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. SU-E-I-58: Objective Models of Breast Shape Undergoing Mammography and Tomosynthesis Using Principal Component Analysis.

    Science.gov (United States)

    Feng, Ssj; Sechopoulos, I

    2012-06-01

    To develop an objective model of the shape of the compressed breast undergoing mammographic or tomosynthesis acquisition. Automated thresholding and edge detection were performed on 984 anonymized digital mammograms (492 craniocaudal (CC) view mammograms and 492 mediolateral oblique (MLO) view mammograms) to extract the edge of each breast. Principal Component Analysis (PCA) was performed on these edge vectors to identify a limited set of parameters and eigenvectors. These parameters and eigenvectors comprise a model that can be used to describe the breast shapes present in acquired mammograms and to generate realistic models of breasts undergoing acquisition. Sample breast shapes were then generated from this model and evaluated. The mammograms in the database were previously acquired for a separate study and authorized for use in further research. The PCA successfully identified two principal components and their corresponding eigenvectors, forming the basis for the breast shape model. The simulated breast shapes generated from the model are reasonable approximations of clinically acquired mammograms. Using PCA, we have obtained models of the compressed breast undergoing mammographic or tomosynthesis acquisition based on objective analysis of a large image database. Up to now, the breast in the CC view has been approximated as a semi-circular tube, while there has been no objectively obtained model for the MLO view breast shape. Such models can be used for various breast imaging research applications, such as x-ray scatter estimation and correction, dosimetry estimates, and computer-aided detection and diagnosis. © 2012 American Association of Physicists in Medicine.

  5. The Faddeev equation and essential spectrum of a Hamiltonian in Fock space

    International Nuclear Information System (INIS)

    Muminov, M.I.; Rasulov, T.H.

    2008-05-01

    A model operator H associated with a quantum system with a non-conserved number of particles is studied. A Faddeev-type system of equations for the eigenvectors of H is constructed. The essential spectrum of H is described by the spectrum of the channel operator. (author)

  6. A computational approach for fluid queues driven by truncated birth-death processes.

    NARCIS (Netherlands)

    Lenin, R.B.; Parthasarathy, P.R.

    2000-01-01

    In this paper, we analyze fluid queues driven by truncated birth-death processes with general birth and death rates. We compute the equilibrium distribution of the content of the fluid buffer by providing efficient numerical procedures to compute the eigenvalues and the eigenvectors of the

  7. Random matrix theory and acoustic resonances in plates with an approximate symmetry

    DEFF Research Database (Denmark)

    Andersen, Anders Peter; Ellegaard, C.; Jackson, A.D.

    2001-01-01

    We discuss a random matrix model of systems with an approximate symmetry and present the spectral fluctuation statistics and eigenvector characteristics for the model. An acoustic resonator like, e.g., an aluminum plate may have an approximate symmetry. We have measured the frequency spectrum and...

  8. Sensor scheme design for active structural acoustic control

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    Efficient sensing schemes for the active reduction of sound radiation from plates are presented based on error signals derived from spatially weighted plate velocity or near-field pressure. The schemes result in near-optimal reductions as compared to weighting procedures derived from eigenvector or

  9. A Brief Historical Introduction to Determinants with Applications

    Science.gov (United States)

    Debnath, L.

    2013-01-01

    This article deals with a short historical introduction to determinants with applications to the theory of equations, geometry, multiple integrals, differential equations and linear algebra. Included are some properties of determinants with proofs, eigenvalues, eigenvectors and characteristic equations with examples of applications to simple…

  10. Linear algebra hidden in the Google internet search engine (Lineární algebra ukrytá v internetovém vyhledávači Google)

    Czech Academy of Sciences Publication Activity Database

    Brandts, J.; Křížek, Michal

    2007-01-01

    Roč. 52, č. 3 (2007), s. 195-204 ISSN 0032-2423 R&D Projects: GA MŠk 1P05ME749 Institutional research plan: CEZ:AV0Z10190503 Keywords : data structures * teleportation matrix * eigenvalues and eigenvectors Subject RIV: BA - General Mathematics

  11. Self-averaging correlation functions in the mean field theory of spin glasses

    International Nuclear Information System (INIS)

    Mezard, M.; Parisi, G.

    1984-01-01

    In the infinite range spin glass model, we consider the staggered spin σ_λ associated with a given eigenvector of the interaction matrix. We show that the thermal average of σ_λ² is a self-averaging quantity and we compute it.

  12. Low cadmium exposure in males and lactating females–estimation of biomarkers

    Energy Technology Data Exchange (ETDEWEB)

    Stajnko, Anja [Department of Environmental Sciences, Jožef Stefan Institute, Jamova 39, Ljubljana (Slovenia); Jožef Stefan International Postgraduate School, Jamova 39, Ljubljana (Slovenia); Falnoga, Ingrid, E-mail: ingrid.falnoga@ijs.si [Department of Environmental Sciences, Jožef Stefan Institute, Jamova 39, Ljubljana (Slovenia); Tratnik, Janja Snoj; Mazej, Darja [Department of Environmental Sciences, Jožef Stefan Institute, Jamova 39, Ljubljana (Slovenia); Jagodic, Marta [Department of Environmental Sciences, Jožef Stefan Institute, Jamova 39, Ljubljana (Slovenia); Jožef Stefan International Postgraduate School, Jamova 39, Ljubljana (Slovenia); Krsnik, Mladen [Institute of Clinical Chemistry and Biochemistry, University Medical Centre Ljubljana, Njegoševa 4, Ljubljana (Slovenia); Kobal, Alfred B. [Department of Occupational Health, Idrija Mercury Mine, Arkova 43, Idrija (Slovenia); Prezelj, Marija [Institute of Clinical Chemistry and Biochemistry, University Medical Centre Ljubljana, Njegoševa 4, Ljubljana (Slovenia); Kononenko, Lijana [Chemical Office of RS, Ministry of Health of RS, Ajdovščina 4, Ljubljana (Slovenia); Horvat, Milena [Department of Environmental Sciences, Jožef Stefan Institute, Jamova 39, Ljubljana (Slovenia); Jožef Stefan International Postgraduate School, Jamova 39, Ljubljana (Slovenia)

    2017-01-15

    Background: Urine cadmium (Cd) and renal function biomarkers, mostly analysed in urine spot samples, are well established biomarkers of occupational exposure. Their use and associations at low environmental level are common, but have recently been questioned, particularly in terms of physiological variability and normalisation bias in the case of urine spot samples. Aim: To determine the appropriateness of spot urine and/or blood Cd exposure biomarkers and their relationships with renal function biomarkers at low levels of exposure. To this end, we used data from Slovenian human biomonitoring program involving 1081 Slovenians (548 males, mean age 31 years; 533 lactating females, mean age 29 years; 2007–2015) who have not been exposed to Cd occupationally. Results: Geometric means (GMs) of Cd in blood and spot urine samples were 0.27 ng/mL (0.28 for males and 0.33 for females) and 0.19 ng/mL (0.21 for males and 0.17 for females), respectively. Differing results were obtained when contrasting normalisation by urine creatinine with specific gravity. GMs of urine albumin (Alb), alpha-1-microglobulin (A1M), N-acetyl-beta-glucosaminidase (NAG), and immunoglobulin G (IgG) were far below their upper reference limits. Statistical analysis of unnormalised or normalised urine data often yielded inconsistent and conflicting results (or trends), so association analyses with unnormalised data were taken as more valid. Relatively weak positive associations were observed between urine Cd (ng/mL) and blood Cd (β=0.11, p=0.002 for males and β=0.33, p<0.001 for females) and for females between urine NAG and blood Cd (β=0.14, p=0.04). No associations were found between other renal function biomarkers and blood Cd. Associations between Cd and renal function biomarkers in urine were stronger (p<0.05, β=0.11–0.63). Mostly, all of the associations stayed significant but weakened after normalisation for diuresis. In the case of A1M, its associations with Cd were influenced by

  13. Low cadmium exposure in males and lactating females–estimation of biomarkers

    International Nuclear Information System (INIS)

    Stajnko, Anja; Falnoga, Ingrid; Tratnik, Janja Snoj; Mazej, Darja; Jagodic, Marta; Krsnik, Mladen; Kobal, Alfred B.; Prezelj, Marija; Kononenko, Lijana; Horvat, Milena

    2017-01-01

    Background: Urine cadmium (Cd) and renal function biomarkers, mostly analysed in urine spot samples, are well established biomarkers of occupational exposure. Their use and associations at low environmental level are common, but have recently been questioned, particularly in terms of physiological variability and normalisation bias in the case of urine spot samples. Aim: To determine the appropriateness of spot urine and/or blood Cd exposure biomarkers and their relationships with renal function biomarkers at low levels of exposure. To this end, we used data from Slovenian human biomonitoring program involving 1081 Slovenians (548 males, mean age 31 years; 533 lactating females, mean age 29 years; 2007–2015) who have not been exposed to Cd occupationally. Results: Geometric means (GMs) of Cd in blood and spot urine samples were 0.27 ng/mL (0.28 for males and 0.33 for females) and 0.19 ng/mL (0.21 for males and 0.17 for females), respectively. Differing results were obtained when contrasting normalisation by urine creatinine with specific gravity. GMs of urine albumin (Alb), alpha-1-microglobulin (A1M), N-acetyl-beta-glucosaminidase (NAG), and immunoglobulin G (IgG) were far below their upper reference limits. Statistical analysis of unnormalised or normalised urine data often yielded inconsistent and conflicting results (or trends), so association analyses with unnormalised data were taken as more valid. Relatively weak positive associations were observed between urine Cd (ng/mL) and blood Cd (β=0.11, p=0.002 for males and β=0.33, p<0.001 for females) and for females between urine NAG and blood Cd (β=0.14, p=0.04). No associations were found between other renal function biomarkers and blood Cd. Associations between Cd and renal function biomarkers in urine were stronger (p<0.05, β=0.11–0.63). Mostly, all of the associations stayed significant but weakened after normalisation for diuresis. In the case of A1M, its associations with Cd were influenced by

  14. Reference doses and patient size in paediatric radiology

    International Nuclear Information System (INIS)

    Hart, D.; Wall, B.; Shrimpton, P.

    2000-01-01

    There is a wide range in patient size from a newborn baby to a 15 year old adolescent. Reference doses for paediatric radiology can sensibly be established only for specific sizes of children. Here five standard sizes have been chosen, representing 0 (newborn), 1, 5, 10 and 15 year old patients. This selection of standard ages has the advantage of matching the paediatric mathematical phantoms which are often used in Monte Carlo organ dose calculations. A method has been developed for calculating factors for normalising doses measured on individual children to those for the nearest standard-sized 'child'. These normalisation factors for entrance surface dose (ESD) and dose-area product (DAP) measurements depend on the thickness of the real child, the thickness of the nearest standard 'child', and an effective linear attenuation coefficient (μ) which is itself a function of the x-ray spectrum, the field size, and whether or not an antiscatter grid is used. Entrance and exit dose measurements were made with phantom material representing soft tissue to establish μ values for abdominal and head examinations, and with phantom material representing lung for chest examinations. These measurements of μ were confirmed and extended to other x-ray spectra and field sizes by Monte Carlo calculations. The normalisation factors are tabulated for ESD measurements for specific radiographic projections through the head and trunk, and for DAP measurements for complete multiprojection examinations in the trunk. The normalisation factors were applied to European survey data for entrance surface dose and dose-area product measurements to derive provisional reference doses for common radiographic projections and for micturating cystourethrography (MCU) examinations - the most frequent fluoroscopic examination on children. (author)
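
    The normalisation described here converts a dose measured on a real child into the value expected for the nearest standard-sized 'child'. Purely as an illustration, under the simplifying assumption that, for a fixed exit (detector) dose, the entrance surface dose varies exponentially with patient thickness, such a factor can be sketched as follows; the exponential form, the μ value and the function names are assumptions made for this sketch, not the published tabulated factors.

    # Hedged sketch: normalising an entrance surface dose (ESD) measured on a child of
    # thickness t_child to a standard-sized 'child' of thickness t_standard, assuming
    # ESD ~ exp(mu * t) for a fixed exit dose. Illustrative only.
    import math

    def esd_normalisation_factor(t_child_cm, t_standard_cm, mu_per_cm):
        return math.exp(mu_per_cm * (t_standard_cm - t_child_cm))

    # Example: abdominal projection, effective mu assumed ~0.25 cm^-1 (made-up value).
    factor = esd_normalisation_factor(t_child_cm=14.0, t_standard_cm=16.0, mu_per_cm=0.25)
    esd_standard = 0.8 * factor                # 0.8 mGy measured on the real child (example)
    print(round(factor, 3), round(esd_standard, 3))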

  15. Spatio-temporal footprints of urbanisation in Surat, the Diamond City of India (1990-2009).

    Science.gov (United States)

    Sharma, Richa; Ghosh, Aniruddha; Joshi, Pawan Kumar

    2013-04-01

    Urbanisation is a ubiquitous phenomenon with greater prominence in developing nations. Urban expansion involves land conversions from vegetated, moisture-rich to impervious, moisture-deficient land surfaces. These urban land transformations alter biophysical parameters in a way that promotes the development of heat islands and degrades environmental health. This study elaborates relationships among various environmental variables using remote sensing data to study the spatio-temporal footprint of urbanisation in Surat city. Landsat Thematic Mapper satellite data were used in conjunction with geo-spatial techniques to study urbanisation and correlation among various satellite-derived biophysical parameters [Normalised Difference Vegetation Index, Normalised Difference Built-up Index, Normalised Difference Water Index, Normalised Difference Bareness Index, Modified NDWI and land surface temperature (LST)]. Land use/land cover was prepared using hierarchical decision tree classification with an accuracy of 90.4 % (kappa = 0.88) for 1990 and 85 % (kappa = 0.81) for 2009. It was found that the city expanded over 42.75 km² within a decade, and these changes resulted in elevated surface temperatures. For example, transformation from vegetation to built-up resulted in a 5.5 ± 2.6 °C increase in land surface temperature, vegetation to fallow in 6.7 ± 3 °C, fallow to built-up in 3.5 ± 2.9 °C and built-up to dense built-up in 5.3 ± 2.8 °C. Directional profiling of LST was done to study spatial patterns of LST in and around Surat city. The emergence of two new LST peaks for 2009 was observed in N-S and NE-SW profiles.
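
    The biophysical indices listed above are simple normalised band ratios of the Landsat TM reflectances (for example NDVI from the red and near-infrared bands, NDWI from the green and near-infrared bands, NDBI from the near- and shortwave-infrared bands). The sketch below shows those standard formulas; the band arrays are placeholders, and the LST retrieval from the thermal band is not reproduced.

    # Hedged sketch: normalised-difference indices from Landsat TM band reflectances.
    # Inputs are 2-D numpy arrays of surface reflectance (placeholder data here).
    import numpy as np

    def normalised_difference(a, b):
        return (a - b) / (a + b + 1e-10)        # small epsilon avoids division by zero

    green = np.random.rand(4, 4)                # TM band 2 (placeholder data)
    red = np.random.rand(4, 4)                  # TM band 3
    nir = np.random.rand(4, 4)                  # TM band 4
    swir = np.random.rand(4, 4)                 # TM band 5

    ndvi = normalised_difference(nir, red)      # vegetation
    ndwi = normalised_difference(green, nir)    # water
    ndbi = normalised_difference(swir, nir)     # built-up
    print(ndvi.mean(), ndwi.mean(), ndbi.mean())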

  16. Transient hyperthyroidism of hyperemesis gravidarum.

    Science.gov (United States)

    Tan, Jackie Y L; Loh, Keh Chuan; Yeo, George S H; Chee, Yam Cheng

    2002-06-01

    To characterise the clinical, biochemical and thyroid antibody profile in women with transient hyperthyroidism of hyperemesis gravidarum. Prospective observational study. Hospital inpatient gynaecological ward. Women admitted with hyperemesis gravidarum and found to have hyperthyroidism. Fifty-three women were admitted with hyperemesis gravidarum and were found to have hyperthyroidism. Each woman was examined for clinical signs of thyroid disease and underwent investigations including urea, creatinine, electrolytes, liver function test, thyroid antibody profile and serial thyroid function test until normalisation. Gestation at which thyroid function normalised, clinical and thyroid antibody profile and pregnancy outcome (birthweight, gestation at delivery and Apgar score at 5 minutes). Full data were available for 44 women. Free T4 levels normalised by 15 weeks of gestation in the 39 women with transient hyperthyroidism while TSH remained suppressed until 19 weeks of gestation. None of these women were clinically hyperthyroid. Thyroid antibodies were not found in most of them. Median birthweight in the infants of mothers who experienced weight loss of > 5% of their pre-pregnancy weight was lower compared with those of women who did not (P = 0.093). Five women were diagnosed with Graves' disease based on clinical features and thyroid antibody profile. In transient hyperthyroidism of hyperemesis gravidarum, thyroid function normalises by the middle of the second trimester without anti-thyroid treatment. Clinically overt hyperthyroidism and thyroid antibodies are usually absent. Apart from a non-significant trend towards lower birthweights in the infants of mothers who experienced significant weight loss, pregnancy outcome was generally good. Routine assessment of thyroid function is unnecessary for women with hyperemesis gravidarum in the absence of any clinical features of hyperthyroidism.

  17. [The effects of Cardiodoron on cardio-respiratory coordination--a literature review].

    Science.gov (United States)

    Cysarz, D; Heckmann, C; Kümmell, H C

    2002-10-01

    In healthy subjects self-regulation of the organism establishes the order of rhythmical functions. This self-regulation is altered in patients suffering from idiopathic orthostatic syndrome resulting from disturbances of functional aspects only. Thus the cardio-respiratory coordination, which may serve as the representative of the order of rhythmical functions, is modified. In the case of idiopathic orthostatic syndrome anthroposophic medicine offers the medicament Cardiodoron®. Does it stimulate self-regulation in order to normalise the cardio-respiratory coordination? This question is analysed by a systematic review of the literature. Only those publications were considered where the cardio-respiratory coordination was analysed in studies with patients or healthy subjects. The methods of the studies with patients and healthy subjects vary strongly. Nevertheless, a normalisation of the cardio-respiratory coordination could be found in studies with patients suffering from idiopathic orthostatic syndrome as well as in studies with healthy subjects. The studies show that the use of the medicament results in a normalisation of the cardio-respiratory coordination. By stimulating the self-regulation the medicament leads to an improvement of the order of rhythmical functions in the human organism. Copyright 2002 S. Karger GmbH, Freiburg

  18. Analysis of gene expression data from non-small cell lung carcinoma cell lines reveals distinct sub-classes from those identified at the phenotype level.

    Directory of Open Access Journals (Sweden)

    Andrew R Dalby

    Full Text Available Microarray data from cell lines of Non-Small Cell Lung Carcinoma (NSCLC) can be used to look for differences in gene expression between the cell lines derived from different tumour samples, and to investigate whether these differences can be used to cluster the cell lines into distinct groups. Dividing the cell lines into classes can help to improve diagnosis and the development of screens for new drug candidates. The microarray data are first subjected to quality control analysis and then normalised using three alternate methods to reduce the chances of differences being artefacts resulting from the normalisation process. The final clustering into sub-classes was carried out in a conservative manner such that sub-classes were consistent across all three normalisation methods. If there is structure in the cell line population, it was expected to agree with histological classifications, but this was not found to be the case. To check the biological consistency of the sub-classes, the set of most strongly differentially expressed genes was identified for each pair of clusters, to determine whether the genes that most strongly define sub-classes have biological functions consistent with NSCLC.

  19. E-IMPACT - A ROBUST HAZARD-BASED ENVIRONMENTAL IMPACT ASSESSMENT APPROACH FOR PROCESS INDUSTRIES

    Directory of Open Access Journals (Sweden)

    KHANDOKER A. HOSSAIN

    2008-04-01

    Full Text Available This paper proposes a hazard-based environmental impact assessment approach (E-Impact, for evaluating the environmental impact during process design and retrofit stages. E-Impact replaces the normalisation step of the conventional impact assessment phase. This approach compares the impact scores for different options and assigns a relative score to each option. This eliminates the complexity of the normalisation step in the evaluation phase. The applicability of the E-Impact has been illustrated through a case study of solvent selection in an acrylic acid manufacturing plant. E-Impact is used in conjunction with Aspen-HYSYS process simulator to develop mass and heat balance data.

  20. Testing of Laterally Loaded Rigid Piles with Applied Overburden Pressure

    DEFF Research Database (Denmark)

    Sørensen, Søren Peder Hyldal; Ibsen, Lars Bo; Foglia, Aligi

    2015-01-01

    Small-scale tests have been conducted to investigate the quasi-static behaviour of laterally loaded, non-slender piles installed in cohesionless soil. For that purpose, a new and innovative test setup has been developed. The tests have been conducted in a pressure tank such that it was possible...... to apply an overburden pressure to the soil. As a result of that, the traditional uncertainties related to low effective stresses for small-scale tests have been avoided. A normalisation criterion for laterally loaded piles has been proposed based on dimensional analysis. The test results using the novel...... testing method have been compared with the use of the normalisation criterion....

  1. Developments in national and international regulation in the field of ''corrosion protection of buried pipes''; Entwicklung im Bereich nationaler und internationaler Regelsetzung im Fachgebiet ''Korrosionsschutz erdverlegter Rohrleitungen''

    Energy Technology Data Exchange (ETDEWEB)

    Schoeneich, H.G. [E.ON Ruhrgas AG, Essen (Germany). Kompetenz-Center Korrosionsschutz]

    2007-06-15

    This article summarizes the most important national and international rules for cathodic anti-corrosion protection of buried installations. The codes examined are those published by DIN (German Standardization Institute), the DVGW (German Association of Gas and Water Engineers) and AfK (Corrosion Protection Work Group). DIN publishes the results achieved by ISO (International Standardisation Organisation), CEN (Comite Europeen de Normalisation) and CENELEC (Comite Europeen de Normalisation Electrotechnique). The guidelines published by CEOCOR (European Committee for the Study of Corrosion and Protection of Pipes) are also briefly examined. Details of technical significance of a number of selected standards and revision projects are also stated and discussed. (orig.)

  2. Interaktion mellem warfarin og oral miconazol-gel

    DEFF Research Database (Denmark)

    Ogard, C G; Vestergaard, Henrik

    2000-01-01

    We report a case of a 76-year-old woman who had been taking warfarin for seven years because of relapsing deep venous thrombosis. Her daily maintenance dose was 5 mg. Monthly measurements of international normalised ratio (INR) were stable between 2 and 3. She developed oral candidiasis and miconazole...... gel was prescribed. One week later she developed bleeding gums. Eight days later she was admitted to the hospital with haematuria. INR was > 10. Warfarin and the miconazole gel were withdrawn. She was treated with phytonadione. INR normalised after four days and she continued warfarin treatment....... Caution should be exercised whenever the combination of warfarin and miconazole gel is prescribed....

  3. More evidence of localization in the low-lying Dirac spectrum

    CERN Document Server

    Bernard, C; Gottlieb, Steven; Levkova, L.; Heller, U.M.; Hetrick, J.E.; Jahn, O.; Maresca, F.; Renner, Dru Bryant; Toussaint, D.; Sugar, R.; Forcrand, Ph. de; Gottlieb, Steven

    2006-01-01

    We have extended our computation of the inverse participation ratio of low-lying (asqtad) Dirac eigenvectors in quenched SU(3). The scaling dimension of the confining manifold is clearer and very near 3. We have also computed the 2-point correlator which further characterizes the localization.

  4. Mode repulsion of ultrasonic guided waves in rails

    CSIR Research Space (South Africa)

    Loveday, Philip W

    2018-03-01

    Full Text Available . The modes can therefore be numbered in the same way that Lamb waves in plates are numbered, making it easier to communicate results. The derivative of the eigenvectors with respect to wavenumber contains the same repulsion term and shows how the mode shapes...

  5. Using many pilot points and singular value decomposition in groundwater model calibration

    DEFF Research Database (Denmark)

    Christensen, Steen; Doherty, John

    2008-01-01

    over the model area. Singular value decomposition (SVD) of the normal matrix is used to reduce the large number of pilot point parameters to a smaller number of so-called super parameters that can be estimated by nonlinear regression from the available observations. A number of eigenvectors...

  6. Neutral evolution of mutational robustness

    NARCIS (Netherlands)

    Nimwegen, Erik van; Crutchfield, James P.; Huynen, Martijn

    1999-01-01

    We introduce and analyze a general model of a population evolving over a network of selectively neutral genotypes. We show that the population's limit distribution on the neutral network is solely determined by the network topology and given by the principal eigenvector of the network

  7. Power Grid Modelling From Wind Turbine Perspective Using Principal Component Analysis

    DEFF Research Database (Denmark)

    Farajzadehbibalan, Saber; Ramezani, Mohammad Hossein; Nielsen, Peter

    2015-01-01

    In this study, we derive an eigenvector-based multivariate model of a power grid from the wind farm's standpoint using dynamic principal component analysis (DPCA). The main advantages of our model over previously developed models are being more realistic and having low complexity. We show that th...

  8. Extended Park's transformation for 2×3-phase synchronous machine and converter phasor model with representation of AC harmonics

    DEFF Research Database (Denmark)

    Knudsen, Hans

    1995-01-01

    A model of the 2×3-phase synchronous machine is presented using a new transformation based on the eigenvectors of the stator inductance matrix. The transformation fully decouples the stator inductance matrix, and this leads to an equivalent diagram of the machine with no mutual couplings...

  9. Bone histology, phylogeny, and palaeognathous birds (Aves, Palaeognathae)

    DEFF Research Database (Denmark)

    Legendre, Lucas; Bourdon, Estelle; Scofield, Paul

    2014-01-01

    a comprehensive study in which we quantify the phylogenetic signal on 62 osteohistological features in an exhaustive sample of palaeognathous birds. We used four different estimators to measure phylogenetic signal – Pagel’s λ, Abouheif’s Cmean, Blomberg’s K, and Diniz-Filho’s phylogenetic eigenvector regressions...

  10. Minute splitting of magnetic excitations in CsFeCl{sub 3} due to dipolar interaction observed by polarised neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Dorner, B [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France)]; Baehr, M [HMI, Berlin (Germany)]; Petitgrand, D [Laboratoire Leon Brillouin (LLB) - Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France)]

    1997-04-01

    Using inelastic neutron scattering with polarisation analysis it was possible, for the first time, to observe simultaneously the two magnetic modes split due to dipolar interaction. This would not have been possible with energy resolution only. An analysis of eigenvectors was also performed. (author). 4 refs.

  11. Spin wave spectrum and zero spin fluctuation of antiferromagnetic solid 3He

    International Nuclear Information System (INIS)

    Roger, M.; Delrieu, J.M.

    1981-08-01

    The spin wave spectrum and eigenvectors of the uudd antiferromagnetic phase of solid 3He are calculated; an optical mode is predicted around 150 - 180 Mc and a zero point spin deviation of 0.74 is obtained in agreement with the antiferromagnetic resonance frequency measured by Osheroff

  12. The non-linear Perron-Frobenius theorem : Perturbations and aggregation

    NARCIS (Netherlands)

    Dietzenbacher, E

    The dominant eigenvalue and the corresponding eigenvector (or Perron vector) of a non-linear eigensystem are considered. We discuss the effects upon these of perturbations and of aggregation of the underlying mapping. The results are applied to study the sensitivity of the outputs in a non-linear

  13. Jacobi-Davidson methods for generalized MHD-eigenvalue problems

    NARCIS (Netherlands)

    J.G.L. Booten; D.R. Fokkema; G.L.G. Sleijpen; H.A. van der Vorst (Henk)

    1995-01-01

    A Jacobi-Davidson algorithm for computing selected eigenvalues and associated eigenvectors of the generalized eigenvalue problem $Ax = \lambda Bx$ is presented. In this paper the emphasis is put on the case where one of the matrices, say the B-matrix, is Hermitian positive definite. The
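
    As an aside, the generalized problem $Ax = \lambda Bx$ with a positive definite B can also be handled by off-the-shelf sparse solvers. The following is a minimal sketch using SciPy's shift-invert Lanczos routine rather than the Jacobi-Davidson method described in the record; the matrices are synthetic stand-ins.

    ```python
    # Minimal sketch (not Jacobi-Davidson): solve A x = lambda B x for a few
    # eigenpairs with B symmetric positive definite, via shift-invert Lanczos.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    n = 200
    off = -np.ones(n - 1)
    A = sp.diags([off, 2.0 * np.ones(n), off], [-1, 0, 1], format="csc")                 # stiffness-like
    B = sp.diags([0.25 * off, 2.0 * np.ones(n), 0.25 * off], [-1, 0, 1], format="csc")   # mass-like, SPD

    # Five eigenvalues closest to sigma = 0 (the smallest ones for this pencil).
    vals, vecs = eigsh(A, k=5, M=B, sigma=0.0, which="LM")
    print(vals)

    # Residual check for the first computed eigenpair.
    r = A @ vecs[:, 0] - vals[0] * (B @ vecs[:, 0])
    print(np.linalg.norm(r))
    ```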

  14. Simultaneous maximization of spatial and temporal autocorrelation in spatio-temporal data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2002-01-01

    . This is done by solving the generalized eigenproblem represented by the Rayleigh coefficient $R(\mathbf{a}) = \mathbf{a}^T \Sigma \mathbf{a} / \mathbf{a}^T \Sigma_\Delta \mathbf{a}$, where $\Sigma$ is the dispersion of the original variables and $\Sigma_\Delta$ is the dispersion of the difference between the original variables and their spatially shifted counterparts. Hence, the new variates are obtained from the conjugate eigenvectors and the autocorrelations obtained are high...

  15. The Modern Origin of Matrices and Their Applications

    Science.gov (United States)

    Debnath, L.

    2014-01-01

    This paper deals with the modern development of matrices, linear transformations, quadratic forms and their applications to geometry and mechanics, eigenvalues, eigenvectors and characteristic equations with applications. Included are the representations of real and complex numbers, and quaternions by matrices, and isomorphism in order to show…

  16. An expert system in medical diagnosis

    International Nuclear Information System (INIS)

    Raboanary, R.; Raoelina Andriambololona; Soffer, J.; Raboanary, J.

    2001-01-01

    Health is still a crucial problem in some countries. It is so important that it becomes a major handicap to economic and social development. In order to address this problem, we have designed an expert system, called MITSABO (which means TO HEAL), to help physicians diagnose tropical diseases. By extending the database and the knowledge base, the application of the software can be extended to more general areas. In our expert system, we used the concept of 'self organization' of a neural network, based on the determination of the eigenvalues and eigenvectors associated with the correlation matrix $XX^t$. The projection of the data onto the first two eigenvectors gives a classification of the diseases, which is used as a first approach to the diagnosis of the patient. This diagnosis is improved by using an expert system built from the knowledge base.
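
    The projection step described above amounts to an eigen-decomposition of the Gram matrix $XX^t$ followed by projection onto the two leading eigenvectors. A minimal sketch is given below; the data matrix is synthetic and merely stands in for the patient/symptom data of the MITSABO system.

    ```python
    # Sketch of the projection step: eigen-decompose the Gram matrix X X^t and
    # project the cases onto its two leading eigenvectors (a PCA-style 2-D map).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 12))            # 30 cases x 12 symptom features (made up)
    X = X - X.mean(axis=0)                   # centre each feature

    G = X @ X.T                              # Gram matrix X X^t
    vals, vecs = np.linalg.eigh(G)           # symmetric eigendecomposition (ascending)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order] # sort eigenpairs by decreasing eigenvalue

    coords = vecs[:, :2] * np.sqrt(np.maximum(vals[:2], 0.0))  # 2-D coordinates of each case
    print(coords[:5])
    ```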

  17. Cavity approach to the first eigenvalue problem in a family of symmetric random sparse matrices

    International Nuclear Information System (INIS)

    Kabashima, Yoshiyuki; Takahashi, Hisanao; Watanabe, Osamu

    2010-01-01

    A methodology to analyze the properties of the first (largest) eigenvalue and its eigenvector is developed for large symmetric random sparse matrices utilizing the cavity method of statistical mechanics. Under a tree approximation, which is plausible for infinitely large systems, in conjunction with the introduction of a Lagrange multiplier for constraining the length of the eigenvector, the eigenvalue problem is reduced to a bunch of optimization problems of a quadratic function of a single variable, and the coefficients of the first and the second order terms of the functions act as cavity fields that are handled in cavity analysis. We show that the first eigenvalue is determined in such a way that the distribution of the cavity fields has a finite value for the second order moment with respect to the cavity fields of the first order coefficient. The validity and utility of the developed methodology are examined by applying it to two analytically solvable and one simple but non-trivial examples in conjunction with numerical justification.

  18. Excitations

    International Nuclear Information System (INIS)

    Dorner, B.

    1996-01-01

    A short introduction to instrumental resolution is followed by a discussion of visibilities of phonon modes due to their eigenvectors. High precision phonon dispersion curves in GaAs are presented together with 'ab initio' calculations. Al2O3 is taken as an example of selected visibility due to group theory. By careful determination of phonon intensities eigenvectors can be determined, such as in Silicon and Diamond. The investigation of magnon modes is shown for the garnet Fe2Ca3(GeO4)3, where also a quantum gap due to zero point spin fluctuations was observed. The study of the splitting of excitons in CsFeCl3 in an applied magnetic field demonstrates the possibilities of neutron polarisation analysis, which made it possible to observe a mode crossing. An outlook to inelastic X-ray scattering with very high energy resolution of synchrotron radiation is given with the examples of phonons in Beryllium and in water. (author) 19 figs., 36 refs

  19. Centrality metrics and localization in core-periphery networks

    International Nuclear Information System (INIS)

    Barucca, Paolo; Lillo, Fabrizio; Tantari, Daniele

    2016-01-01

    Two concepts of centrality have been defined in complex networks. The first considers the centrality of a node and many different metrics for it have been defined (e.g. eigenvector centrality, PageRank, non-backtracking centrality, etc). The second is related to large scale organization of the network, the core-periphery structure, composed of a dense core plus an outlying and loosely-connected periphery. In this paper we investigate the relation between these two concepts. We consider networks generated via the stochastic block model, or its degree corrected version, with a core-periphery structure and we investigate the centrality properties of the core nodes and the ability of several centrality metrics to identify them. We find that the three measures with the best performance are marginals obtained with belief propagation, PageRank, and degree centrality, while non-backtracking and eigenvector centrality (or MINRES [10], shown to be equivalent to the latter in the large network limit) perform worse in the investigated networks. (paper: interdisciplinary statistical mechanics)
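
    For readers who want to reproduce the flavour of this comparison, the sketch below contrasts degree, PageRank and eigenvector centrality on a toy core-periphery graph drawn from a stochastic block model; the belief-propagation marginals and non-backtracking centrality used in the paper are not included, and the block-model parameters are arbitrary choices.

    ```python
    # Toy comparison of three of the centrality measures mentioned above on a
    # core-periphery graph from a stochastic block model. Parameters are arbitrary.
    import networkx as nx

    sizes = [20, 80]                          # dense core of 20 nodes, periphery of 80
    probs = [[0.5, 0.1],
             [0.1, 0.02]]
    G = nx.stochastic_block_model(sizes, probs, seed=1)

    deg = nx.degree_centrality(G)
    pr = nx.pagerank(G)
    eig = nx.eigenvector_centrality_numpy(G)

    # How many of the 20 top-ranked nodes really belong to the core (nodes 0..19)?
    for name, c in [("degree", deg), ("pagerank", pr), ("eigenvector", eig)]:
        top = sorted(c, key=c.get, reverse=True)[:20]
        print(name, sum(1 for v in top if v < 20))
    ```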

  20. Centrality measures in temporal networks with time series analysis

    Science.gov (United States)

    Huang, Qiangjuan; Zhao, Chengli; Zhang, Xue; Wang, Xiaojie; Yi, Dongyun

    2017-05-01

    The study of identifying important nodes in networks has wide application in different fields. However, current research is mostly based on static or aggregated networks. Recently, increasing attention to networks with time-varying structure has promoted the study of node centrality in temporal networks. In this paper, we define a supra-evolution matrix to describe the temporal network structure. Using time series analysis, the relationships between different time layers can be learned automatically. Based on the special form of the supra-evolution matrix, the eigenvector centrality calculation is turned into the calculation of eigenvectors of several low-dimensional matrices through iteration, which effectively reduces the computational complexity. Experiments are carried out on two real-world temporal networks, the Enron email communication network and the DBLP co-authorship network, and the results show that our method is more efficient at discovering important nodes than the common aggregating method.
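
    The building block of such calculations is the power iteration for the dominant eigenvector of an adjacency-type matrix; a plain single-layer version is sketched below on a random matrix (the supra-evolution matrix construction of the paper is not reproduced).

    ```python
    # Power iteration for eigenvector centrality on a single (static) adjacency
    # matrix. The adjacency matrix is random and only for illustration.
    import numpy as np

    rng = np.random.default_rng(42)
    A = (rng.random((50, 50)) < 0.1).astype(float)
    A = np.maximum(A, A.T)                   # symmetrise: a toy undirected network

    x = np.ones(A.shape[0])
    for _ in range(500):
        y = A @ x
        y /= np.linalg.norm(y)               # renormalise at every step
        if np.linalg.norm(y - x) < 1e-12:
            break
        x = y

    centrality = x / x.sum()                 # eigenvector centrality, normalised to sum 1
    print(np.argsort(centrality)[-5:])       # indices of the five most central nodes
    ```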

  1. Predicting the impact of urban flooding using open data.

    Science.gov (United States)

    Tkachenko, Nataliya; Procter, Rob; Jarvis, Stephen

    2016-05-01

    This paper aims to explore whether there is a relationship between search patterns for flood risk information on the Web and how badly localities have been affected by flood events. We hypothesize that localities where people stay more actively informed about potential flooding experience less negative impact than localities where people make less effort to be informed. Being informed, of course, does not hold the waters back; however, it may stimulate (or serve as an indicator of) such resilient behaviours as timely use of sandbags, relocation of possessions from basements to upper floors and/or temporary evacuation from flooded homes to alternative accommodation. We make use of open data to test this relationship empirically. Our results demonstrate that although aggregated Web search reflects average rainfall patterns, its eigenvectors predominantly consist of locations with similar flood impacts during 2014-2015. These results are also consistent with statistically significant correlations of Web search eigenvectors with flood warning and incident reporting datasets.

  2. On the relationship between Gaussian stochastic blockmodels and label propagation algorithms

    International Nuclear Information System (INIS)

    Zhang, Junhao; Hu, Junfeng; Chen, Tongfei

    2015-01-01

    The problem of community detection has received great attention in recent years. Many methods have been proposed to discover communities in networks. In this paper, we propose a Gaussian stochastic blockmodel that uses Gaussian distributions to fit the weights of edges in networks for non-overlapping community detection. The maximum likelihood estimation of this model has the same objective function as general label propagation with node preference. The node preference of a specific vertex turns out to be a value proportional to the intra-community eigenvector centrality (the corresponding entry in the principal eigenvector of the adjacency matrix of the subgraph inside that vertex's community) under maximum likelihood estimation. Additionally, the maximum likelihood estimation of a constrained version of our model is highly related to another extension of the label propagation algorithm, namely, the label propagation algorithm under constraint. Experiments show that the proposed Gaussian stochastic blockmodel performs well on various benchmark networks. (paper)

  3. Asymptotics of empirical eigenstructure for high dimensional spiked covariance.

    Science.gov (United States)

    Wang, Weichen; Fan, Jianqing

    2017-06-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.

  4. Evaluation of the synchrotron close orbit

    International Nuclear Information System (INIS)

    Bashmakov, Yu.A.; Karpov, V.A.

    1991-01-01

    Knowledge of the closed orbit position is an essential condition for the effective operation of any accelerator. Questions of its calculation, measurement and control are therefore of great importance. For example, during injection of particles into a synchrotron, the amplitudes of their betatron oscillations may become comparable with the working region of the synchrotron. This draws attention to the problem of forming the optimum orbit with the use of correcting optical elements. In addition, it is often necessary to calculate such an orbit at the end of the acceleration cycle, when particles are deposited at internal targets or removed from the synchrotron. In this paper, the computation of the closed orbit is reduced to the determination, at an arbitrarily chosen azimuth, of the eigenvector of the total transfer matrix of the synchrotron ring, and to tracing the desired orbit with this vector. The eigenvector is found as a result of an iteration
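
    A toy version of this computation is sketched below: the one-turn map is extended to an affine 3x3 matrix whose eigenvector with eigenvalue 1 encodes the closed orbit at the chosen azimuth. The transfer-matrix entries and the error kick are invented for illustration and are not taken from the paper.

    ```python
    # Toy closed-orbit computation: the eigenvector with eigenvalue 1 of an
    # extended (affine) one-turn matrix gives the closed orbit at this azimuth.
    import numpy as np

    mu = 2 * np.pi * 0.28                            # betatron phase advance per turn
    M = np.array([[np.cos(mu), np.sin(mu)],
                  [-np.sin(mu), np.cos(mu)]])        # 2x2 one-turn matrix for (x, x')
    d = np.array([1.0e-3, -0.5e-3])                  # constant kick from orbit errors (made up)

    T = np.eye(3)                                    # extended matrix acting on (x, x', 1)
    T[:2, :2] = M
    T[:2, 2] = d

    vals, vecs = np.linalg.eig(T)
    i = np.argmin(np.abs(vals - 1.0))                # pick the eigenvalue equal to 1
    closed_orbit = (vecs[:2, i] / vecs[2, i]).real   # rescale so the last component is 1
    print(closed_orbit)
    print(np.allclose(M @ closed_orbit + d, closed_orbit))   # fixed point of one turn
    ```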

  5. Gamow-Jordan vectors and non-reducible density operators from higher-order S-matrix poles

    International Nuclear Information System (INIS)

    Bohm, A.; Loewe, M.; Maxson, S.; Patuleanu, P.; Puentmann, C.; Gadella, M.

    1997-01-01

    In analogy to Gamow vectors that are obtained from first-order resonance poles of the S-matrix, one can also define higher-order Gamow vectors which are derived from higher-order poles of the S-matrix. An S-matrix pole of r-th order at $z_R = E_R - i\Gamma/2$ leads to r generalized eigenvectors of order $k = 0, 1, \ldots, r-1$, which are also Jordan vectors of degree $(k+1)$ with generalized eigenvalue $(E_R - i\Gamma/2)$. The Gamow-Jordan vectors are elements of a generalized complex eigenvector expansion, whose form suggests the definition of a state operator (density matrix) for the microphysical decaying state of this higher-order pole. This microphysical state is a mixture of non-reducible components. In spite of the fact that the k-th order Gamow-Jordan vectors have the polynomial time dependence which one always associates with higher-order poles, the microphysical state obeys a purely exponential decay law. copyright 1997 American Institute of Physics

  6. Efficacy and safety of dabigatran compared with warfarin at different levels of international normalised ratio control for stroke prevention in atrial fibrillation: an analysis of the RE-LY trial.

    Science.gov (United States)

    Wallentin, Lars; Yusuf, Salim; Ezekowitz, Michael D; Alings, Marco; Flather, Marcus; Franzosi, Maria Grazia; Pais, Prem; Dans, Antonio; Eikelboom, John; Oldgren, Jonas; Pogue, Janice; Reilly, Paul A; Yang, Sean; Connolly, Stuart J

    2010-09-18

    Effectiveness and safety of warfarin is associated with the time in therapeutic range (TTR) with an international normalised ratio (INR) of 2·0-3·0. In the Randomised Evaluation of Long-term Anticoagulation Therapy (RE-LY) trial, dabigatran versus warfarin reduced both stroke and haemorrhage. We aimed to investigate the primary and secondary outcomes of the RE-LY trial in relation to each centre's mean TTR (cTTR) in the warfarin population. In the RE-LY trial, 18 113 patients at 951 sites were randomly assigned to 110 mg or 150 mg dabigatran twice daily versus warfarin dose adjusted to INR 2·0-3·0. Median follow-up was 2·0 years. For 18 024 patients at 906 sites, the cTTR was estimated by averaging TTR for individual warfarin-treated patients calculated by the Rosendaal method. We compared the outcomes of RE-LY across the three treatment groups within four groups defined by the quartiles of cTTR. RE-LY is registered with ClinicalTrials.gov, number NCT00262600. The quartiles of cTTR for patients in the warfarin group were: less than 57·1%, 57·1-65·5%, 65·5-72·6%, and greater than 72·6%. There were no significant interactions between cTTR and prevention of stroke and systemic embolism with either 110 mg dabigatran (interaction p=0·89) or 150 mg dabigatran (interaction p=0·20) versus warfarin. Neither were any significant interactions recorded with cTTR with regards to intracranial bleeding with 110 mg dabigatran (interaction p=0·71) or 150 mg dabigatran (interaction p=0·89) versus warfarin. There was a significant interaction between cTTR and major bleeding when comparing 150 mg dabigatran with warfarin (interaction p=0·03), with less bleeding events at lower cTTR but similar events at higher cTTR, whereas rates of major bleeding were lower with 110 mg dabigatran than with warfarin irrespective of cTTR. There were significant interactions between cTTR and effects of both 110 mg and 150 mg dabigatran versus warfarin on the composite of all

  7. Non-self-adjoint hamiltonians defined by Riesz bases

    Energy Technology Data Exchange (ETDEWEB)

    Bagarello, F., E-mail: fabio.bagarello@unipa.it [Dipartimento di Energia, Ingegneria dell' Informazione e Modelli Matematici, Facoltà di Ingegneria, Università di Palermo, I-90128 Palermo, Italy and INFN, Università di Torino, Torino (Italy); Inoue, A., E-mail: a-inoue@fukuoka-u.ac.jp [Department of Applied Mathematics, Fukuoka University, Fukuoka 814-0180 (Japan); Trapani, C., E-mail: camillo.trapani@unipa.it [Dipartimento di Matematica e Informatica, Università di Palermo, I-90123 Palermo (Italy)

    2014-03-15

    We discuss some features of non-self-adjoint Hamiltonians with real discrete simple spectrum under the assumption that the eigenvectors form a Riesz basis of Hilbert space. Among other things, we give conditions under which these Hamiltonians can be factorized in terms of generalized lowering and raising operators.

  8. Maslov indices and monodromy

    International Nuclear Information System (INIS)

    Dullin, H R; Robbins, J M; Waalkens, H; Creagh, S C; Tanner, G

    2005-01-01

    We prove that for a Hamiltonian system on a cotangent bundle that is Liouville-integrable and has monodromy the vector of Maslov indices is an eigenvector of the monodromy matrix with eigenvalue 1. As a corollary, the resulting restrictions on the monodromy matrix are derived. (letter to the editor)

  9. Certain properties of some special functions of two variables and two indices

    International Nuclear Information System (INIS)

    Khan, Subuhi

    2002-07-01

    In this paper, we derive a result concerning eigenvector and eigenvalue for a quadratic combination of four operators defined on a Lie algebra of endomorphisms of a vector space. Further, using this result, we deduce certain properties of some special functions of two variables and two indices. (author)

  10. Biological Applications in the Mathematics Curriculum

    Science.gov (United States)

    Marland, Eric; Palmer, Katrina M.; Salinas, Rene A.

    2008-01-01

    In this article we provide two detailed examples of how we incorporate biological examples into two mathematics courses: Linear Algebra and Ordinary Differential Equations. We use Leslie matrix models to demonstrate the biological properties of eigenvalues and eigenvectors. For Ordinary Differential Equations, we show how using a logistic growth…
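
    As a concrete illustration of the Leslie-matrix material mentioned above, the sketch below computes the dominant eigenvalue (the long-run growth rate) and the corresponding eigenvector (the stable age distribution) for a small matrix with made-up fecundity and survival rates.

    ```python
    # Leslie matrix example: the dominant eigenvalue is the long-run growth rate
    # and its eigenvector the stable age distribution. Rates are invented.
    import numpy as np

    L = np.array([[0.0, 1.5, 1.2],    # fecundities of the three age classes
                  [0.6, 0.0, 0.0],    # survival from class 1 to class 2
                  [0.0, 0.8, 0.0]])   # survival from class 2 to class 3

    vals, vecs = np.linalg.eig(L)
    i = np.argmax(vals.real)                       # dominant (Perron) eigenvalue
    growth_rate = vals[i].real
    stable_age = np.abs(vecs[:, i].real)
    stable_age /= stable_age.sum()                 # express as proportions

    print(growth_rate)     # > 1 means the population grows each time step
    print(stable_age)      # stable age distribution
    ```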

  11. Perfect state transfer in unitary Cayley graphs over local rings

    Directory of Open Access Journals (Sweden)

    Yotsanan Meemark

    2014-12-01

    Full Text Available In this work, using eigenvalues and eigenvectors of unitary Cayley graphs over finite local rings and elementary linear algebra, we characterize which local rings allow perfect state transfer (PST) to occur in their unitary Cayley graphs. Moreover, we obtain some further results when $R$ is a product of local rings.

  12. A computational approach for a fluid queue driven by a truncated birth-death process

    NARCIS (Netherlands)

    Lenin, R.B.; Parthasarathy, P.R.

    1999-01-01

    In this paper, we consider a fluid queue driven by a truncated birth-death process with general birth and death rates. We find the equilibrium distribution of the content of the fluid buffer by computing the eigenvalues and eigenvectors of an associated real tridiagonal matrix. We provide efficient
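
    A minimal sketch of the linear-algebra step is given below: building a tridiagonal generator from birth and death rates and computing its eigenvalues and eigenvectors with NumPy. The rates are arbitrary illustrative values, not those used in the paper.

    ```python
    # Eigen-decomposition of a tridiagonal generator built from birth and death rates.
    import numpy as np

    n = 6
    birth = np.full(n - 1, 1.0)                    # birth rates (made up)
    death = np.full(n - 1, 1.5)                    # death rates (made up)

    Q = np.zeros((n, n))
    Q[np.arange(n - 1), np.arange(1, n)] = birth   # superdiagonal
    Q[np.arange(1, n), np.arange(n - 1)] = death   # subdiagonal
    Q[np.diag_indices(n)] = -Q.sum(axis=1)         # rows of a generator sum to zero

    vals, vecs = np.linalg.eig(Q)
    print(np.sort(vals.real))                      # spectrum of the generator (all <= 0)
    ```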

  13. Complex Wedge-Shaped Matrices: A Generalization of Jacobi Matrices

    Czech Academy of Sciences Publication Activity Database

    Hnětynková, Iveta; Plešinger, M.

    2015-01-01

    Roč. 487, 15 December (2015), s. 203-219 ISSN 0024-3795 R&D Projects: GA ČR GA13-06684S Keywords : eigenvalues * eigenvector * wedge-shaped matrices * generalized Jacobi matrices * band (or block) Krylov subspace methods Subject RIV: BA - General Mathematics Impact factor: 0.965, year: 2015

  14. Validation of the CoaguChek XS international normalised ratio point ...

    African Journals Online (AJOL)

    laboratory automated coagulation analysers up to INR values of 3.0. ... To evaluate the clinical utility of the CoaguChek XS for monitoring of patients on standard warfarin therapy (INR 2 - 3) as well .... quality control system with satisfactory.

  15. Semi-supervised probabilistics approach for normalising informal short text messages

    CSIR Research Space (South Africa)

    Modupe, A

    2017-03-01

    Full Text Available The growing use of informal social text messages on Twitter is one of the known sources of big data. These type of messages are noisy and frequently rife with acronyms, slangs, grammatical errors and non-standard words causing grief for natural...

  16. Using Normalised Sections for the Design of all optical Networks

    DEFF Research Database (Denmark)

    Caspar, C.; Freund, Ronald; Hanik, Norbert

    2000-01-01

    A novel concept for transparent link design is presented, and evaluated numerically and experimentally. 10 Gbit/s single channel transmission over more than 4000 km of Standard Single Mode Fibre is demonstrated. At reduced transmission distances, the systems show a high robustness against variati...

  17. HIV scale-up in Mozambique: Exceptionalism, normalisation and global health

    Science.gov (United States)

    Høg, Erling

    2014-01-01

    The large-scale introduction of HIV and AIDS services in Mozambique from 2000 onwards occurred in the context of deep political commitment to sovereign nation-building and an important transition in the nation's health system. Simultaneously, the international community encountered a willing state partner that recognised the need to take action against the HIV epidemic. This article examines two critical policy shifts: sustained international funding and public health system integration (the move from parallel to integrated HIV services). The Mozambican government struggles to support its national health system against privatisation, NGO competition and internal brain drain. This is a sovereignty issue. However, the dominant discourse on self-determination shows a contradictory twist: it is part of the political rhetoric to keep the sovereignty discourse alive, while the real challenge is coordination, not partnerships. Nevertheless, we need more anthropological studies to understand the political implications of global health funding and governance. Other studies need to examine the consequences of public health system integration for the quality of access to health care. PMID:24499102

  18. (measured as NDVI) over mine tailings at Mhangura Copper Mine

    African Journals Online (AJOL)

    chari

    Remote sensing techniques are increasingly being employed in monitoring environmental ... normalised difference vegetation index (NDVI), remote sensing, tailings ..... rehabilitation monitoring by adding landscape function characteristics.

  19. Reducing System Artifacts in Hyperspectral Image Data Analysis with the Use of Estimates of the Error Covariance in the Data; TOPICAL

    International Nuclear Information System (INIS)

    HAALAND, DAVID M.; VAN BENTHEM, MARK H.; WEHLBURG, CHRISTINE M.; KOEHLER, IV FREDERICK W.

    2002-01-01

    Hyperspectral Fourier transform infrared images have been obtained from a neoprene sample aged in air at elevated temperatures. The massive amount of spectra available from this heterogeneous sample provides the opportunity to perform quantitative analysis of the spectral data without the need for calibration standards. Multivariate curve resolution (MCR) methods with non-negativity constraints applied to the iterative alternating least squares analysis of the spectral data have been shown to achieve the goal of quantitative image analysis without the use of standards. However, the pure-component spectra and the relative concentration maps were heavily contaminated by the presence of system artifacts in the spectral data. We have demonstrated that the detrimental effects of these artifacts can be minimized by adding an estimate of the error covariance structure of the spectral image data to the MCR algorithm. The estimate is added by augmenting the concentration and pure-component spectra matrices with scores and eigenvectors obtained from the mean-centered repeat image differences of the sample. The implementation of augmentation is accomplished by employing efficient equality constraints on the MCR analysis. Augmentation with the scores from the repeat images is found to primarily improve the pure-component spectral estimates while augmentation with the corresponding eigenvectors primarily improves the concentration maps. Augmentation with both scores and eigenvectors yielded the best result by generating less noisy pure-component spectral estimates and relative concentration maps that were largely free from a striping artifact that is present due to system errors in the FT-IR images. The MCR methods presented are general and can also be applied productively to non-image spectral data

  20. Lagrangian investigations of vorticity dynamics in compressible turbulence

    Science.gov (United States)

    Parashar, Nishant; Sinha, Sawan Suman; Danish, Mohammad; Srinivasan, Balaji

    2017-10-01

    In this work, we investigate the influence of compressibility on vorticity-strain rate dynamics. Well-resolved direct numerical simulations of compressible homogeneous isotropic turbulence performed over a cubical domain of $1024^3$ are employed for this study. To clearly identify the influence of compressibility on the time-dependent dynamics (rather than on the one-time flow field), we employ a well-validated Lagrangian particle tracker. The tracker is used to obtain time correlations between the instantaneous vorticity vector and the strain-rate eigenvector system of an appropriately chosen reference time. In this work, compressibility is parameterized in terms of both global (turbulent Mach number) and local parameters (normalized dilatation-rate and flow field topology). Our investigations reveal that the local dilatation rate significantly influences these statistics. In turn, this observed influence of the dilatation rate is predominantly associated with rotation dominated topologies (unstable-focus-compressing, stable-focus-stretching). We find that an enhanced dilatation rate (in both contracting and expanding fluid elements) significantly enhances the tendency of the vorticity vector to align with the largest eigenvector of the strain-rate. Further, in fluid particles where the vorticity vector is maximally misaligned (perpendicular) at the reference time, vorticity does show a substantial tendency to align with the intermediate eigenvector as well. The authors make an attempt to provide physical explanations of these observations (in terms of moment of inertia and angular momentum) by performing detailed calculations following tetrads {approach of Chertkov et al. ["Lagrangian tetrad dynamics and the phenomenology of turbulence," Phys. Fluids 11(8), 2394-2410 (1999)] and Xu et al. ["The pirouette effect in turbulent flows," Nat. Phys. 7(9), 709-712 (2011)]} in a compressible flow field.
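
    The alignment statistic discussed above can be illustrated on a single velocity-gradient tensor: take the symmetric part as the strain rate, form the vorticity vector from the antisymmetric part, and compute the cosines between the vorticity and the strain-rate eigenvectors. The tensor below is random, purely for illustration.

    ```python
    # Alignment of the vorticity vector with the strain-rate eigenvectors for a
    # single made-up velocity-gradient tensor A_ij = du_i/dx_j.
    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.normal(size=(3, 3))                    # synthetic velocity-gradient tensor

    S = 0.5 * (A + A.T)                            # strain-rate tensor (symmetric part)
    omega = np.array([A[2, 1] - A[1, 2],           # vorticity vector, omega = curl(u)
                      A[0, 2] - A[2, 0],
                      A[1, 0] - A[0, 1]])

    vals, vecs = np.linalg.eigh(S)                 # eigenvalues ascending, columns = eigenvectors
    unit_omega = omega / np.linalg.norm(omega)

    # |cosine| between vorticity and each eigenvector
    # (index 0 = smallest, 1 = intermediate, 2 = largest strain-rate eigenvalue).
    alignment = np.abs(vecs.T @ unit_omega)
    print(vals)
    print(alignment)
    ```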

  1. Social structure of a semi-free ranging group of mandrills (Mandrillus sphinx): a social network analysis.

    Directory of Open Access Journals (Sweden)

    Céline Bret

    Full Text Available The difficulty involved in following mandrills in the wild means that very little is known about social structure in this species. Most studies initially considered mandrill groups to be an aggregation of one-male/multifemale units, with males occupying central positions in a structure similar to those observed in the majority of baboon species. However, a recent study hypothesized that mandrills form stable groups with only two or three permanent males, and that females occupy more central positions than males within these groups. We used social network analysis methods to examine how a semi-free ranging group of 19 mandrills is structured. We recorded all dyads of individuals that were in contact as a measure of association. The betweenness and the eigenvector centrality for each individual were calculated and correlated to kinship, age and dominance. Finally, we performed a resilience analysis by simulating the removal of individuals displaying the highest betweenness and eigenvector centrality values. We found that related dyads were more frequently associated than unrelated dyads. Moreover, our results showed that the cumulative distribution of individual betweenness and eigenvector centrality followed a power function, which is characteristic of scale-free networks. This property showed that some group members, mostly females, occupied a highly central position. Finally, the resilience analysis showed that the removal of the two most central females split the network into small subgroups and increased the network diameter. Critically, this study confirms that females appear to occupy more central positions than males in mandrill groups. Consequently, these females appear to be crucial for group cohesion and probably play a pivotal role in this species.

  2. Social structure of a semi-free ranging group of mandrills (Mandrillus sphinx): a social network analysis.

    Science.gov (United States)

    Bret, Céline; Sueur, Cédric; Ngoubangoye, Barthélémy; Verrier, Delphine; Deneubourg, Jean-Louis; Petit, Odile

    2013-01-01

    The difficulty involved in following mandrills in the wild means that very little is known about social structure in this species. Most studies initially considered mandrill groups to be an aggregation of one-male/multifemale units, with males occupying central positions in a structure similar to those observed in the majority of baboon species. However, a recent study hypothesized that mandrills form stable groups with only two or three permanent males, and that females occupy more central positions than males within these groups. We used social network analysis methods to examine how a semi-free ranging group of 19 mandrills is structured. We recorded all dyads of individuals that were in contact as a measure of association. The betweenness and the eigenvector centrality for each individual were calculated and correlated to kinship, age and dominance. Finally, we performed a resilience analysis by simulating the removal of individuals displaying the highest betweenness and eigenvector centrality values. We found that related dyads were more frequently associated than unrelated dyads. Moreover, our results showed that the cumulative distribution of individual betweenness and eigenvector centrality followed a power function, which is characteristic of scale-free networks. This property showed that some group members, mostly females, occupied a highly central position. Finally, the resilience analysis showed that the removal of the two most central females split the network into small subgroups and increased the network diameter. Critically, this study confirms that females appear to occupy more central positions than males in mandrill groups. Consequently, these females appear to be crucial for group cohesion and probably play a pivotal role in this species.
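
    A hypothetical re-creation of these analysis steps on a random 19-node graph (not the mandrill association data) is sketched below: betweenness and eigenvector centrality are computed with NetworkX, the two most central individuals are removed, and the fragmentation of the remaining network is inspected.

    ```python
    # Hypothetical re-creation of the analysis steps on a random 19-node graph:
    # centralities, removal of the two most central nodes, effect on cohesion.
    import networkx as nx

    G = nx.erdos_renyi_graph(19, 0.25, seed=7)       # 19 individuals, random associations

    bet = nx.betweenness_centrality(G)
    eig = nx.eigenvector_centrality_numpy(G)
    print("most central (betweenness):", sorted(bet, key=bet.get, reverse=True)[:3])
    print("most central (eigenvector):", sorted(eig, key=eig.get, reverse=True)[:3])

    # Remove the two nodes with the highest betweenness and inspect fragmentation.
    H = G.copy()
    H.remove_nodes_from(sorted(bet, key=bet.get, reverse=True)[:2])
    print("components before/after:",
          nx.number_connected_components(G), nx.number_connected_components(H))
    largest = max(nx.connected_components(H), key=len)
    print("diameter of largest remaining component:", nx.diameter(H.subgraph(largest)))
    ```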

  3. Comparison of eigensolvers for symmetric band matrices.

    Science.gov (United States)

    Moldaschl, Michael; Gansterer, Wilfried N

    2014-09-15

    We compare different algorithms for computing eigenvalues and eigenvectors of a symmetric band matrix across a wide range of synthetic test problems. Of particular interest is a comparison of state-of-the-art tridiagonalization-based methods as implemented in Lapack or Plasma on the one hand, and the block divide-and-conquer (BD&C) algorithm as well as the block twisted factorization (BTF) method on the other hand. The BD&C algorithm does not require tridiagonalization of the original band matrix at all, and the current version of the BTF method tridiagonalizes the original band matrix only for computing the eigenvalues. Avoiding the tridiagonalization process sidesteps the cost of backtransformation of the eigenvectors. Beyond that, we discovered another disadvantage of the backtransformation process for band matrices: In several scenarios, a lot of gradual underflow is observed in the (optional) accumulation of the transformation matrix and in the (obligatory) backtransformation step. According to the IEEE 754 standard for floating-point arithmetic, this implies many operations with subnormal (denormalized) numbers, which causes severe slowdowns compared to the other algorithms without backtransformation of the eigenvectors. We illustrate that in these cases the performance of existing methods from Lapack and Plasma reaches a competitive level only if subnormal numbers are disabled (and thus the IEEE standard is violated). Overall, our performance studies illustrate that if the problem size is large enough relative to the bandwidth, BD&C tends to achieve the highest performance of all methods if the spectrum to be computed is clustered. For test problems with well separated eigenvalues, the BTF method tends to become the fastest algorithm with growing problem size.
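
    For context, the sketch below shows the most basic band eigensolver available off the shelf, SciPy's LAPACK-based routine for symmetric band matrices; it is not the BD&C or BTF method discussed in the paper, and the band matrix is a small synthetic example.

    ```python
    # Minimal band eigensolver example for a symmetric band matrix.
    import numpy as np
    from scipy.linalg import eig_banded

    n, u = 8, 2                               # matrix size and number of superdiagonals
    a_band = np.zeros((u + 1, n))             # upper band storage (LAPACK layout)
    a_band[2, :] = np.linspace(2.0, 3.0, n)   # main diagonal sits in the last row
    a_band[1, 1:] = -1.0                      # first superdiagonal
    a_band[0, 2:] = 0.25                      # second superdiagonal

    vals, vecs = eig_banded(a_band, lower=False)
    print(vals)                                        # all eigenvalues, ascending
    print(np.allclose(vecs.T @ vecs, np.eye(n)))       # eigenvectors are orthonormal
    ```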

  4. Factors influencing the robustness of P-value measurements in CT texture prognosis studies

    Science.gov (United States)

    McQuaid, Sarah; Scuffham, James; Alobaidli, Sheaka; Prakash, Vineet; Ezhil, Veni; Nisbet, Andrew; South, Christopher; Evans, Philip

    2017-07-01

    Several studies have recently reported on the value of CT texture analysis in predicting survival, although the topic remains controversial, with further validation needed in order to consolidate the evidence base. The aim of this study was to investigate the effect of varying the input parameters in the Kaplan-Meier analysis, to determine whether the resulting P-value can be considered to be a robust indicator of the parameter’s prognostic potential. A retrospective analysis of the CT-based normalised entropy of 51 patients with lung cancer was performed and overall survival data for these patients were collected. A normalised entropy cut-off was chosen to split the patient cohort into two groups and log-rank testing was performed to assess the survival difference of the two groups. This was repeated for varying normalised entropy cut-offs and varying follow-up periods. Our findings were also compared with previously published results to assess robustness of this parameter in a multi-centre patient cohort. The P-value was found to be highly sensitive to the choice of cut-off value, with small changes in cut-off producing substantial changes in P. The P-value was also sensitive to follow-up period, with particularly noisy results at short follow-up periods. Using matched conditions to previously published results, a P-value of 0.162 was obtained. Survival analysis results can be highly sensitive to the choice in texture cut-off value in dichotomising patients, which should be taken into account when performing such studies to avoid reporting false positive results. Short follow-up periods also produce unstable results and should therefore be avoided to ensure the results produced are reproducible. Previously published findings that indicated the prognostic value of normalised entropy were not replicated here, but further studies with larger patient numbers would be required to determine the cause of the different outcomes.
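
    The cut-off sensitivity described above is easy to reproduce on simulated data: dichotomise a synthetic cohort at several candidate cut-offs and watch the log-rank P-value move. The sketch below assumes the lifelines package is available; all numbers are simulated and unrelated to the study.

    ```python
    # Simulated illustration of cut-off sensitivity for the log-rank test.
    import numpy as np
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(1)
    n = 51
    entropy = rng.normal(loc=1.0, scale=0.1, size=n)   # synthetic "normalised entropy"
    survival = rng.exponential(scale=20.0, size=n)     # synthetic survival times (months)
    observed = rng.random(n) < 0.7                     # event indicator (True = death observed)

    for q in [0.3, 0.4, 0.5, 0.6, 0.7]:
        cutoff = np.quantile(entropy, q)
        low = entropy <= cutoff
        res = logrank_test(survival[low], survival[~low],
                           event_observed_A=observed[low],
                           event_observed_B=observed[~low])
        print(round(cutoff, 3), round(res.p_value, 3))
    ```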

  5. Welfare Effects of Tax and Price Changes and the CES-UT Utility Function

    DEFF Research Database (Denmark)

    Munk, Knud Jørgen

    Dixit's 1975 paper "Welfare Effects of Tax and Price Changes" constitutes a seminal contribution to the theory of tax reform within a second-best general equilibrium framework. The present paper clarifies ambiguities with respect to normalisation which has led to misinterpretation of some of Dixit......'s analytical results. It proves that a marginal tax reform starting from a proportional tax system will improve social welfare if it increases the supply of labour, whatever the rule of normalisation adopted. In models which impose additive separability between consumption and leisure in household preferences...... elasticities can be derived from the parameters of the CES-UT and how it may be used for applied tax reform analysis...

  6. Contemporary Transitional Justice

    DEFF Research Database (Denmark)

    Gissel, Line Engbo

    2017-01-01

    This article studies the contemporary expression of transitional justice, a field of practice through which global governance is exercised. It argues that transitional justice is being normalised, given the normative and empirical de-legitimisation of its premise of exceptionalism. The article...... theorises exceptionalism and normalcy in transitional justice and identifies three macro-level causes of normalisation: the legalisation, internationalisation, and professionalization of the field. This argument is illustrated by a study of Uganda’s trajectory of transitional justice since 1986. Across five...... phases of transitional justice, processes of legalisation, internationalisation, and professionalization have contributed to the gradual dismantling of the country’s exceptional justice. The case demonstrates, further, that normalization is a contested and incomplete process....

  7. Effects of lixisenatide on elevated liver transaminases

    DEFF Research Database (Denmark)

    Gluud, Lise L; Knop, Filip K; Vilsbøll, Tina

    2014-01-01

    OBJECTIVE: To evaluate the effects of the glucagon-like peptide-1 receptor agonist lixisenatide on elevated liver blood tests in patients with type 2 diabetes. DESIGN: Systematic review. DATA SOURCES: Electronic and manual searches were combined. STUDY SELECTION: Randomised controlled trials (RCTs) on lixisenatide versus placebo or active comparators for type 2 diabetes were included. PARTICIPANTS: Individual patient data were retrieved to calculate outcomes for patients with elevated liver blood tests. MAIN OUTCOME MEASURES: Normalisation of alanine aminotransferase (ALT) and aspartate aminotransferase...... the proportion of obese or overweight patients with type 2 diabetes who achieve normalisation of ALT. Additional research is needed to determine if the findings translate to clinical outcome measures. TRIAL REGISTRATION NUMBER: PROSPERO; CRD42013005779....

  8. Text analysis of MEDLINE for discovering functional relationships among genes: evaluation of keyword extraction weighting schemes.

    Science.gov (United States)

    Liu, Ying; Navathe, Shamkant B; Pivoshenko, Alex; Dasigi, Venu G; Dingledine, Ray; Ciliax, Brian J

    2006-01-01

    One of the key challenges of microarray studies is to derive biological insights from the gene-expression patterns. Clustering genes by functional keyword association can provide direct information about the functional links among genes. However, the quality of the keyword lists significantly affects the clustering results. We compared two keyword weighting schemes: normalised z-score and term frequency-inverse document frequency (TFIDF). Two gene sets were tested to evaluate the effectiveness of the weighting schemes for keyword extraction for gene clustering. Using established measures of cluster quality, the results produced from TFIDF-weighted keywords outperformed those produced from normalised z-score weighted keywords. The optimised algorithms should be useful for partitioning genes from microarray lists into functionally discrete clusters.
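
    A minimal sketch of the TFIDF weighting scheme compared above is given below, using scikit-learn; the three short "documents" are invented stand-ins for per-gene keyword collections.

    ```python
    # TFIDF keyword weighting with scikit-learn on a toy document set.
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "calcium channel neuron synaptic transmission",
        "calcium signalling kinase phosphorylation cascade",
        "dna repair replication checkpoint kinase",
    ]

    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(docs)             # sparse matrix: documents x terms

    terms = vectorizer.get_feature_names_out()
    row = tfidf[0].toarray().ravel()                   # weights for the first document
    top = row.argsort()[::-1][:3]
    print([(terms[i], round(float(row[i]), 3)) for i in top])
    ```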

  9. Semiclassical geometry of integrable systems

    Science.gov (United States)

    Reshetikhin, Nicolai

    2018-04-01

    The main result of this paper is a formula for the scalar product of semiclassical eigenvectors of two integrable systems on the same symplectic manifold. An important application of this formula is the Ponzano–Regge type of asymptotic of Racah–Wigner coefficients. Dedicated to the memory of P P Kulish.

  10. Algebraic Bethe ansatz for 19-vertex models with reflection conditions

    International Nuclear Information System (INIS)

    Utiel, Wagner

    2003-01-01

    In this work we solve the 19-vertex models using the algebraic Bethe ansatz for diagonal reflection matrices (Sklyanin K-matrices). The eigenvectors, eigenvalues and Bethe equations are given in a general form. Quantum spin chains of spin one derived from the 19-vertex models are also discussed

  11. Multi-Grid Lanczos

    Science.gov (United States)

    Clark, M. A.; Jung, Chulwoo; Lehner, Christoph

    2018-03-01

    We present a Lanczos algorithm utilizing multiple grids that reduces the memory requirements both on disk and in working memory by one order of magnitude for RBC/UKQCD's 48I and 64I ensembles at the physical pion mass. The precision of the resulting eigenvectors is on par with exact deflation.

  12. Robust periodic steady state analysis of autonomous oscillators based on generalized eigenvalues

    NARCIS (Netherlands)

    Mirzavand, R.; Maten, ter E.J.W.; Beelen, T.G.J.; Schilders, W.H.A.; Abdipour, A.

    2011-01-01

    In this paper, we present a new gauge technique for the Newton Raphson method to solve the periodic steady state (PSS) analysis of free-running oscillators in the time domain. To find the frequency a new equation is added to the system of equations. Our equation combines a generalized eigenvector

  13. Robust periodic steady state analysis of autonomous oscillators based on generalized eigenvalues

    NARCIS (Netherlands)

    Mirzavand, R.; Maten, ter E.J.W.; Beelen, T.G.J.; Schilders, W.H.A.; Abdipour, A.; Michielsen, B.; Poirier, J.R.

    2012-01-01

    In this paper, we present a new gauge technique for the Newton Raphson method to solve the periodic steady state (PSS) analysis of free-running oscillators in the time domain. To find the frequency a new equation is added to the system of equations. Our equation combines a generalized eigenvector

  14. Analysis of numerical methods

    CERN Document Server

    Isaacson, Eugene

    1994-01-01

    This excellent text for advanced undergraduates and graduate students covers norms, numerical solution of linear systems and matrix factoring, iterative solutions of nonlinear equations, eigenvalues and eigenvectors, polynomial approximation, and other topics. It offers a careful analysis and stresses techniques for developing new methods, plus many examples and problems. 1966 edition.

  15. A designated centre for people with disabilities operated by Health Service Executive, Donegal

    LENUS (Irish Health Repository)

    Murray, Elizabeth

    2010-10-20

    Abstract Background The past decade has seen considerable interest in the development and evaluation of complex interventions to improve health. Such interventions can only have a significant impact on health and health care if they are shown to be effective when tested, are capable of being widely implemented and can be normalised into routine practice. To date, there is still a problematic gap between research and implementation. The Normalisation Process Theory (NPT) addresses the factors needed for successful implementation and integration of interventions into routine work (normalisation). Discussion In this paper, we suggest that the NPT can act as a sensitising tool, enabling researchers to think through issues of implementation while designing a complex intervention and its evaluation. The need to ensure trial procedures that are feasible and compatible with clinical practice is not limited to trials of complex interventions, and NPT may improve trial design by highlighting potential problems with recruitment or data collection, as well as ensuring the intervention has good implementation potential. Summary The NPT is a new theory which offers trialists a consistent framework that can be used to describe, assess and enhance implementation potential. We encourage trialists to consider using it in their next trial.

  16. The internal rate of return of photovoltaic grid-connected systems. A comprehensive sensitivity analysis

    International Nuclear Information System (INIS)

    Talavera, D.L.; Nofuentes, G.; Aguilera, J.

    2010-01-01

    At present, photovoltaic grid-connected systems (PVGCS) are experiencing a formidable market growth. This is mainly due to a continuous downward trend in PV cost together with some government support programmes launched by many developed countries. However, government bodies and prospective owners/investors are concerned with how changes in existing economic factors - financial incentives and main economic parameters of the PVGCS - that configure a given scenario may affect the profitability of the investment in these systems. Consequently, not only is a mere estimate of the economic profitability in a specific moment required, but also how this profitability may vary according to changes in the existing scenario. In order to enlighten decision-makers and prospective owners/investors of PVGCS, a sensitivity analysis of the internal rate of return (IRR) to some economic factors has been carried out. Three different scenarios have been assumed to represent the three top geographical markets for PV: the Euro area, the USA and Japan. The results obtained in this analysis provide clear evidence that annual loan interest, normalised initial investment subsidy, normalised annual PV electricity yield, PV electricity unitary price and normalised initial investment are ordered from the lowest to the highest impact on the IRR. A short and broad analysis concerning the taxation impact is also provided. (author)
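
    The quantity at the centre of this sensitivity analysis, the internal rate of return, is the discount rate at which the net present value of the project's cash flows is zero. A small self-contained sketch is given below; the normalised cash-flow numbers are purely illustrative and are not taken from the study.

    ```python
    # Toy internal-rate-of-return calculation via bisection on the NPV.
    def npv(rate, cash_flows):
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    def irr(cash_flows, lo=-0.9, hi=1.0, tol=1e-8):
        # assumes npv is positive at `lo` and negative at `hi`
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if npv(mid, cash_flows) > 0.0:
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        return 0.5 * (lo + hi)

    # Year 0: normalised initial investment; years 1-25: net PV electricity revenue.
    cash_flows = [-1.0] + [0.09] * 25
    print(round(irr(cash_flows), 4))
    ```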

  17. Selection of reference genes for expression studies with fish myogenic cell cultures

    Directory of Open Access Journals (Sweden)

    Johnston Ian A

    2009-08-01

    Full Text Available Abstract Background Relatively few studies have used cell culture systems to investigate gene expression and the regulation of myogenesis in fish. To produce robust data from quantitative real-time PCR, mRNA levels need to be normalised using internal reference genes which have stable expression across all experimental samples. We have investigated the expression of eight candidate genes to identify suitable reference genes for use in primary myogenic cell cultures from Atlantic salmon (Salmo salar L.). The software analysis packages geNorm, Normfinder and BestKeeper were used to rank genes according to their stability across 42 samples during the course of myogenic differentiation. Results Initial results showed several of the candidate genes exhibited stable expression throughout myogenic culture while Sdha was identified as the least stable gene. Further analysis with geNorm, Normfinder and BestKeeper identified Ef1α, Hprt1, Ppia and RNApolII as stably expressed. Comparison of data normalised with the geometric average obtained from combinations of any three of these genes showed no significant differences, indicating that any combination of these genes is valid. Conclusion The geometric average of any three of Hprt1, Ef1α, Ppia and RNApolII is suitable for normalisation of gene expression data in primary myogenic cultures from Atlantic salmon.

  18. Selection of reference genes for expression studies with fish myogenic cell cultures.

    Science.gov (United States)

    Bower, Neil I; Johnston, Ian A

    2009-08-10

    Relatively few studies have used cell culture systems to investigate gene expression and the regulation of myogenesis in fish. To produce robust data from quantitative real-time PCR, mRNA levels need to be normalised using internal reference genes which have stable expression across all experimental samples. We have investigated the expression of eight candidate genes to identify suitable reference genes for use in primary myogenic cell cultures from Atlantic salmon (Salmo salar L.). The software analysis packages geNorm, Normfinder and BestKeeper were used to rank genes according to their stability across 42 samples during the course of myogenic differentiation. Initial results showed several of the candidate genes exhibited stable expression throughout myogenic culture while Sdha was identified as the least stable gene. Further analysis with geNorm, Normfinder and BestKeeper identified Ef1alpha, Hprt1, Ppia and RNApolII as stably expressed. Comparison of data normalised with the geometric average obtained from combinations of any three of these genes showed no significant differences, indicating that any combination of these genes is valid. The geometric average of any three of Hprt1, Ef1alpha, Ppia and RNApolII is suitable for normalisation of gene expression data in primary myogenic cultures from Atlantic salmon.
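
    The normalisation rule recommended above can be written down in a few lines: divide the relative quantity of the target gene by the geometric mean of the relative quantities of three reference genes. The sketch below uses invented Ct values; the gene names follow the abstract.

    ```python
    # Normalisation against the geometric mean of three reference genes.
    import numpy as np

    ct = {"target": 24.1, "Hprt1": 27.3, "Ef1a": 18.9, "Ppia": 21.4}   # one sample (made up)

    # Relative quantities assuming 100% amplification efficiency: Q = 2^(-Ct).
    q = {gene: 2.0 ** (-val) for gene, val in ct.items()}

    refs = ["Hprt1", "Ef1a", "Ppia"]
    norm_factor = np.exp(np.mean([np.log(q[g]) for g in refs]))   # geometric mean

    normalised_expression = q["target"] / norm_factor
    print(normalised_expression)
    ```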

  19. Finite Element-Galerkin Approximation of the Eigenvalues of Eigenvectors of Selfadjoint Problems

    Science.gov (United States)

    1988-07-01


  20. Simultaneous multigrid techniques for nonlinear eigenvalue problems: Solutions of the nonlinear Schrödinger-Poisson eigenvalue problem in two and three dimensions

    Science.gov (United States)

    Costiner, Sorin; Ta'asan, Shlomo

    1995-07-01

    Algorithms for nonlinear eigenvalue problems (EP's) often require solving self-consistently a large number of EP's. Convergence difficulties may occur if the solution is not sought in an appropriate region, if global constraints have to be satisfied, or if close or equal eigenvalues are present. Multigrid (MG) algorithms for nonlinear problems and for EP's obtained from discretizations of partial differential EP's have often been shown to be more efficient than single level algorithms. This paper presents MG techniques and a MG algorithm for nonlinear Schrödinger-Poisson EP's. The algorithm overcomes the above mentioned difficulties by combining the following techniques: a MG simultaneous treatment of the eigenvectors, the nonlinearity and the global constraints; MG stable subspace continuation techniques for the treatment of the nonlinearity; and a MG projection coupled with backrotations for separation of solutions. These techniques keep the solutions in an appropriate region, where the algorithm converges fast, and reduce the large number of self-consistent iterations to only a few or one MG simultaneous iteration. The MG projection makes it possible to efficiently overcome difficulties related to clusters of close and equal eigenvalues. Computational examples for the nonlinear Schrödinger-Poisson EP in two and three dimensions, presenting special computational difficulties due to the nonlinearity and to the equal and closely clustered eigenvalues, are demonstrated. For these cases, the algorithm requires O(qN) operations for the calculation of q eigenvectors of size N and for the corresponding eigenvalues. One MG simultaneous cycle per fine level was performed. The total computational cost is equivalent to only a few Gauss-Seidel relaxations per eigenvector. An asymptotic convergence rate of 0.15 per MG cycle is attained.

  1. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory

    Science.gov (United States)

    Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.

    2017-09-01

    In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.

  2. Electrostatic point charge fitting as an inverse problem: Revealing the underlying ill-conditioning

    International Nuclear Information System (INIS)

    Ivanov, Maxim V.; Talipov, Marat R.; Timerghazin, Qadir K.

    2015-01-01

    Atom-centered point charge (PC) model of the molecular electrostatics—a major workhorse of the atomistic biomolecular simulations—is usually parameterized by least-squares (LS) fitting of the point charge values to a reference electrostatic potential, a procedure that suffers from numerical instabilities due to the ill-conditioned nature of the LS problem. To reveal the origins of this ill-conditioning, we start with a general treatment of the point charge fitting problem as an inverse problem and construct an analytical model with the point charges spherically arranged according to Lebedev quadrature, which is naturally suited for the inverse electrostatic problem. This analytical model is contrasted to the atom-centered point-charge model that can be viewed as an irregular quadrature poorly suited for the problem. This analysis shows that the numerical problems of the point charge fitting are due to the decay of the curvatures corresponding to the eigenvectors of the LS sum Hessian matrix. In part, this ill-conditioning is intrinsic to the problem and is related to the decreasing electrostatic contribution of the higher multipole moments, which, in the case of the Lebedev grid model, are directly associated with the Hessian eigenvectors. For the atom-centered model, this association breaks down beyond the first few eigenvectors related to the high-curvature monopole and dipole terms; this leads to an even wider spread-out of the Hessian curvature values. Using these insights, it is possible to alleviate the ill-conditioning of the LS point-charge fitting without introducing external restraints and/or constraints. Also, as the analytical Lebedev grid PC model proposed here can reproduce multipole moments up to a given rank, it may provide a promising alternative to including explicit multipole terms in a force field.
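
    The ill-conditioning discussed above can be made visible numerically: the curvatures of the least-squares Hessian are the squared singular values of the design matrix built from inverse distances. The sketch below uses randomly placed charge sites and reference points, so it only illustrates the numerical effect, not the paper's Lebedev-grid construction.

        # Ill-conditioning of least-squares point-charge fitting: the curvatures of
        # the LS Hessian (squared singular values of the design matrix) decay fast.
        import numpy as np

        rng = np.random.default_rng(1)
        n_charges, n_grid = 10, 400
        charge_sites = rng.normal(scale=1.5, size=(n_charges, 3))   # "atom" positions
        grid_points = rng.normal(scale=5.0, size=(n_grid, 3))       # ESP reference grid

        # Design matrix: A[i, j] = 1 / |r_grid_i - r_charge_j| (atomic units).
        diff = grid_points[:, None, :] - charge_sites[None, :, :]
        A = 1.0 / np.linalg.norm(diff, axis=2)

        sing_vals = np.linalg.svd(A, compute_uv=False)
        curvatures = sing_vals ** 2          # eigenvalues of A^T A, the LS Hessian
        print("curvature ratio (largest/smallest): %.1e"
              % (curvatures[0] / curvatures[-1]))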

  3. Equilibrium beam distribution in an electron storage ring near linear synchrobetatron coupling resonances

    Directory of Open Access Journals (Sweden)

    Boaz Nash

    2006-03-01

    Full Text Available Linear dynamics in a storage ring can be described by the one-turn map matrix. In the case of a resonance where two of the eigenvalues of this matrix are degenerate, a coupling perturbation causes a mixing of the uncoupled eigenvectors. A perturbation formalism is developed to find eigenvalues and eigenvectors of the one-turn map near such a linear resonance. Damping and diffusion due to synchrotron radiation can be obtained by integrating their effects over one turn, and the coupled eigenvectors can be used to find the coupled damping and diffusion coefficients. Expressions for the coupled equilibrium emittances and beam distribution moments are then derived. In addition to the conventional instabilities at the sum, integer, and half-integer resonances, it is found that the coupling can cause an instability through antidamping near a sum resonance even when the symplectic dynamics are stable. As one application of this formalism, the case of linear synchrobetatron coupling is analyzed where the coupling is caused by dispersion in the rf cavity, or by a crab cavity. Explicit closed-form expressions for the sum/difference resonances are given along with the integer/half-integer resonances. The integer and half-integer resonances caused by coupling require particular care. We find an example of this with the case of a crab cavity for the integer resonance of the synchrotron tune. Whether or not there is an instability is determined by the value of the horizontal betatron tune, a unique feature of these coupling-caused integer or half-integer resonances. Finally, the coupled damping and diffusion coefficients along with the equilibrium invariants and projected emittances are plotted as a function of the betatron and synchrotron tunes for an example storage ring based on PEP-II.

  4. Exact Finite Differences. The Derivative on Non Uniformly Spaced Partitions

    Directory of Open Access Journals (Sweden)

    Armando Martínez-Pérez

    2017-10-01

    Full Text Available We define a finite-differences derivative operation, on a non-uniformly spaced partition, which has the exponential function as an exact eigenvector. We discuss some properties of this operator and we propose a definition for the components of a finite-differences momentum operator. This allows us to perform exact discrete calculations.
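
    One plausible reading of such an operator (an assumption here, since the record does not give the formula) divides the forward difference by (exp(λh) − 1)/λ instead of by h; with that choice the exponential is an exact eigenvector on an arbitrary non-uniform grid, as the sketch checks.

        # Check on a non-uniform grid: with the step-dependent denominator
        # (exp(lam*h) - 1)/lam, the operator applied to exp(lam*x) returns exactly
        # lam*exp(lam*x) at every node (up to round-off).  The operator form is an
        # illustrative assumption, not necessarily the paper's definition.
        import numpy as np

        rng = np.random.default_rng(2)
        x = np.cumsum(rng.uniform(0.01, 0.1, size=40))   # non-uniformly spaced nodes
        lam = 1.7
        f = np.exp(lam * x)

        h = np.diff(x)
        D_f = (f[1:] - f[:-1]) / ((np.exp(lam * h) - 1.0) / lam)

        rel_err = np.max(np.abs(D_f / (lam * f[:-1]) - 1.0))
        print("max relative error:", rel_err)            # round-off level, ~1e-15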

  5. Eigenvalue for Densely Defined Perturbations of Multivalued Maximal Monotone Operators in Reflexive Banach Spaces

    Directory of Open Access Journals (Sweden)

    Boubakari Ibrahimou

    2013-01-01

    maximal monotone with … and … . Using the topological degree theory developed by Kartsatos and Quarcoo, we study the eigenvalue problem where the operator is single-valued of class … . The existence of continuous branches of eigenvectors of infinite length is investigated, and the results can be easily extended to the case where the operator is multivalued.

  6. Radioactivity computation of steady-state and pulsed fusion reactors operation

    International Nuclear Information System (INIS)

    Attaya, H.

    1994-06-01

    Different mathematical methods are used to calculate the nuclear transmutation in steady-state and pulsed neutron irradiation. These methods are the Schur decomposition, the eigenvector decomposition, and the Padé approximation of the matrix exponential function. In the case of the linear decay chain approximation, a simple algorithm is used to evaluate the transition matrices.
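
    For a constant rate matrix A, the eigenvector-decomposition route amounts to N(t) = V exp(Λt) V⁻¹ N(0); the toy three-nuclide chain below (made-up decay constants) checks this against a Padé-based matrix exponential.

        # Toy decay chain solved by eigenvector decomposition of the rate matrix,
        # checked against scipy's Pade-based matrix exponential.
        import numpy as np
        from scipy.linalg import expm

        lam = np.array([0.30, 0.05, 0.01])            # hypothetical decay constants (1/h)
        A = np.array([[-lam[0],     0.0,     0.0],
                      [ lam[0], -lam[1],     0.0],
                      [    0.0,  lam[1], -lam[2]]])   # linear chain 1 -> 2 -> 3

        N0 = np.array([1.0e6, 0.0, 0.0])              # initial inventory of nuclide 1
        t = 24.0                                      # hours

        w, V = np.linalg.eig(A)                       # eigenvalues / eigenvectors
        N_eig = V @ np.diag(np.exp(w * t)) @ np.linalg.solve(V, N0)
        N_pade = expm(A * t) @ N0

        print("eigenvector decomposition:", N_eig.real)
        print("Pade matrix exponential:  ", N_pade)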

  7. Multi-Grid Lanczos

    Directory of Open Access Journals (Sweden)

    Clark M. A.

    2018-01-01

    Full Text Available We present a Lanczos algorithm utilizing multiple grids that reduces the memory requirements both on disk and in working memory by one order of magnitude for RBC/UKQCD’s 48I and 64I ensembles at the physical pion mass. The precision of the resulting eigenvectors is on par with exact deflation.

  8. Universal growth modes of high-elevation conifers

    Czech Academy of Sciences Publication Activity Database

    Datsenko, N. M.; Sonechkin, D. M.; Büntgen, Ulf; Yang, B.

    2016-01-01

    Roč. 38, JUN (2016), s. 38-50 ISSN 1125-7865 Institutional support: RVO:67179843 Keywords : tree-ring chronologies * summer temperature-variations * northeastern tibetan plateau * climate signal * fennoscandian summers * annual precipitation * density * variability * qinghai * Growth modes * Ring width and maximum latewood density * Eigenvector analysis Subject RIV: EF - Botanics Impact factor: 2.259, year: 2016

  9. An Application of the Vandermonde Determinant

    Science.gov (United States)

    Xu, Junqin; Zhao, Likuan

    2006-01-01

    Eigenvalue is an important concept in Linear Algebra. It is well known that the eigenvectors corresponding to different eigenvalues of a square matrix are linearly independent. In most of the existing textbooks, this result is proven using mathematical induction. In this note, a new proof using the Vandermonde determinant is given. It is shown that this…
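
    For readers who want the step the note alludes to, a compressed version of the standard Vandermonde argument (not a quotation from the article) runs as follows:

        % Eigenvectors belonging to distinct eigenvalues are linearly independent.
        % Let A v_i = \lambda_i v_i with \lambda_1, \dots, \lambda_n distinct and
        % suppose c_1 v_1 + \dots + c_n v_n = 0.  Applying A^k for k = 0, \dots, n-1:
        \[
          \sum_{i=1}^{n} c_i \lambda_i^{\,k} v_i = 0 , \qquad k = 0, 1, \dots, n-1 .
        \]
        % The coefficient matrix acting on the vectors c_i v_i is the Vandermonde matrix
        \[
          V = \begin{pmatrix}
                1 & 1 & \cdots & 1 \\
                \lambda_1 & \lambda_2 & \cdots & \lambda_n \\
                \vdots &  &  & \vdots \\
                \lambda_1^{\,n-1} & \lambda_2^{\,n-1} & \cdots & \lambda_n^{\,n-1}
              \end{pmatrix},
          \qquad
          \det V = \prod_{1 \le i < j \le n} (\lambda_j - \lambda_i) \neq 0 ,
        \]
        % so each c_i v_i = 0; since every v_i \neq 0, all c_i = 0.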

  10. Creep Damage Evaluation of Titanium Alloy Using Nonlinear Ultrasonic Lamb Waves

    International Nuclear Information System (INIS)

    Xiang Yan-Xun; Xuan Fu-Zhen; Deng Ming-Xi; Chen Hu; Chen Ding-Yue

    2012-01-01

    The creep damage in the high temperature resistant titanium alloy Ti60 is measured using the nonlinear effect of an ultrasonic Lamb wave. The results show that the normalised acoustic nonlinearity of a Lamb wave exhibits an 'increase-decrease' tendency as a function of the creep damage. The influence of microstructure evolution on the nonlinear Lamb wave propagation has been analyzed based on metallographic studies, which reveal that the normalised acoustic nonlinearity increases due to a rise in the precipitation volume fraction and the dislocation density in the early stage, and it decreases as a combined result of dislocation change and micro-void initiation in the material. The nonlinear Lamb wave exhibits potential for the assessment of the remaining creep life in metals.

  11. ANTICOOL: Simulating positron cooling and annihilation in atomic gases

    Science.gov (United States)

    Green, D. G.

    2018-03-01

    The Fortran program ANTICOOL, developed to simulate positron cooling and annihilation in atomic gases for positron energies below the positronium-formation threshold, is presented. Given positron-atom elastic scattering phase shifts, normalised annihilation rates Zeff, and γ spectra as a function of momentum k, ANTICOOL enables the calculation of the positron momentum distribution f(k , t) as a function of time t, the time-varying normalised annihilation rate Z¯eff(t) , the lifetime spectrum and time-varying annihilation γ spectra. The capability and functionality of the program is demonstrated via a tutorial-style example for positron cooling and annihilation in room temperature helium gas, using accurate scattering and annihilation cross sections and γ spectra calculated using many-body theory as input.

  12. Enhancing yeast transcription analysis through integration of heterogeneous data

    DEFF Research Database (Denmark)

    Grotkjær, Thomas; Nielsen, Jens

    2004-01-01

    DNA microarray technology enables the simultaneous measurement of the transcript level of thousands of genes. Primary analysis can be done with basic statistical tools and cluster analysis, but effective and in depth analysis of the vast amount of transcription data requires integration with data from several heterogeneous data sources, such as upstream promoter sequences, genome-scale metabolic models, annotation databases and other experimental data. In this review, we discuss how experimental design, normalisation, heterogeneous data and mathematical modelling can enhance analysis of Saccharomyces cerevisiae whole genome transcription data. A special focus is on the quantitative aspects of normalisation and mathematical modelling approaches, since they are expected to play an increasing role in future DNA microarray analysis studies. Data analysis is exemplified with cluster analysis ...

  13. Developing Public Policies for New Welfare Technologies – A Case Study of Telemedicine and Telehomecare

    DEFF Research Database (Denmark)

    Tambo, Torben

    2012-01-01

    ... and communication-based technologies (ICT) for homecare and monitoring (telemedicine, telehomecare). Despite major investments and national commitment, public policies have not yet found a general approach to move from technological and clinical opportunity into large-scale regular use of the technology (normalisation). This article provides two case studies from Denmark; one case with hypertension monitoring at a local level and another case on national policy implementation through funding of selected demonstration projects. Among the findings are that policy-making processes certainly face major challenges in capturing research and development for the transition of technologies into working practice. Furthermore, policy approaches of supporting experimentation and demonstration are found inadequate in promoting technology into a level of normalisation in highly cross-organisational operational environments...

  14. SU2 nonstandard bases: the case of mutually unbiased bases

    International Nuclear Information System (INIS)

    Olivier, Albouy; Kibler, Maurice R.

    2007-02-01

    This paper deals with bases in a finite-dimensional Hilbert space. Such a space can be realized as a subspace of the representation space of SU(2) corresponding to an irreducible representation of SU(2). The representation theory of SU(2) is reconsidered via the use of two truncated deformed oscillators. This leads to replacing the familiar scheme [j^2, j_z] by a scheme [j^2, v_ra], where the two-parameter operator v_ra is defined in the universal enveloping algebra of the Lie algebra su(2). The eigenvectors of the commuting set of operators [j^2, v_ra] are adapted to a tower of chains SO(3) ⊃ C_{2j+1} (2j ∈ N*), where C_{2j+1} is the cyclic group of order 2j + 1. In the case where 2j + 1 is prime, the corresponding eigenvectors generate a complete set of mutually unbiased bases. Some useful relations on generalized quadratic Gauss sums are exposed in three appendices. (authors)

  15. Extending the eigCG algorithm to nonsymmetric Lanczos for linear systems with multiple right-hand sides

    Energy Technology Data Exchange (ETDEWEB)

    Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas

    2014-08-01

    The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.

  16. Tridiagonal realization of the antisymmetric Gaussian β-ensemble

    International Nuclear Information System (INIS)

    Dumitriu, Ioana; Forrester, Peter J.

    2010-01-01

    The Householder reduction of a member of the antisymmetric Gaussian unitary ensemble gives an antisymmetric tridiagonal matrix with all independent elements. The random variables permit the introduction of a positive parameter β, and the eigenvalue probability density function of the corresponding random matrices can be computed explicitly, as can the distribution of (q_i), the first components of the eigenvectors. Three proofs are given. One involves an inductive construction based on bordering of a family of random matrices which are shown to have the same distributions as the antisymmetric tridiagonal matrices. This proof uses the Dixon-Anderson integral from Selberg integral theory. A second proof involves the explicit computation of the Jacobian for the change of variables between real antisymmetric tridiagonal matrices, its eigenvalues, and (q_i). The third proof maps matrices from the antisymmetric Gaussian β-ensemble to those realizing particular examples of the Laguerre β-ensemble. In addition to these proofs, we note some simple properties of the shooting eigenvector and associated Pruefer phases of the random matrices.

  17. Color-based scale-invariant feature detection applied in robot vision

    Science.gov (United States)

    Gao, Jian; Huang, Xinhan; Peng, Gang; Wang, Min; Li, Xinde

    2007-11-01

    The scale-invariant feature detecting methods always require a lot of computation yet sometimes still fail to meet the real-time demands in robot vision fields. To solve the problem, a quick method for detecting interest points is presented. To decrease the computation time, the detector selects as interest points those whose scale normalized Laplacian values are the local extrema in the nonholonomic pyramid scale space. The descriptor is built with several subregions, whose width is proportional to the scale factor, and the coordinates of the descriptor are rotated in relation to the interest point orientation just like the SIFT descriptor. The eigenvector is computed in the original color image and the mean values of the normalized colors g and b in each subregion are chosen to be the components of the eigenvector. Compared with the SIFT descriptor, this descriptor's dimension is considerably reduced, which simplifies the point matching process. The performance of the method is analyzed in theory in this paper and the experimental results confirm its validity.

  18. MiRNA-TF-gene network analysis through ranking of biomolecules for multi-informative uterine leiomyoma dataset.

    Science.gov (United States)

    Mallik, Saurav; Maulik, Ujjwal

    2015-10-01

    Gene ranking is an important problem in bioinformatics. Here, we propose a new framework for ranking biomolecules (viz., miRNAs, transcription factors/TFs and genes) in a multi-informative uterine leiomyoma dataset having both gene expression and methylation data, using a (statistical) eigenvector centrality based approach. At first, genes that are both differentially expressed and methylated are identified using the Limma statistical test. A network comprising these genes, corresponding TFs from the TRANSFAC and ITFP databases, and targeter miRNAs from the miRWalk database is then built. The biomolecules are then ranked based on eigenvector centrality. Our proposed method provides better average accuracy in hub gene and non-hub gene classifications than other methods. Furthermore, pre-ranked Gene Set Enrichment Analysis is applied to the pathway database as well as the GO-term databases of the Molecular Signatures Database, providing a pre-ranked gene list based on different centrality values for comparison among the ranking methods. Finally, top novel potential gene markers for uterine leiomyoma are provided. Copyright © 2015 Elsevier Inc. All rights reserved.
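
    The centrality step itself is easy to sketch: eigenvector centrality is the leading eigenvector of the network's adjacency matrix, obtainable by power iteration. The tiny miRNA-TF-gene network below is purely illustrative; the real pipeline additionally uses Limma filtering and the TRANSFAC/ITFP/miRWalk interaction databases.

        # Eigenvector centrality by power iteration on an undirected adjacency
        # matrix, followed by ranking; the tiny network is purely illustrative.
        import numpy as np

        nodes = ["miR-21", "TF-E2F1", "geneA", "geneB", "geneC"]
        edges = [(0, 2), (0, 3), (1, 2), (1, 3), (1, 4), (2, 3)]

        A = np.zeros((len(nodes), len(nodes)))
        for i, j in edges:
            A[i, j] = A[j, i] = 1.0

        x = np.ones(len(nodes))
        for _ in range(200):                 # power iteration
            x_new = A @ x
            x_new /= np.linalg.norm(x_new)
            converged = np.linalg.norm(x_new - x) < 1e-12
            x = x_new
            if converged:
                break

        for name, score in sorted(zip(nodes, x), key=lambda p: -p[1]):
            print(f"{name:10s} centrality = {score:.3f}")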

  19. Ab initio nuclear structure - the large sparse matrix eigenvalue problem

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P; Maris, Pieter [Department of Physics, Iowa State University, Ames, IA, 50011 (United States); Ng, Esmond; Yang, Chao [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Sosonkina, Masha, E-mail: jvary@iastate.ed [Scalable Computing Laboratory, Ames Laboratory, Iowa State University, Ames, IA, 50011 (United States)

    2009-07-01

    The structure and reactions of light nuclei represent fundamental and formidable challenges for microscopic theory based on realistic strong interaction potentials. Several ab initio methods have now emerged that provide nearly exact solutions for some nuclear properties. The ab initio no core shell model (NCSM) and the no core full configuration (NCFC) method frame this quantum many-particle problem as a large sparse matrix eigenvalue problem where one evaluates the Hamiltonian matrix in a basis space consisting of many-fermion Slater determinants and then solves for a set of the lowest eigenvalues and their associated eigenvectors. The resulting eigenvectors are employed to evaluate a set of experimental quantities to test the underlying potential. For fundamental problems of interest, the matrix dimension often exceeds 10^10 and the number of nonzero matrix elements may saturate available storage on present-day leadership class facilities. We survey recent results and advances in solving this large sparse matrix eigenvalue problem. We also outline the challenges that lie ahead for achieving further breakthroughs in fundamental nuclear theory using these ab initio approaches.

  20. Ab initio nuclear structure - the large sparse matrix eigenvalue problem

    International Nuclear Information System (INIS)

    Vary, James P; Maris, Pieter; Ng, Esmond; Yang, Chao; Sosonkina, Masha

    2009-01-01

    The structure and reactions of light nuclei represent fundamental and formidable challenges for microscopic theory based on realistic strong interaction potentials. Several ab initio methods have now emerged that provide nearly exact solutions for some nuclear properties. The ab initio no core shell model (NCSM) and the no core full configuration (NCFC) method frame this quantum many-particle problem as a large sparse matrix eigenvalue problem where one evaluates the Hamiltonian matrix in a basis space consisting of many-fermion Slater determinants and then solves for a set of the lowest eigenvalues and their associated eigenvectors. The resulting eigenvectors are employed to evaluate a set of experimental quantities to test the underlying potential. For fundamental problems of interest, the matrix dimension often exceeds 10^10 and the number of nonzero matrix elements may saturate available storage on present-day leadership class facilities. We survey recent results and advances in solving this large sparse matrix eigenvalue problem. We also outline the challenges that lie ahead for achieving further breakthroughs in fundamental nuclear theory using these ab initio approaches.
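
    At a far smaller scale than the dimensions quoted above, the basic workflow (a Lanczos-type iteration for the lowest eigenpairs of a sparse symmetric matrix) looks like the following sketch; the test Hamiltonian is a random sparse symmetric matrix, not a nuclear-physics one.

        # Lowest eigenpairs of a large sparse symmetric matrix with an implicitly
        # restarted Lanczos method (scipy wraps ARPACK); the matrix is a random
        # sparse symmetric stand-in for a configuration-interaction Hamiltonian.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh

        n = 20000
        rng = np.random.default_rng(3)
        offdiag = sp.random(n, n, density=1e-4, random_state=3, format="csr")
        H = offdiag + offdiag.T + sp.diags(rng.normal(size=n))

        vals, vecs = eigsh(H, k=5, which="SA")   # five smallest algebraic eigenvalues
        print("lowest eigenvalues:", vals)
        print("ground-state residual:",
              np.linalg.norm(H @ vecs[:, 0] - vals[0] * vecs[:, 0]))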

  1. Random matrix theory and fund of funds portfolio optimisation

    Science.gov (United States)

    Conlon, T.; Ruskin, H. J.; Crane, M.

    2007-08-01

    The proprietary nature of Hedge Fund investing means that it is common practice for managers to release minimal information about their returns. The construction of a fund of hedge funds portfolio requires a correlation matrix which often has to be estimated using a relatively small sample of monthly returns data which induces noise. In this paper, random matrix theory (RMT) is applied to a cross-correlation matrix C, constructed using hedge fund returns data. The analysis reveals a number of eigenvalues that deviate from the spectrum suggested by RMT. The components of the deviating eigenvectors are found to correspond to distinct groups of strategies that are applied by hedge fund managers. The inverse participation ratio is used to quantify the number of components that participate in each eigenvector. Finally, the correlation matrix is cleaned by separating the noisy part from the non-noisy part of C. This technique is found to greatly reduce the difference between the predicted and realised risk of a portfolio, leading to an improved risk profile for a fund of hedge funds.
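
    The cleaning step can be sketched as eigenvalue clipping: eigenvalues of the sample correlation matrix below the Marchenko-Pastur edge (1 + sqrt(N/T))^2 are treated as noise and flattened to their mean. The returns below are synthetic stand-ins for the (confidential) hedge fund data.

        # Random-matrix "cleaning" of a correlation matrix by eigenvalue clipping.
        import numpy as np

        rng = np.random.default_rng(4)
        n_assets, n_obs = 50, 250                     # N funds, T monthly returns
        returns = rng.normal(size=(n_obs, n_assets))  # synthetic returns
        returns[:, :10] += 0.5 * rng.normal(size=(n_obs, 1))   # one common "strategy"

        corr = np.corrcoef(returns, rowvar=False)
        eigval, eigvec = np.linalg.eigh(corr)

        lambda_plus = (1.0 + np.sqrt(n_assets / n_obs)) ** 2   # Marchenko-Pastur edge
        noise = eigval < lambda_plus
        eigval_clean = eigval.copy()
        eigval_clean[noise] = eigval[noise].mean()             # flatten the noise band

        corr_clean = eigvec @ np.diag(eigval_clean) @ eigvec.T
        np.fill_diagonal(corr_clean, 1.0)
        print(f"{noise.sum()} of {n_assets} eigenvalues were treated as noise")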

  2. SU{sub 2} nonstandard bases: the case of mutually unbiased bases

    Energy Technology Data Exchange (ETDEWEB)

    Olivier, Albouy; Kibler, Maurice R. [Universite de Lyon, Institut de Physique Nucleaire de Lyon, Universite Lyon, CNRS/IN2P3, 43 bd du 11 novembre 1918, F-69622 Villeurbanne Cedex (France)

    2007-02-15

    This paper deals with bases in a finite-dimensional Hilbert space. Such a space can be realized as a subspace of the representation space of SU(2) corresponding to an irreducible representation of SU(2). The representation theory of SU(2) is reconsidered via the use of two truncated deformed oscillators. This leads to replacing the familiar scheme [j^2, j_z] by a scheme [j^2, v_ra], where the two-parameter operator v_ra is defined in the universal enveloping algebra of the Lie algebra su(2). The eigenvectors of the commuting set of operators [j^2, v_ra] are adapted to a tower of chains SO(3) ⊃ C_{2j+1} (2j ∈ N*), where C_{2j+1} is the cyclic group of order 2j + 1. In the case where 2j + 1 is prime, the corresponding eigenvectors generate a complete set of mutually unbiased bases. Some useful relations on generalized quadratic Gauss sums are exposed in three appendices. (authors)

  3. Ossobennosti mezhetnitsheskoi sotsialno-polititsheskoi situatsii v Estonii i osnovnõje puti jejo normalizatsii / Vladimir Parol

    Index Scriptorium Estoniae

    Parol, Vladimir

    2000-01-01

    Kokkuvõte (translated from Estonian): Peculiarities of the inter-ethnic social-political situation in Estonia and the ways of its normalisation. Summary: Inter-ethnic social-political situation in Estonia, its peculiarities and ways of normalisation

  4. Cleaning the correlation matrix with a denoising autoencoder

    OpenAIRE

    Hayou, Soufiane

    2017-01-01

    In this paper, we use an adjusted autoencoder to estimate the true eigenvalues of the population correlation matrix from the sample correlation matrix when the number of samples is small. We show that the model outperforms the Rotational Invariant Estimator (Bouchaud) which is the optimal estimator in the sample eigenvectors basis when the dimension goes to infinity.

  5. On certain properties of some generalized special functions

    International Nuclear Information System (INIS)

    Pathan, M.A.; Khan, Subuhi

    2002-06-01

    In this paper, we derive a result concerning eigenvector for the product of two operators defined on a Lie algebra of endomorphisms of a vector space. The results given by Radulescu, Mandal and authors follow as special cases of this result. Further using these results, we deduce certain properties of generalized Hermite polynomials and Hermite Tricomi functions. (author)

  6. Singularly Perturbed Equations in the Critical Case.

    Science.gov (United States)

    1980-02-01

    ... asymptotic properties of the differential equation (1) in the noncritical case (all Re λ_i(t) ...). We will consider the critical case (k ...); ... the inequality (3), that is, Re λ_i(t, a) < 0 (58). The matrix ..., consisting of the eigenvectors corresponding to ω = 0, now has the form ...

  7. An algorithm to compute the square root of 3x3 positive definite matrix

    International Nuclear Information System (INIS)

    Franca, L.P.

    1988-06-01

    An efficient closed form to compute the square root of a 3 x 3 positive definite matrix is presented. The derivation employs the Cayley-Hamilton theorem, avoiding calculation of eigenvectors. We show that evaluation of one eigenvalue of the square root matrix is needed and cannot be circumvented. The algorithm is robust and efficient. (author) [pt
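
    A compact sketch in the same spirit (an interpretation, not the paper's exact algorithm): once the invariants of U = A^(1/2) are known, the Cayley-Hamilton theorem gives U in closed form with no eigenvector computation. Here the invariants are formed from the eigenvalues of A for simplicity, whereas the paper shows that one eigenvalue of the square root suffices.

        # Square root of a 3x3 symmetric positive definite matrix via Cayley-Hamilton:
        # with U^2 = A and the invariants I_U, II_U, III_U of U, the characteristic
        # equation of U gives  U = (A + II_U*I)^{-1} (I_U*A + III_U*I).
        import numpy as np

        def sqrtm_3x3_spd(A):
            s = np.sqrt(np.linalg.eigvalsh(A))        # eigenvalues only, no eigenvectors
            I_U = s.sum()
            II_U = s[0] * s[1] + s[0] * s[2] + s[1] * s[2]
            III_U = s.prod()
            I3 = np.eye(3)
            return np.linalg.solve(A + II_U * I3, I_U * A + III_U * I3)

        B = np.array([[2.0, 0.3, 0.1],
                      [0.3, 1.5, 0.2],
                      [0.1, 0.2, 1.0]])
        U = sqrtm_3x3_spd(B)
        print("max |U @ U - B| =", np.abs(U @ U - B).max())    # ~1e-15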

  8. Optimized Binomial Quantum States of Complex Oscillators with Real Spectrum

    International Nuclear Information System (INIS)

    Zelaya, K D; Rosas-Ortiz, O

    2016-01-01

    Classical and nonclassical states of quantum complex oscillators with real spectrum are presented. Such states are bi-orthonormal superpositions of n +1 energy eigenvectors of the system with binomial-like coefficients. For large values of n these optimized binomial states behave as photon added coherent states when the imaginary part of the potential is cancelled. (paper)

  9. Identification of stable reference genes for quantitative PCR in cells derived from chicken lymphoid organs.

    Science.gov (United States)

    Borowska, D; Rothwell, L; Bailey, R A; Watson, K; Kaiser, P

    2016-02-01

    Quantitative polymerase chain reaction (qPCR) is a powerful technique for quantification of gene expression, especially genes involved in immune responses. Although qPCR is a very efficient and sensitive tool, variations in the enzymatic efficiency, quality of RNA and the presence of inhibitors can lead to errors. Therefore, qPCR needs to be normalised to obtain reliable results and allow comparison. The most common approach is to use reference genes as internal controls in qPCR analyses. In this study, expression of seven genes, including β-actin (ACTB), β-2-microglobulin (B2M), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), β-glucuronidase (GUSB), TATA box binding protein (TBP), α-tubulin (TUBAT) and 28S ribosomal RNA (r28S), was determined in cells isolated from chicken lymphoid tissues and stimulated with three different mitogens. The stability of the genes was measured using geNorm, NormFinder and BestKeeper software. The results from both geNorm and NormFinder were that the three most stably expressed genes in this panel were TBP, GAPDH and r28S. BestKeeper did not generate clear answers because of the highly heterogeneous sample set. Based on these data we will include TBP in future qPCR normalisation. The study shows the importance of appropriate reference gene normalisation in other tissues before qPCR analysis. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. With Reference to Reference Genes: A Systematic Review of Endogenous Controls in Gene Expression Studies.

    Science.gov (United States)

    Chapman, Joanne R; Waldenström, Jonas

    2015-01-01

    The choice of reference genes that are stably expressed amongst treatment groups is a crucial step in real-time quantitative PCR gene expression studies. Recent guidelines have specified that a minimum of two validated reference genes should be used for normalisation. However, a quantitative review of the literature showed that the average number of reference genes used across all studies was 1.2. Thus, the vast majority of studies continue to use a single gene, with β-actin (ACTB) and/or glyceraldehyde 3-phosphate dehydrogenase (GAPDH) being commonly selected in studies of vertebrate gene expression. Few studies (15%) tested a panel of potential reference genes for stability of expression before using them to normalise data. Amongst studies specifically testing reference gene stability, few found ACTB or GAPDH to be optimal, whereby these genes were significantly less likely to be chosen when larger panels of potential reference genes were screened. Fewer reference genes were tested for stability in non-model organisms, presumably owing to a dearth of available primers in less well characterised species. Furthermore, the experimental conditions under which real-time quantitative PCR analyses were conducted had a large influence on the choice of reference genes, whereby different studies of rat brain tissue showed different reference genes to be the most stable. These results highlight the importance of validating the choice of normalising reference genes before conducting gene expression studies.

  11. Projectile penetration into ballistic gelatin.

    Science.gov (United States)

    Swain, M V; Kieser, D C; Shah, S; Kieser, J A

    2014-01-01

    Ballistic gelatin is frequently used as a model for soft biological tissues that experience projectile impact. In this paper we investigate the response of a number of gelatin materials to the penetration of spherical steel projectiles (7 to 11 mm diameter) over a range of lower impacting velocities. Plots of penetration depth against projectile velocity are found to be linear for all systems above a certain threshold velocity required for initiating penetration. The data for a specific material impacted with different diameter spheres were able to be condensed to a single curve when the penetration depth was normalised by the projectile diameter. When the results are compared with a number of predictive relationships available in the literature, it is found that over the range of projectiles and compositions used, the results fit a simple relationship that takes into account the projectile diameter, the threshold velocity for penetration into the gelatin and a value of the shear modulus of the gelatin estimated from the threshold velocity for penetration. The normalised depth is found to fit the elastic Froude number when this is modified to allow for a threshold impact velocity. The normalised penetration data are found to best fit this modified elastic Froude number with a slope of 1/2 instead of 1/3 as suggested by Akers and Belmonte (2006). Possible explanations for this difference are discussed. © 2013 Published by Elsevier Ltd.

  12. A method of adjusting SUV for injection-acquisition time differences in 18F-FDG PET Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Laffon, Eric [Hopital du Haut Leveque, CHU de Bordeaux, Pessac (France); Centre de Recherche Cardio-Thoracique, Bordeaux (France); Hopital du Haut-Leveque, Service de Medecine Nucleaire, Pessac (France); Clermont, Henri de [Hopital du Haut Leveque, CHU de Bordeaux, Pessac (France); Marthan, Roger [Hopital du Haut Leveque, CHU de Bordeaux, Pessac (France); Centre de Recherche Cardio-Thoracique, Bordeaux (France)

    2011-11-15

    A time normalisation method of tumour SUVs in 18F-FDG PET imaging is proposed that has been verified in lung cancer patients. A two-compartment model analysis showed that, when SUV is not corrected for 18F physical decay (SUV_uncorr), its value is within 5% of its peak value (t = 79 min) between 55 and 110 min after injection, in each individual patient. In 10 patients, each with 1 or more malignant lesions (n = 15), two PET acquisitions were performed within this time delay, and the maximal SUV of each lesion, both corrected and uncorrected, was assessed. No significant difference was found between the two uncorrected SUVs, whereas there was a significant difference between the two corrected ones: mean differences were 0.04 ± 0.22 and 3.24 ± 0.75 g.ml^-1, respectively (95% confidence intervals). Therefore, a simple normalisation of decay-corrected SUV for time differences after injection is proposed: SUV_N = 1.66*SUV_uncorr, where the factor 1.66 arises from decay correction at t = 79 min. When 18F-FDG PET imaging is performed within the range 55-110 min after injection, a simple SUV normalisation for time differences after injection has been verified in patients with lung cancer, with a ±2.5% relative measurement uncertainty. (orig.)
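
    The 1.66 factor can be checked directly: it is just the 18F decay correction evaluated at the 79 min reference time. The sketch below uses the 109.77 min half-life; the small difference from the quoted 1.66 comes from rounding of the reference time and half-life.

        # Decay-correction factor for 18F at the 79-minute reference time point:
        # SUV_N = SUV_uncorr * 2**(t / T_half).
        t_ref = 79.0        # minutes after injection
        t_half = 109.77     # 18F half-life in minutes
        factor = 2.0 ** (t_ref / t_half)
        print(f"normalisation factor = {factor:.3f}")    # ~1.65

        suv_uncorrected = 3.1                            # hypothetical lesion value
        print(f"SUV_N = {factor * suv_uncorrected:.2f}")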

  13. Enhanced

    Directory of Open Access Journals (Sweden)

    Martin I. Bayala

    2014-06-01

    Full Text Available Land Surface Temperature (LST) is a key parameter in the energy balance model. However, the spatial resolution of the retrieved LST from sensors with high temporal resolution is not accurate enough to be used in local-scale studies. To explore the LST–Normalised Difference Vegetation Index relationship potential and obtain thermal images with high spatial resolution, six enhanced image sharpening techniques were assessed: the disaggregation procedure for radiometric surface temperatures (TsHARP), the Dry Edge Quadratic Function, the Difference of Edges (Ts∗DL) and three models supported by the relationship of surface temperature and water stress of vegetation (Normalised Difference Water Index, Normalised Difference Infrared Index and Soil wetness index). Energy Balance Station data and in situ measurements were used to validate the enhanced LST images over a mixed agricultural landscape in the sub-humid Pampean Region of Argentina (PRA), during 2006–2010. Landsat Thematic Mapper (TM) and Moderate Resolution Imaging Spectroradiometer (EOS-MODIS) thermal datasets were assessed for different spatial resolutions (e.g., 960, 720 and 240 m) and the performances were compared with global and local TsHARP procedures. Results suggest that the Ts∗DL technique is the most adequate for simulating LST to high spatial resolution over the heterogeneous landscape of a sub-humid region, showing an average root mean square error of less than 1 K.

  14. Bone to pick: the importance of evaluating reference genes for RT-qPCR quantification of gene expression in craniosynostosis and bone-related tissues and cells

    Directory of Open Access Journals (Sweden)

    Yang Xianxian

    2012-05-01

    Full Text Available Abstract Background RT-qPCR is a common tool for quantification of gene expression, but its accuracy is dependent on the choice and stability (steady state expression levels) of the reference gene/s used for normalization. To date, in the bone field, there have been few studies to determine the most stable reference genes and, usually, RT-qPCR data is normalised to non-validated reference genes, most commonly GAPDH, ACTB and 18 S rRNA. Here we draw attention to the potential deleterious impact of using classical reference genes to normalise expression data for bone studies without prior validation of their stability. Results Using the geNorm and Normfinder programs, panels of mouse and human genes were assessed for their stability under three different experimental conditions: 1) disease progression of Crouzon syndrome (craniosynostosis) in a mouse model, 2) proliferative culture of cranial suture cells isolated from craniosynostosis patients and 3) osteogenesis of a mouse bone marrow stromal cell line. We demonstrate that classical reference genes are not always the most ‘stable’ genes and that gene ‘stability’ is highly dependent on experimental conditions. Selected stable genes, individually or in combination, were then used to normalise osteocalcin and alkaline phosphatase gene expression data during cranial suture fusion in the craniosynostosis mouse model and strategies compared. Strikingly, the expression trends of alkaline phosphatase and osteocalcin varied significantly when normalised to the least stable, the most stable or the three most stable genes. Conclusion To minimise errors in evaluating gene expression levels, analysis of a reference panel and subsequent normalization to several stable genes is strongly recommended over normalization to a single gene. In particular, we conclude that use of single, non-validated “housekeeping” genes such as GAPDH, ACTB and 18 S rRNA, currently a widespread practice by researchers in

  15. Myocardial hypertrophy in the recipient with twin-to-twin transfusion syndrome

    DEFF Research Database (Denmark)

    Jeppesen, D.L.; Jorgensen, F.S.; Pryds, O.A.

    2008-01-01

    pressure measurements revealed persistent systemic hypertension. Biventricular hypertrophy was demonstrated by echocardiography. Blood pressure normalised after treatment with Nifedipine and the cardiac hypertrophy subsided over the following weeks. A potential contributing mechanism is intrauterine...

  16. Levelling-out and register variation in the translations of ...

    African Journals Online (AJOL)

    Kate H

    Explicitation, simplification, normalisation and levelling-out, the four features of translation .... limited amount of attention levelling-out has received, there is consequently an ..... The subcorpus of medical translations is divided into two divisions: ...

  17. Standardization in dust emission measurement; Mesure des emissions de poussieres normalisation

    Energy Technology Data Exchange (ETDEWEB)

    Perret, R. [INERIS, 60 - Verneuil-en-Halatte, (France)

    1996-12-31

    The European Standardization Committee (CEN TC 264 WG5) is developing a new reference method for measuring particulate emissions, suitable for concentrations below 20 mg/m^3 and especially for concentrations around 5 mg/m^3; the measuring method should be applicable to waste incinerator effluents and more generally to industrial effluents. Testing protocols and data analysis have been examined, and repeatability and reproducibility issues are discussed.

  18. Resolution function normalisation and secondary extinction in neutron triple-axis spectrometry

    International Nuclear Information System (INIS)

    Tindle, G.L.

    1987-01-01

    The theory of resolution correction in triple-axis spectrometry is developed from first principles. It is demonstrated that for ideally imperfect thin crystals the formulation coincides with that introduced initially by Cooper and Nathans and subsequently considered by Dorner. The predicted energy variation of peak Bragg reflectivities of monochromator and analyser crystals in Bragg case scattering is such as to confirm experimental data. In the Laue case to obtain results compatible with experiment one has to invoke theories of secondary extinction. In an attempt to accommodate these observations a new finite threshold model of secondary extinction is proposed which interpolates thin crystals formulas and conventional secondary extinction formulas obtained in the zero threshold limit. (orig.)

  19. Narrative of certitude for uncertainty normalisation regarding biotechnology in international organisations

    OpenAIRE

    Heath , Robert; Proutheau , Stéphanie

    2012-01-01

    International audience; Narrative theory has gained prominence especially as a companion to the social construction of reality. In matters of regulation and normalization, narratives socially and culturally construct relevant contingencies, uncertainties, values, and decisions. Here, decision dynamics pit risk generators, bearers, bearers' advocates, arbiters, researchers and informers as advocates and counter-advocates (Palmlund, 2009). The decision-relevant narrative components (actors, themes, sc...

  20. Am I a Woman? The Normalisation of Woman in US History

    Science.gov (United States)

    Schmidt, Sandra J.

    2012-01-01

    The curriculum of US History has improved substantially in its presentation of women over the 40 years since Trecker's 1971 study of US History textbooks. While studies show increased inclusions, they also suggest that women have not yet claimed their own place in the school curriculum. This paper seeks to better understand the woman who is…

  1. THE INSTITUTION OF ACCOUNTING NORMALISATION IN ROMANIA – HISTORY AND PRESENT

    Directory of Open Access Journals (Sweden)

    Aristita Rotila

    2014-07-01

    Full Text Available The institution of accounting normalization at a national level can essentially be public, private or mixed. On its nature depend the way the accounting norms are accepted or imposed and also the character of these norms, which can be more or less restrictive. The present article is a study of the institution of accounting normalization in Romania from its beginning (when the process of normalizing Romanian accounting began) to the present, following its changes through two stages which have marked the evolution of the country in the second half of the 20th century and the beginning of the 21st century: the stage of socialism, with a centralized economy, and the stage of transition to a market economy, which started right after the 1989 Revolution. Within the post-revolutionary stage, a mixed body was created under the Ministry of Finance as the institution of accounting normalization in Romania; it brings together a wide range of “actors” interested in accounting information and has the role of allowing those actors to become involved in the process of normalization, letting Romanian accounting normalization pass from an exclusively public approach to a mixed one.

  2. Multi-slice helical CT: biggest pitch does not mean dosimetric bargain

    International Nuclear Information System (INIS)

    Cordoliani, Y.S.

    2000-01-01

    This article explains the definition of the pitch, a definition that differs between the manufacturers' point of view and the definition used in the normalisation documents. It is important to distinguish the detection pitch (displacement of the table divided by the thickness of the reconstructed slices) from the acquisition pitch (displacement of the table divided by the product of the number of slices acquired simultaneously and the nominal slice thickness). The latter is the only one that corresponds to the pitch definition still current in the normalisation documents, and it should be the only one used by the manufacturers. As irradiation is inversely proportional to the pitch, the definition of the pitch becomes a matter of radiation protection and explains why different radiation doses can be obtained. (N.C.)
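
    In concrete terms (illustrative numbers, not taken from the article): for a scanner acquiring 4 slices of 2.5 mm with 10 mm table travel per rotation, the acquisition pitch is 1, while reconstructing 5 mm slices gives a 'detection pitch' of 2; quoting the larger number does not reduce the dose.

        # Acquisition pitch vs "detection" pitch for a multi-slice helical scan.
        # Numbers are illustrative; dose scales inversely with the acquisition pitch.
        table_travel_per_rotation = 10.0   # mm
        n_slices = 4                       # slices acquired simultaneously
        nominal_slice_thickness = 2.5      # mm
        reconstructed_thickness = 5.0      # mm

        acquisition_pitch = table_travel_per_rotation / (n_slices * nominal_slice_thickness)
        detection_pitch = table_travel_per_rotation / reconstructed_thickness

        print("acquisition pitch:", acquisition_pitch)   # 1.0 -> governs the dose
        print("detection pitch:  ", detection_pitch)     # 2.0 -> looks like a bargain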

  3. Erythropoietin over-expression protects against diet-induced obesity in mice through increased fat oxidation in muscles

    DEFF Research Database (Denmark)

    Hojman, Pernille; Brolin, Camilla; Gissel, Hanne

    2009-01-01

    ... patients. Thus we applied the EPO over-expression model to investigate the metabolic effect of EPO in vivo. At 12 weeks, EPO expression resulted in a 23% weight reduction (P ...) in obese mice; thus the mice weighed 21.9±0.8 g (control, normal diet), 21.9±1.4 g (EPO, normal diet), 35.3±3.3 g (control, high-fat diet) and 28.8±2.6 g (EPO, high-fat diet). Correspondingly, DXA scanning revealed that this was due to a 28% reduction in adipose tissue mass. The decrease in adipose tissue mass was accompanied by a complete normalisation of fasting insulin levels and glucose tolerance. ... EPO expression at supra-physiological levels has substantial metabolic effects, including protection against diet-induced obesity and normalisation of glucose sensitivity, associated with a shift to increased fat metabolism in the muscles.

  4. The immune response is affected for at least three weeks after extensive surgery for ovarian cancer

    DEFF Research Database (Denmark)

    Brøchner, Anne Craveiro; Mikkelsen, Søren; Hegelund, Iørn

    2016-01-01

    INTRODUCTION: The treatment of women with ovarian cancer in advanced stages consists of extensive surgery followed by chemotherapy initiated three weeks after surgery. In this study, selected immune parameters were investigated to elucidate when the immune system is normalised following the operation. ... interleukin-10 and the activity and total frequency of natural killer cells were measured. RESULTS: Interleukin-6 and interleukin-10 were significantly elevated immediately after the operation and also after 21 days. The total population of natural killer cells and the total activity were reduced. The total ...

  5. Unpolarised transverse momentum dependent distribution and fragmentation functions from SIDIS multiplicities

    International Nuclear Information System (INIS)

    Anselmino, M.; Boglione, M.; Gonzalez, H. J.O.; Melis, S.; Prokudin, A.

    2014-01-01

    In this study, the unpolarised transverse momentum dependent distribution and fragmentation functions are extracted from HERMES and COMPASS experimental measurements of SIDIS multiplicities for charged hadron production. The data are grouped into independent bins of the kinematical variables, in which the TMD factorisation is expected to hold. A simple factorised functional form of the TMDs is adopted, with a Gaussian dependence on the intrinsic transverse momentum, which turns out to be quite adequate in shape. HERMES data do not need any normalisation correction, while fits of the COMPASS data much improve with a y-dependent overall normalisation factor. A comparison of the extracted TMDs with previous EMC and JLab data confirms the adequacy of the simple gaussian distributions. The possible role of the TMD evolution is briefly considered

  6. Computation and Evaluation of Features of Surface Electromyogram to Identify the Force of Muscle Contraction and Muscle Fatigue

    Directory of Open Access Journals (Sweden)

    Sridhar P. Arjunan

    2014-01-01

    Full Text Available The relationship between force of muscle contraction and muscle fatigue with six different features of surface electromyogram (sEMG) was determined by conducting experiments on thirty-five volunteers. The participants performed isometric contractions at 50%, 75%, and 100% of their maximum voluntary contraction (MVC). Six features were considered in this study: normalised spectral index (NSM5), median frequency, root mean square, waveform length, normalised root mean square (NRMS), and increase in synchronization (IIS) index. Analysis of variance (ANOVA) and linear regression analysis were performed to determine the significance of the feature with respect to the three factors: muscle force, muscle fatigue, and subject. The results show that IIS index of sEMG had the highest correlation with muscle fatigue and the relationship was statistically significant (P 0.05).

  7. Computation and evaluation of features of surface electromyogram to identify the force of muscle contraction and muscle fatigue.

    Science.gov (United States)

    Arjunan, Sridhar P; Kumar, Dinesh K; Naik, Ganesh

    2014-01-01

    The relationship between force of muscle contraction and muscle fatigue with six different features of surface electromyogram (sEMG) was determined by conducting experiments on thirty-five volunteers. The participants performed isometric contractions at 50%, 75%, and 100% of their maximum voluntary contraction (MVC). Six features were considered in this study: normalised spectral index (NSM5), median frequency, root mean square, waveform length, normalised root mean square (NRMS), and increase in synchronization (IIS) index. Analysis of variance (ANOVA) and linear regression analysis were performed to determine the significance of the feature with respect to the three factors: muscle force, muscle fatigue, and subject. The results show that IIS index of sEMG had the highest correlation with muscle fatigue and the relationship was statistically significant (P 0.05).
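
    Two of the listed features are straightforward to compute; the sketch below evaluates the root mean square and the median frequency of a synthetic sEMG epoch (made-up sampling rate). The NSM5 and IIS indices follow the papers' own definitions and are not reproduced here.

        # Root mean square and median frequency of a surface EMG epoch.
        import numpy as np
        from scipy.signal import welch

        fs = 1000.0                                    # sampling rate, Hz (assumed)
        rng = np.random.default_rng(5)
        emg = rng.normal(size=int(2 * fs))             # 2 s of synthetic "EMG"
        emg -= emg.mean()

        rms = np.sqrt(np.mean(emg ** 2))

        freqs, psd = welch(emg, fs=fs, nperseg=512)
        cumulative = np.cumsum(psd)
        median_freq = freqs[np.searchsorted(cumulative, 0.5 * cumulative[-1])]

        print(f"RMS = {rms:.3f}, median frequency = {median_freq:.1f} Hz")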

  8. Direct quantification of rare earth element concentrations in natural waters by ICP-MS

    International Nuclear Information System (INIS)

    Lawrence, Michael G.; Greig, Alan; Collerson, Kenneth D.; Kamber, Balz S.

    2006-01-01

    A direct quadrupole ICP-MS technique has been developed for the analysis of the rare earth elements and yttrium in natural waters. The method has been validated by comparison of the results obtained for the river water reference material SLRS-4 with literature values. The detection limit of the technique was investigated by analysis of serial dilutions of SLRS-4 and revealed that single elements can be quantified at single-digit fg/g concentrations. A coherent normalised rare earth pattern was retained at concentrations two orders of magnitude below natural concentrations for SLRS-4, demonstrating the excellent inter-element accuracy and precision of the method. The technique was applied to the analysis of a diluted mid-salinity estuarine sample, which also displayed a coherent normalised rare earth element pattern, yielding the expected distinctive marine characteristics

  9. Fusion And Inference From Multiple And Massive Disparate Distributed Dynamic Data Sets

    Science.gov (United States)

    2017-07-01

    ... computational execution together form a comprehensive, widely-applicable paradigm for statistical graph inference. ... always involve challenging empirical modeling and implementation issues. Our project has propelled the mathematical development, statistical design ... D. J., and Sussman, D. L., "A limit theorem for scaled eigenvectors of random dot product graphs," Sankhya A: Mathematical Statistics and ...

  10. Deformed GOE for systems with a few degrees of freedom in the chaotic regime

    International Nuclear Information System (INIS)

    Hussein, M.S.; Pato, M.P.

    1990-01-01

    New distribution laws for the energy level spacings and the eigenvector amplitudes, appropriate for systems with a few degrees of freedom in the chaotic regime, are derived by conveniently deforming the GOE. The cases of 2X2 and 3X3 matrices are fully worked out. Suggestions concerning the general case of matrices with large dimensions are made. (author)

  11. Deformed GOE for systems with a few degrees of freedom in the chaotic regime

    International Nuclear Information System (INIS)

    Hussein, M.S.; Pato, M.P.

    1990-03-01

    New distribution laws for the energy level spacings and the eigenvector amplitudes, appropriate for systems with a few degrees of freedom in the chaotic regime, are derived by conveniently deforming the GOE. The cases of 2x2 and 3x3 matrices are fully worked out. Suggestions concerning the general case of matrices with large dimensions are made. (author) [pt

  12. On the rational approximation of the bidimensional potentials

    International Nuclear Information System (INIS)

    Niculescu, V.I.R; Catana, D.

    1997-01-01

    In the present letter we introduce a symmetrical bidimensional potential with a Woods-Saxon tail. The potential approximation makes it possible to replace the evaluation of double integrals in the matrix elements by a product of two single integrals. This implies a reduction in the complexity of the Hamiltonian eigenvalue and eigenvector evaluation. Also, the harmonic bidimensional basis simplifies significantly the evaluation of the electric multipole operators. (authors)

  13. Using the Jacobi-Davidson method to obtain the dominant Lambda modes of a nuclear power reactor

    Energy Technology Data Exchange (ETDEWEB)

    Verdu, G. [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain)]. E-mail: gverdu@iqn.upv.es; Ginestar, D. [Departamento de Matematica Aplicada, Universidad Politecnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain); Miro, R. [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain); Vidal, V. [Departamento de Sistemas Informaticos y Computacion, Universidad Politecnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain)

    2005-07-15

    The Jacobi-Davidson method is a modification of the Davidson method that has been shown to be very effective for computing the dominant eigenvalues and the corresponding eigenvectors of a large, sparse matrix. The method has been used to compute the dominant Lambda modes of two configurations of the Cofrentes nuclear power reactor, and it proved to be quite effective, especially for the perturbed configurations.
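
    SciPy does not ship a Jacobi-Davidson solver, but the same kind of dominant eigenpairs of a large sparse matrix can be obtained with its ARPACK (implicitly restarted Arnoldi) interface; the sketch below uses that substitute method on a random sparse matrix that merely stands in for the reactor operator.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# Stand-in for the large, sparse operator whose dominant Lambda modes are sought.
n = 2000
A = sp.random(n, n, density=1e-3, random_state=0, format="csr")

# Largest-magnitude eigenvalues and their eigenvectors (ARPACK, not Jacobi-Davidson).
values, vectors = eigs(A, k=4, which="LM")
print(values)
```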

  14. Direct structural parameter identification by modal test results

    Science.gov (United States)

    Chen, J.-C.; Kuo, C.-P.; Garba, J. A.

    1983-01-01

    A direct identification procedure is proposed to obtain the mass and stiffness matrices from test-measured eigenvalues and eigenvectors. The method is based on matrix perturbation theory, in which the correct mass and stiffness matrices are expanded as the analytical values plus a modification matrix. The simplicity of the procedure enables real-time operation during structural testing.
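
    One common way to make the perturbation idea concrete is the first-order sensitivity relation delta_lambda_i ≈ phi_i^T (ΔK − lambda_i ΔM) phi_i for mass-normalised modes; if ΔK is parameterised by a few element correction factors, those factors can be recovered by least squares from measured eigenvalue shifts. The sketch below is a generic illustration of that relation on an invented three-spring chain, not the specific procedure of the paper.

```python
import numpy as np

def stiffness(k):
    """Stiffness matrix of a fixed-free chain with spring constants k1..k3."""
    k1, k2, k3 = k
    return np.array([[k1 + k2, -k2,      0.0],
                     [-k2,      k2 + k3, -k3],
                     [0.0,     -k3,       k3]])

k_analytical = np.array([1.0, 1.0, 1.0])
k_true = np.array([1.2, 0.9, 1.05])                      # "structure as tested" (illustrative)

lam_a, phi_a = np.linalg.eigh(stiffness(k_analytical))   # analytical modes (M = I, so mass-normalised)
lam_t, _ = np.linalg.eigh(stiffness(k_true))             # "measured" eigenvalues

# Sensitivity of each eigenvalue to each spring constant: d(lambda_i)/d(k_j) = phi_i^T (dK/dk_j) phi_i
dK = [np.zeros((3, 3)) for _ in range(3)]
dK[0][0, 0] = 1.0
dK[1][:2, :2] = [[1.0, -1.0], [-1.0, 1.0]]
dK[2][1:, 1:] = [[1.0, -1.0], [-1.0, 1.0]]
S = np.array([[phi_a[:, i] @ dK[j] @ phi_a[:, i] for j in range(3)] for i in range(3)])

# First-order identification of the stiffness corrections from the eigenvalue shifts.
delta_k = np.linalg.lstsq(S, lam_t - lam_a, rcond=None)[0]
print("identified spring corrections:", delta_k)          # approximately k_true - k_analytical
```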

  15. Coalgebraising subsequential transducers

    NARCIS (Netherlands)

    H.H. Hansen (Helle); J. Adamek; C.A. Kupke (Clemens)

    2008-01-01

    Subsequential transducers generalise both classic deterministic automata and Mealy/Moore type state machines by combining (input) language recognition with transduction. In this paper we show that normalisation and taking differentials of subsequential transducers and their underlying

  16. Using CUDA Technology for Defining the Stiffness Matrix in the Subspace of Eigenvectors

    Directory of Open Access Journals (Sweden)

    Yu. V. Berchun

    2015-01-01

    The aim is to improve the performance of solving a problem of deformable solid mechanics through the use of GPGPU. The paper describes technologies for computing systems that use both a central and a graphics processor, and it motivates the choice of CUDA as the more efficient technology. The paper also analyses methods for determining natural frequencies and mode shapes, i.e. an iteration method in the subspace. The method consists of several stages; the paper considers the most resource-hungry one, which defines the stiffness matrix in the subspace of eigenforms, and gives the mathematical interpretation of this stage. The choice of the GPU as the computing device is justified. The paper presents an algorithm for calculating the stiffness matrix in the subspace of eigenforms that takes the features of the input data into account. The global stiffness matrix is very sparse, and its size can reach tens of millions, so it is represented as a set of stiffness matrices of the individual elements of a model. The paper analyses methods of data representation in the software and selects the best practices for GPU computing. It describes the software implementation that uses CUDA to calculate the stiffness matrix in the subspace of eigenforms. Because of the nature of the input data, the universal matrix-computation libraries (cuSPARSE and cuBLAS) cannot be used to load the GPU; for efficient use of GPU resources, the element stiffness matrices are assembled into block matrices of a special form. The advantages of using shared memory in GPU calculations are described. Moving the computation to the GPU gave a twentyfold increase in performance compared to the multithreaded CPU implementation on a model of middle dimensions (about 2 million degrees of freedom). Such an acceleration of this one stage correspondingly speeds up the determination of natural frequencies and mode shapes by the subspace iteration method.
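
    Independently of the GPU details, the quantity computed at this stage is the projection of the global stiffness matrix onto the current subspace of eigenvector approximations, K_r = Φᵀ K Φ, accumulated element by element so that the global sparse matrix never has to be assembled. A CPU-side NumPy sketch of that accumulation (with invented dimensions and values, not the authors' CUDA kernels) is:

```python
import numpy as np

def reduced_stiffness(element_matrices, element_dofs, phi):
    """Accumulate K_r = Phi^T K Phi from per-element stiffness matrices.

    element_matrices : list of (m, m) dense element stiffness matrices
    element_dofs     : list of length-m index arrays mapping element DOFs to global DOFs
    phi              : (n_global, n_modes) matrix of current eigenvector approximations
    """
    n_modes = phi.shape[1]
    k_r = np.zeros((n_modes, n_modes))
    for k_e, dofs in zip(element_matrices, element_dofs):
        phi_e = phi[dofs, :]                 # rows of Phi touched by this element
        k_r += phi_e.T @ k_e @ phi_e         # element contribution to the projected matrix
    return k_r

if __name__ == "__main__":
    # Tiny two-element, three-DOF demo with two trial vectors (all values illustrative).
    k_e = np.array([[1.0, -1.0], [-1.0, 1.0]])
    elements = [k_e, k_e]
    dofs = [np.array([0, 1]), np.array([1, 2])]
    phi = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
    print(reduced_stiffness(elements, dofs, phi))
```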

  17. Spatially varying coefficient models in real estate: Eigenvector spatial filtering and alternative approaches

    NARCIS (Netherlands)

    Helbich, M; Griffith, D

    2016-01-01

    Real estate policies in urban areas require the recognition of spatial heterogeneity in housing prices to account for local settings. In response to the growing number of spatially varying coefficient models in housing applications, this study evaluated four models in terms of their spatial patterns
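
    One of the approaches named in the title, Moran eigenvector spatial filtering, works by taking eigenvectors of the doubly centred spatial weights matrix M W M (with M = I − 11ᵀ/n) and adding a subset of them to the regression as extra covariates that absorb spatial autocorrelation. The sketch below follows that standard construction under stated assumptions; it is not the specific model comparison carried out in the paper.

```python
import numpy as np

def moran_eigenvectors(W, n_keep=10):
    """Eigenvectors of the doubly centred spatial weights matrix, largest eigenvalues first.

    W is an (n, n) symmetric spatial weights matrix (e.g. contiguity or
    inverse-distance weights); the leading eigenvectors describe the
    strongest positive spatial autocorrelation patterns.
    """
    n = W.shape[0]
    M = np.eye(n) - np.ones((n, n)) / n       # centring projector
    evals, evecs = np.linalg.eigh(M @ W @ M)
    order = np.argsort(evals)[::-1]           # order by descending eigenvalue
    return evecs[:, order[:n_keep]]

# The selected eigenvectors E are then appended to the hedonic design matrix,
# e.g. X_filtered = np.hstack([X, E]), before fitting the price regression.
```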

  18. Conjugacy classes in the Weyl group admitting a regular eigenvector and integrable hierarchies

    International Nuclear Information System (INIS)

    Delduc, F.; Feher, L.

    1994-10-01

    The classification of the integrable hierarchies in the Drinfeld-Sokolov (DS) approach is studied. The DS construction, originally based on the principal Heisenberg subalgebra of an affine Lie algebra, has been recently generalized to arbitrary graded Heisenberg subalgebras. The graded Heisenberg subalgebras of an untwisted loop algebra l(G) are classified by the conjugacy classes in the Weyl group of G, but a complete classification of the hierarchies obtained from generalized DS reductions is still missing. The main result presented here is the complete list of the graded regular elements of l(G) for G a classical Lie algebra or G_2, extending previous results on the gl_n case. (author). 9 refs., 4 tabs

  19. An EEG-Based Biometric System Using Eigenvector Centrality in Resting State Brain Networks

    NARCIS (Netherlands)

    Fraschini, M.; Hillebrand, A.; Demuru, M.; Didaci, L.; Marcialis, G.L.

    2015-01-01

    Recently, there has been a growing interest in the use of brain activity for biometric systems. However, so far these studies have focused mainly on basic features of the electroencephalogram. In this study we propose an approach based on phase synchronization to investigate personal distinctive
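
    Eigenvector centrality, the network feature named in the title, is the leading eigenvector of a non-negative (functional) connectivity matrix: each node's score is proportional to the sum of its neighbours' scores. A small power-iteration sketch on a generic symmetric connectivity matrix follows; the phase-synchronisation pipeline of the study is not reproduced here, and the random matrix is only a stand-in for a real connectivity estimate.

```python
import numpy as np

def eigenvector_centrality(A, tol=1e-10, max_iter=1000):
    """Leading eigenvector of a non-negative connectivity matrix via power iteration."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    for _ in range(max_iter):
        x_new = A @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B = rng.random((8, 8))
    A = (B + B.T) / 2          # symmetric, non-negative stand-in for a connectivity matrix
    np.fill_diagonal(A, 0.0)
    print(eigenvector_centrality(A))
```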

  20. Eigenvector localization as a tool to study small communities in online social networks

    Czech Academy of Sciences Publication Activity Database

    Slanina, František; Konopásek, Z.

    2010-01-01

    Vol. 13, No. 6 (2010), pp. 699-723 ISSN 0219-5259 R&D Projects: GA MŠk OC09078 Institutional research plan: CEZ:AV0Z10100520 Keywords: networks * localization Subject RIV: BE - Theoretical Physics Impact factor: 1.213, year: 2010 http://www.worldscinet.com/acs/13/1306/S0219525910002840.html