Linear Algebraic Method for Non-Linear Map Analysis
International Nuclear Information System (INIS)
Yu, L.; Nash, B.
2009-01-01
We present a newly developed method for analyzing non-linear dynamics problems such as the Hénon map using matrix-analysis techniques from linear algebra. Taking the Hénon map as an example, we analyze the spectral structure, the tune-amplitude dependence, and the variation of tune and amplitude during particle motion using Jordan decomposition, a standard tool of linear algebra.
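The linearized analysis described above can be sketched numerically. The following minimal example uses the classic dissipative Hénon form with illustrative parameter values (not the accelerator map from the paper), and a plain eigendecomposition stands in for the paper's Jordan-decomposition machinery:

```python
import numpy as np

# Classic Henon map; parameter values are illustrative, not from the paper
A, B = 1.4, 0.3

def henon(x, y):
    return 1.0 - A * x**2 + y, B * x

# Fixed point: x* = 1 - A x*^2 + B x*  ->  A x^2 + (1 - B) x - 1 = 0
xf = (-(1 - B) + np.sqrt((1 - B)**2 + 4 * A)) / (2 * A)
yf = B * xf
x_next, y_next = henon(xf, yf)   # should reproduce (xf, yf)

# Jacobian of the map at the fixed point; its eigenvalues govern the
# local linearized dynamics, the starting point for any spectral analysis
J = np.array([[-2 * A * xf, 1.0],
              [B, 0.0]])
eigvals = np.linalg.eigvals(J)
```

The eigenvalues of `J` determine local stability; for the area-preserving accelerator variant the analogous eigenvalues encode the tune of small-amplitude motion.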
MIDAS: Regionally linear multivariate discriminative statistical mapping.
Varol, Erdem; Sotiras, Aristeidis; Davatzikos, Christos
2018-07-01
statistical significance of the derived statistic by analytically approximating its null distribution, without the need for computationally expensive permutation tests. The proposed framework was extensively validated using simulated atrophy in structural magnetic resonance imaging (MRI) and further tested using data from a task-based functional MRI study as well as a structural MRI study of cognitive performance. The performance of the proposed framework was evaluated against standard voxel-wise general linear models and other information mapping methods. The experimental results showed that MIDAS achieves higher sensitivity and specificity in detecting group differences. Together, our results demonstrate the potential of the proposed approach to efficiently map effects of interest in both structural and functional data. Copyright © 2018. Published by Elsevier Inc.
Quad-copter UAV BLDC Motor Control: Linear v/s non-linear control maps
Directory of Open Access Journals (Sweden)
Deep Parikh
2015-08-01
This paper presents investigations and a comparison of linear versus non-linear static motor-control maps for the speed control of the Brushless DC (BLDC) motors used in quad-copter UAVs (Unmanned Aerial Vehicles). The motor-control map considered here is the inverse of the static map relating motor-speed output to motor-voltage input for a typical out-runner type BLDC motor. Traditionally, quad-copter BLDC motor speed control uses a simple linear motor-control map defined by the motor-constant specification. However, practical BLDC motors show non-linear characteristics, particularly when operated across the wide speed range commonly required in quad-copter flight operations. The investigations cover simulation-based and experimental studies of BLDC motor speed control systems for an available quad-copter vehicle. First, the non-linear map relating rotor RPM to motor voltage for a quad-copter BLDC motor is obtained experimentally using an optical speed encoder. The performance of the linear and non-linear motor-control maps for speed control is then compared. The study also covers time responses for standard test input signals, e.g. step, ramp and pulse inputs, applied as reference speed commands. A simple 2-degree-of-freedom test bed was developed in our laboratory to support the open-loop and closed-loop experimental investigations. The non-linear motor-control map is found to give better BLDC motor speed-tracking performance, thereby helping to achieve better quad-copter roll-angle attitude control.
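A non-linear motor-control map of the kind described, i.e. a table-lookup inverse of the static speed-voltage characteristic, can be sketched as follows. The quadratic characteristic and the motor constant `k_v` are hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical static speed-vs-voltage characteristic of an out-runner
# BLDC motor (monotone but non-linear); values are illustrative only.
voltage = np.linspace(0.0, 10.0, 101)          # motor input voltage [V]
speed = 100.0 * voltage + 5.0 * voltage**2     # steady-state speed [RPM]

def control_voltage(target_rpm):
    """Non-linear motor-control map: invert the static characteristic
    by table lookup (np.interp requires `speed` to be increasing)."""
    return float(np.interp(target_rpm, speed, voltage))

def linear_control_voltage(target_rpm, k_v=150.0):
    """Naive linear map using a single motor constant (RPM per volt)."""
    return target_rpm / k_v

v = control_voltage(480.0)   # 480 RPM is reached at exactly 4 V here
```

Comparing `control_voltage` with `linear_control_voltage` over the full speed range reproduces, in miniature, the linear-vs-non-linear comparison the paper investigates.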
Mapping Intermediality in Performance
2010-01-01
Mapping Intermediality in Performance approaches the question of intermediality in relation to performance (especially theatre) from five different angles: performativity and the body; time and space; digital culture and posthumanism; networks; and pedagogy and praxis. In this engaging
Positivity of linear maps under tensor powers
Energy Technology Data Exchange (ETDEWEB)
Müller-Hermes, Alexander, E-mail: muellerh@ma.tum.de; Wolf, Michael M., E-mail: m.wolf@tum.de [Zentrum Mathematik, Technische Universität München, 85748 Garching (Germany); Reeb, David, E-mail: reeb.qit@gmail.com [Zentrum Mathematik, Technische Universität München, 85748 Garching (Germany); Institute for Theoretical Physics, Leibniz Universität Hannover, 30167 Hannover (Germany)
2016-01-15
We investigate linear maps between matrix algebras that remain positive under tensor powers, i.e., under tensoring with n copies of themselves. Completely positive and completely co-positive maps are trivial examples of this kind. We show that for every n ∈ ℕ, there exist non-trivial maps with this property and that for two-dimensional Hilbert spaces there is no non-trivial map for which this holds for all n. For higher dimensions, we reduce the existence question of such non-trivial “tensor-stable positive maps” to a one-parameter family of maps and show that an affirmative answer would imply the existence of non-positive partial transpose bound entanglement. As an application, we show that any tensor-stable positive map that is not completely positive yields an upper bound on the quantum channel capacity, which for the transposition map gives the well-known cb-norm bound. We, furthermore, show that the latter is an upper bound even for the local operations and classical communications-assisted quantum capacity, and that moreover it is a strong converse rate for this task.
Positivity of linear maps under tensor powers
International Nuclear Information System (INIS)
Müller-Hermes, Alexander; Wolf, Michael M.; Reeb, David
2016-01-01
We investigate linear maps between matrix algebras that remain positive under tensor powers, i.e., under tensoring with n copies of themselves. Completely positive and completely co-positive maps are trivial examples of this kind. We show that for every n ∈ ℕ, there exist non-trivial maps with this property and that for two-dimensional Hilbert spaces there is no non-trivial map for which this holds for all n. For higher dimensions, we reduce the existence question of such non-trivial “tensor-stable positive maps” to a one-parameter family of maps and show that an affirmative answer would imply the existence of non-positive partial transpose bound entanglement. As an application, we show that any tensor-stable positive map that is not completely positive yields an upper bound on the quantum channel capacity, which for the transposition map gives the well-known cb-norm bound. We, furthermore, show that the latter is an upper bound even for the local operations and classical communications-assisted quantum capacity, and that moreover it is a strong converse rate for this task
Optimized multiple linear mappings for single image super-resolution
Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo
2017-12-01
Learning piecewise linear regression has been recognized as an effective approach to example-based single image super-resolution (SR). In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions using the reconstruction error as the metric. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m-nearest neighbors in the training set. Thorough experiments carried out on six publicly available datasets demonstrate that the proposed SR method yields high-quality images with finer details and sharper edges under both quantitative and perceptual image quality assessments.
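The idea of multiple linear mappings with error-based regressor selection can be illustrated on a 1-D toy problem. This is a didactic sketch, not the authors' EM pipeline: plain 2-means clustering, one least-squares regressor per cluster, and nearest-centroid selection standing in for the m-nearest-neighbor rule:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 400)
y = np.abs(x)                      # non-linear target: one global linear
                                   # map cannot fit it, two local ones can

# Simple 1-D 2-means clustering of the inputs
centers = np.array([-0.5, 0.5])
for _ in range(20):
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    centers = np.array([x[labels == k].mean() for k in (0, 1)])

# One least-squares linear regressor (slope, intercept) per cluster
coefs = []
for k in (0, 1):
    Xk = np.column_stack([x[labels == k], np.ones((labels == k).sum())])
    coefs.append(np.linalg.lstsq(Xk, y[labels == k], rcond=None)[0])

def predict(x_new):
    # pick the regressor whose cluster centre is nearest (cf. the
    # nearest-neighbour selection rule in the abstract, with m = 1)
    k = int(np.argmin(np.abs(centers - x_new)))
    a, b = coefs[k]
    return a * x_new + b

err = max(abs(predict(t) - abs(t)) for t in (-0.9, -0.2, 0.3, 0.8))
```

Because each half of |x| is exactly linear, two local regressors fit where any single global linear mapping must fail; this is the core intuition behind piecewise linear SR regression.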
Mappings with closed range and finite dimensional linear spaces
International Nuclear Information System (INIS)
Iyahen, S.O.
1984-09-01
This paper considers two settings, each involving continuous linear mappings of linear topological spaces. In the first setting, the domain space is fixed while the range space varies over a class of linear topological spaces; in the second, the range space is fixed while the domain space varies similarly. The question is when the requirement that the mappings have closed range implies that the domain or range space is finite dimensional. Positive results are obtained for metrizable spaces. (author)
Linear models for joint association and linkage QTL mapping
Directory of Open Access Journals (Sweden)
Fernando Rohan L
2009-09-01
Background: Population-wide linkage disequilibrium and within-family linkage are commonly used for QTL mapping and marker-assisted selection. Combining both yields more robust and accurate QTL locations, but the models proposed so far have been single-marker, complex in practice, or tied to a particular family structure. Results: We present linear model theory for the additive effects of QTL alleles in any member of a general pedigree, conditional on the observed markers and pedigree, accounting for possible linkage disequilibrium between QTLs and markers. The model is based on association analysis in the founders; the additive effect of the QTLs transmitted to the descendants is then a weighted average (by the probabilities of transmission) of the substitution effects of the founders' haplotypes. The model allows for incomplete QTL-marker linkage disequilibrium in the founders. Two submodels are presented: a simple, easy-to-implement Haley-Knott type regression for half-sib families, and a general mixed (variance component) model for general pedigrees. The model can use information from all markers. The performance of the regression method is compared by simulation with a more complex IBD method by Meuwissen and Goddard. Numerical examples are provided. Conclusion: The linear model theory provides a useful framework for QTL mapping with dense marker maps. Results show similar accuracies, but a bias of the IBD method towards the center of the region. Computations for the linear regression model are extremely simple, in contrast with IBD methods. Extensions of the model to genomic selection and multi-QTL mapping are straightforward.
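A Haley-Knott type regression of the kind mentioned for half-sib families reduces, in its simplest form, to regressing phenotypes on QTL transmission probabilities. A minimal noise-free sketch with simulated probabilities and hypothetical effect sizes (not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Probability that each half-sib inherited the sire's Q allele at the
# putative QTL, as would be inferred from flanking markers (simulated)
p_q = rng.uniform(0.0, 1.0, 50)

# Haley-Knott-type regression: phenotype on expected QTL dosage.
# Illustrative true model: mean 10, allele substitution effect 2.
mu_true, alpha_true = 10.0, 2.0
y = mu_true + alpha_true * p_q          # noise-free for a clean check

X = np.column_stack([np.ones_like(p_q), p_q])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
mu_hat, alpha_hat = beta
```

With residual noise added, the same least-squares fit yields the usual estimate of the allele substitution effect and its test statistic.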
A Parallel Encryption Algorithm Based on Piecewise Linear Chaotic Map
Directory of Open Access Journals (Sweden)
Xizhong Wang
2013-01-01
We introduce a parallel chaos-based encryption algorithm that takes advantage of multicore processors. The chaotic cryptosystem is generated by the piecewise linear chaotic map (PWLCM). The parallel algorithm is designed around a master/slave communication model using the Message Passing Interface (MPI), and is suitable not only for multicore processors but also for single-processor architectures. Experimental results show that the chaos-based cryptosystem possesses good statistical properties, and that the parallel algorithm performs much better than the serial one, making it useful for encrypting and decrypting large files or multimedia.
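The serial core of such a scheme can be sketched in a few lines: a PWLCM orbit drives a keystream that is XOR-ed with the data. This uses a textbook form of the map with illustrative key values; the paper's MPI master/slave parallelization is omitted:

```python
def pwlcm(x, p):
    """Piecewise linear chaotic map on (0, 1) with control parameter
    p in (0, 0.5); a standard textbook form, not the paper's exact code."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)           # symmetric upper half

def keystream(n, x0=0.2718, p=0.3141):
    """Derive n pseudo-random bytes from the PWLCM orbit."""
    out, x = [], x0
    for _ in range(n):
        x = pwlcm(x, p)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_crypt(data: bytes, x0=0.2718, p=0.3141) -> bytes:
    # XOR stream cipher: encryption and decryption are the same operation
    ks = keystream(len(data), x0, p)
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"parallel chaos demo"
ct = xor_crypt(msg)
pt = xor_crypt(ct)
```

Because XOR is an involution, the same function encrypts and decrypts; in a parallel version, independent blocks of the file could be dispatched to MPI slaves as the abstract describes.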
Multimodal Image Alignment via Linear Mapping between Feature Modalities.
Jiang, Yanyun; Zheng, Yuanjie; Hou, Sujuan; Chang, Yuchou; Gee, James
2017-01-01
We propose a novel landmark-matching method for aligning multimodal images, accomplished solely by solving for a linear mapping between different feature modalities. This linear mapping yields a new measure of similarity between images captured from different modalities. In addition, our method solves for this linear mapping and the landmark correspondences simultaneously by minimizing a convex quadratic function. Our method can estimate complex relationships between image modalities and nonlinear, nonrigid spatial transformations even in the presence of heavy noise, as shown in experiments carried out on a variety of image modalities.
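Setting aside the correspondence part of the joint problem, the core linear-mapping step can be sketched as an ordinary least-squares problem between paired feature matrices (synthetic data; the assumption of known correspondences is ours, made for simplicity):

```python
import numpy as np

rng = np.random.default_rng(2)

# Landmark features in modality 1 (n landmarks x d1 dims) and a ground
# truth linear mapping into modality 2's feature space (illustrative)
F1 = rng.normal(size=(40, 5))
M_true = rng.normal(size=(5, 3))
F2 = F1 @ M_true

# Recover the cross-modality linear mapping by least squares; the
# abstract's convex formulation additionally solves for correspondences
M_hat, *_ = np.linalg.lstsq(F1, F2, rcond=None)

# The mapping induces a cross-modality similarity: compare F1 @ M_hat
# with F2 instead of comparing raw features from different modalities
residual = np.linalg.norm(F1 @ M_hat - F2)
```

A small `residual` indicates the two feature modalities are well related by a linear map, which is exactly the similarity signal the method exploits for alignment.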
Linear Mapping of Numbers onto Space Requires Attention
Anobile, Giovanni; Cicchini, Guido Marco; Burr, David C.
2012-01-01
Mapping of number onto space is fundamental to mathematics and measurement. Previous research suggests that while typical adults with mathematical schooling map numbers veridically onto a linear scale, pre-school children and adults without formal mathematics training, as well as individuals with dyscalculia, show strong compressive,…
Schwarz maps of algebraic linear ordinary differential equations
Sanabria Malagón, Camilo
2017-12-01
A linear ordinary differential equation is called algebraic if all its solutions are algebraic over its field of definition. In this paper we solve the problem of finding closed-form solutions to algebraic linear ordinary differential equations in terms of standard equations. Furthermore, we obtain a method to compute all algebraic linear ordinary differential equations with rational coefficients by studying their associated Schwarz maps through Picard-Vessiot theory.
Entanglement witnesses arising from exposed positive linear maps
Ha, Kil-Chan; Kye, Seung-Hyeok
2011-01-01
We consider entanglement witnesses arising from positive linear maps which generate exposed extremal rays. We show that every entangled state can be detected by one of these witnesses, and that each witness detects a unique set of entangled states among those. Therefore, they provide in a sense a minimal set of witnesses detecting all entanglement. Furthermore, if those maps are indecomposable, then they detect large classes of entanglement with positive partial transposes which have nonempty relative int...
Performance test of 100 W linear compressor
Energy Technology Data Exchange (ETDEWEB)
Ko, J; Ko, D. Y.; Park, S. J.; Kim, H. B.; Hong, Y. J.; Yeom, H. K. [Korea Institute of Machinery and Materials, Daejeon(Korea, Republic of)
2013-09-15
In this paper, we present test results for a developed 100 W class linear compressor for a Stirling-type pulse tube refrigerator. The fabricated linear compressor has a dual-opposed configuration, a free piston, and a moving-magnet linear motor. Power transfer, efficiency, and the required pressure waveform are predicted from the designed and measured specifications. In the experiments, a room-temperature test with a flow impedance is conducted to evaluate the performance of the developed linear compressor. The flow impedance is applied to the compressor using a metering valve for flow resistance, an inertance tube for flow inertance, and buffer volumes for flow compliance. Operating parameters such as input voltage, current, piston displacement, and pressure wave are measured over a range of operating frequencies at a fixed input current level. The dynamics and performance of the linear compressor under varying flow impedance are discussed in light of the measured results. The developed linear compressor shows 124 W of input power, 86% motor efficiency, and 60% compressor efficiency at its resonant operating condition.
Exact treatment of mode locking for a piecewise linear map
International Nuclear Information System (INIS)
Ding, E.J.; Hemmer, P.C.
1987-01-01
A piecewise linear map with one discontinuity is studied by analytic means in the two-dimensional parameter space. When the slope of the map is less than unity, periodic orbits are present, and we give their precise symbolic-dynamics classification. The location of the periodic domains in parameter space is given by closed expressions. The winding number forms a devil's terrace, a two-dimensional function whose cross sections are complete devil's staircases. In such a cross section, the complement of the periodic intervals is a Cantor set of dimension D = 0.
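The mode locking for slopes below unity can be observed numerically: orbits of a contracting piecewise linear circle map settle onto an attracting cycle, and the winding number is the number of wraps per period. A minimal sketch with illustrative parameters (not the paper's exact normal form):

```python
# Contracting piecewise linear circle map x -> (a*x + b) mod 1, 0 < a < 1:
# orbits converge to an attracting periodic cycle (mode locking), and the
# winding number is (wraps per period) / period.
def cycle_and_winding(a, b, x0=0.0, transient=2000, tol=1e-9):
    x = x0
    for _ in range(transient):          # settle onto the attracting cycle
        x = (a * x + b) % 1.0
    start, period, wraps = x, 0, 0
    while True:
        y = a * x + b
        wraps += int(y)                 # count passages through 1 (wraps)
        x = y % 1.0
        period += 1
        if abs(x - start) < tol:
            return period, wraps

period, wraps = cycle_and_winding(0.5, 0.6)
```

With a = 0.5 and b = 0.6 the attracting cycle has period 3 with one wrap, giving winding number 1/3: one plateau of the devil's staircase. Sweeping b at fixed a traces out the full staircase.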
LPmerge: an R package for merging genetic maps by linear programming.
Endelman, Jeffrey B; Plomion, Christophe
2014-06-01
Consensus genetic maps constructed from multiple populations are an important resource for both basic and applied research, including genome-wide association analysis, genome sequence assembly and studies of evolution. The LPmerge software uses linear programming to efficiently minimize the mean absolute error between the consensus map and the linkage maps from each population. This minimization is performed subject to linear inequality constraints that ensure the ordering of the markers in the linkage maps is preserved. When marker order is inconsistent between linkage maps, a minimum set of ordinal constraints is deleted to resolve the conflicts. LPmerge is on CRAN at http://cran.r-project.org/web/packages/LPmerge. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
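The LP formulation can be sketched on a toy example: consensus positions minimize the summed absolute error to each input map, subject to marker-order constraints. This uses scipy.optimize.linprog and hypothetical two-map data; LPmerge itself is an R package with additional machinery for resolving order conflicts:

```python
import numpy as np
from scipy.optimize import linprog

# Toy consensus of two linkage maps (positions in cM); markers share an
# order here, so no ordinal constraints need to be deleted.
maps = [{"A": 0.0, "B": 10.0, "C": 20.0},
        {"A": 0.0, "B": 12.0, "C": 18.0}]
markers = ["A", "B", "C"]
n, m = len(markers), len(maps)

# Variables: n consensus positions c_j, then n*m absolute errors e_ij.
# Minimize sum(e_ij) s.t. e_ij >= |c_j - pos_ij| and c_1 <= c_2 <= c_3.
cost = np.concatenate([np.zeros(n), np.ones(n * m)])
A_ub, b_ub = [], []
for i, mp in enumerate(maps):
    for j, mk in enumerate(markers):
        row = np.zeros(n + n * m)
        row[j], row[n + i * n + j] = 1.0, -1.0    #  c_j - e_ij <= pos
        A_ub.append(row); b_ub.append(mp[mk])
        row = np.zeros(n + n * m)
        row[j], row[n + i * n + j] = -1.0, -1.0   # -c_j - e_ij <= -pos
        A_ub.append(row); b_ub.append(-mp[mk])
for j in range(n - 1):                            # marker order preserved
    row = np.zeros(n + n * m)
    row[j], row[j + 1] = 1.0, -1.0                #  c_j - c_{j+1} <= 0
    A_ub.append(row); b_ub.append(0.0)

res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n + [(0, None)] * (n * m))
consensus = res.x[:n]
```

The optimum places each consensus marker between its two input positions (total absolute error 4 cM here), while the inequality rows guarantee the merged order never violates the input orders.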
Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan
2018-04-01
Owing to the limited spatial resolution of imaging sensors and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary mixed pixels, ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. First, the fraction value of each class is obtained by spectral unmixing. Second, linear subpixel features are pre-determined based on the hyperspectral characteristics, and the remaining mixed pixels are detected based on maximum linearization index analysis. The classes of linear subpixels are determined using template matching. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.
Lyapunov exponent and topological entropy plateaus in piecewise linear maps
International Nuclear Information System (INIS)
Botella-Soler, V; Oteo, J A; Ros, J; Glendinning, P
2013-01-01
We consider a two-parameter family of piecewise linear maps in which the moduli of the two slopes take different values. We provide numerical evidence of the existence of parameter regions in which the Lyapunov exponent and the topological entropy remain constant. An analytical proof of this phenomenon is also given for certain cases. Surprisingly, however, the systems with this property are not conjugate, as we prove using kneading theory. (paper)
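Numerically, the Lyapunov exponent of a piecewise linear map is the orbit average of log|slope|. A sketch for the skew tent map, a standard two-slope example whose exponent is known in closed form (our choice of example, not necessarily the family studied in the paper):

```python
import math

def skew_tent(x, p):
    # Piecewise linear map with slopes 1/p and -1/(1-p): an illustrative
    # stand-in for a two-slope family with different slope moduli
    return x / p if x < p else (1.0 - x) / (1.0 - p)

def lyapunov(p, x0=0.123456789, n=200000):
    """Orbit average of log|f'(x)| along a long trajectory."""
    x, acc = x0, 0.0
    for _ in range(n):
        acc += math.log(1.0 / p if x < p else 1.0 / (1.0 - p))
        x = skew_tent(x, p)
    return acc / n

p = 0.3
num = lyapunov(p)
# The skew tent map preserves Lebesgue measure, so analytically
# lambda = p*log(1/p) + (1-p)*log(1/(1-p))
ana = p * math.log(1 / p) + (1 - p) * math.log(1 / (1 - p))
```

Scanning such numerical estimates over the two slope parameters is how the plateaus described in the abstract would show up in practice.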
2D discontinuous piecewise linear map: Emergence of fashion cycles.
Gardini, L; Sushko, I; Matsuyama, K
2018-05-01
We consider a discrete-time version of the continuous-time fashion cycle model introduced in Matsuyama, 1992. Its dynamics are defined by a 2D discontinuous piecewise linear map depending on three parameters. In the parameter space of the map, periodicity regions associated with attracting cycles of different periods are organized in period-adding and period-incrementing bifurcation structures. The boundaries of all the periodicity regions, related to border collision bifurcations, are obtained analytically in explicit form. We show the existence of several partially overlapping period-incrementing structures, which is a novelty for the considered class of maps. Moreover, we show that as the time delay in the discrete-time formulation of the model shrinks to zero, the number of period-incrementing structures tends to infinity and the dynamics of the discrete-time fashion cycle model converge to those of the continuous-time model.
Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.
Choi, Jae-Seok; Kim, Munchurl
2017-03-01
Super-resolution (SR) has become more vital because of its capability to generate high-quality ultra-high-definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to implement for up-scaling full-high-definition inputs into UHD-resolution images. Our previous super-interpolation (SI) method showed a good compromise between peak signal-to-noise ratio (PSNR) performance and computational complexity. However, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM); we therefore call it GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch by a global regressor. The local linear mappings are learned cluster-wise in an off-line training phase. The main contribution of this paper is as follows: previous linear-mapping-based SR methods, including SI, applied only one simple yet coarse linear mapping to each patch to reconstruct its HR version. In contrast, for each LR input patch, GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is chosen according to the local properties of the current LR patch. It can therefore better approximate nonlinear LR-to-HR mappings for HR patches with complex texture. Experimental results show that the proposed GLM-SI method outperforms most of the state-of-the-art methods, and shows comparable PSNR performance with much lower
Linear response formula for piecewise expanding unimodal maps
International Nuclear Information System (INIS)
Baladi, Viviane; Smania, Daniel
2008-01-01
The average R(t) = ∫φ dμ_t of a smooth function φ with respect to the SRB measure μ_t of a smooth one-parameter family f_t of piecewise expanding interval maps is not always Lipschitz (Baladi 2007 Commun. Math. Phys. 275 839-59, Mazzolena 2007 Master's Thesis Rome 2, Tor Vergata). We prove that if f_t is tangent to the topological class of f, and if ∂_t f_t|_{t=0} = X ∘ f, then R(t) is differentiable at zero, and R'(0) coincides with the resummation proposed (Baladi 2007) of the (a priori divergent) series given by Ruelle's conjecture. In fact, we show that t ↦ μ_t is differentiable within Radon measures. Linear response is violated if and only if f_t is transversal to the topological class of f.
Voltage and pace-capture mapping of linear ablation lesions overestimates chronic ablation gap size.
O'Neill, Louisa; Harrison, James; Chubb, Henry; Whitaker, John; Mukherjee, Rahul K; Bloch, Lars Ølgaard; Andersen, Niels Peter; Dam, Høgni; Jensen, Henrik K; Niederer, Steven; Wright, Matthew; O'Neill, Mark; Williams, Steven E
2018-04-26
Conducting gaps in lesion sets are a major reason for failure of ablation procedures. Voltage mapping and pace-capture have been proposed for intra-procedural identification of gaps. We aimed to compare gap size measured acutely and chronically post-ablation to macroscopic gap size in a porcine model. Intercaval linear ablation was performed in eight Göttingen minipigs with a deliberate gap of ∼5 mm left in the ablation line. Gap size was measured by interpolating ablation contact force values between ablation tags and thresholding at a low force cut-off of 5 g. Bipolar voltage mapping and pace-capture mapping along the length of the line were performed immediately, and at 2 months, post-ablation. Animals were euthanized and gap sizes were measured macroscopically. Voltage thresholds to define scar were determined by receiver operating characteristic analysis, and gap sizes were compared across voltage, pace-capture, and ablation contact force maps. All modalities overestimated chronic gap size: by 1.4 ± 2.0 mm (ablation contact force map), 5.1 ± 3.4 mm (pace-capture), and 9.5 ± 3.8 mm (voltage mapping). Errors in ablation contact force map gap measurements were significantly smaller than for voltage mapping (P = 0.003, Tukey's multiple comparisons test). Chronically, voltage mapping and pace-capture mapping overestimated macroscopic gap size by 11.9 ± 3.7 and 9.8 ± 3.5 mm, respectively. Bipolar voltage and pace-capture mapping overestimate the size of chronic gaps in linear ablation lesions. The most accurate estimate of chronic gap size was obtained by analyzing catheter-myocardium contact force during ablation.
An Improved Piecewise Linear Chaotic Map Based Image Encryption Algorithm
Directory of Open Access Journals (Sweden)
Yuping Hu
2014-01-01
An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Owing to the sensitivity to initial key values and system parameters, and the ergodicity of the chaotic system, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are processed not in index order but alternately from the beginning and the end. Cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme achieves good encryption results and that its key space is large enough to resist brute-force attack.
Breaking the continuity of a piecewise linear map
Directory of Open Access Journals (Sweden)
Schenke Björn
2012-08-01
Knowledge about the behavior of discontinuous piecewise-linear maps is important for a wide range of applications. An efficient way to investigate the bifurcation structure in the 2D parameter spaces of such maps is to detect specific codimension-2 bifurcation points, called organizing centers, and to describe the bifurcation structure in their neighborhood. In this work, we present the organizing centers of the 1D discontinuous piecewise-linear map in generic form, which can be used as a normal form for these bifurcations in other 1D discontinuous maps with one discontinuity. These organizing centers appear when the continuity of the system function is broken at a fixed point. The type of an organizing center depends on the slopes of the piecewise-linear map. The organizing centers that occur when the slopes have absolute value smaller than one were described in previous works, so we concentrate on the organizing centers that occur when one or both slopes have absolute value larger than one. In doing so, we also show that the behavior near each organizing center can be explained using four basic bifurcation scenarios: the period incrementing and period adding scenarios in the periodic domain, and the bandcount incrementing and bandcount adding scenarios in the chaotic domain.
Linear response formula for piecewise expanding unimodal maps
Baladi, Viviane; Smania, Daniel
2008-04-01
The average R(t)=\\int \\varphi\\,\\rmd \\mu_t of a smooth function phiv with respect to the SRB measure μt of a smooth one-parameter family ft of piecewise expanding interval maps is not always Lipschitz (Baladi 2007 Commun. Math. Phys. 275 839-59, Mazzolena 2007 Master's Thesis Rome 2, Tor Vergata). We prove that if ft is tangent to the topological class of f, and if ∂t ft|t = 0 = X circle f, then R(t) is differentiable at zero, and R'(0) coincides with the resummation proposed (Baladi 2007) of the (a priori divergent) series \\sum_{n=0}^\\infty \\int X(y) \\partial_y (\\varphi \\circ f^n)(y)\\,\\rmd \\mu_0(y) given by Ruelle's conjecture. In fact, we show that t map μt is differentiable within Radon measures. Linear response is violated if and only if ft is transversal to the topological class of f.
PROPERTIES OF INTERSTELLAR TURBULENCE FROM GRADIENTS OF LINEAR POLARIZATION MAPS
International Nuclear Information System (INIS)
Burkhart, Blakesley; Lazarian, A.; Gaensler, B. M.
2012-01-01
Faraday rotation of linearly polarized radio signals provides a very sensitive probe of fluctuations in the interstellar magnetic field and ionized gas density resulting from magnetohydrodynamic (MHD) turbulence. We used a set of statistical tools to analyze images of the spatial gradient of linearly polarized radio emission (|∇P|) for both observational data from a test image of the Southern Galactic Plane Survey (SGPS) and isothermal three-dimensional simulations of MHD turbulence. Visually, in both observations and simulations, a complex network of filamentary structures is seen. Our analysis shows that the filaments in |∇P| can be produced both by interacting shocks and random fluctuations characterizing the non-differentiable field of MHD turbulence. The latter dominates for subsonic turbulence, while the former is only present in supersonic turbulence. We show that supersonic and subsonic turbulence exhibit different distributions as well as different morphologies in the maps of |∇P|. Particularly, filaments produced by shocks show a characteristic 'double jump' profile at the sites of shock fronts resulting from delta function-like increases in the density and/or magnetic field, while those produced by subsonic turbulence show a single jump profile. In order to quantitatively characterize these differences, we use the topology tool known as the genus curve as well as the probability distribution function moments of the image distribution. We find that higher values for the moments correspond to cases of |∇P| with larger sonic Mach numbers. The genus analysis of the supersonic simulations of |∇P| reveals a 'swiss cheese' topology, while the subsonic cases have characteristics of a 'clump' topology. Based on the analysis of the genus and the higher order moments, the SGPS test region data have a distribution and morphology that match subsonic- to transonic-type turbulence, which confirms what is now expected for the warm ionized medium.
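The gradient-magnitude and moment computations are straightforward to sketch with NumPy. A synthetic planar "polarization map" is used here so that the expected |∇P| is known exactly; the genus-curve topology analysis is beyond this snippet:

```python
import numpy as np

# |grad P| for a synthetic polarized-emission map.  For the plane
# P = x + 2y the gradient magnitude is sqrt(5) everywhere, which makes
# np.gradient's output easy to check (illustrative, not SGPS data).
y_idx, x_idx = np.mgrid[0:64, 0:64]
P = x_idx + 2.0 * y_idx

gy, gx = np.gradient(P)            # row (y) and column (x) derivatives
grad_mag = np.hypot(gx, gy)        # the |grad P| map

# Higher-order moments of the |grad P| distribution, used in the paper
# to separate subsonic from supersonic turbulence regimes
mean = grad_mag.mean()
std = grad_mag.std()
skewness = np.mean(((grad_mag - mean) / std) ** 3) if std > 0 else 0.0
```

On a real (turbulent) map, `grad_mag` would show the filamentary network described above, and its skewness and kurtosis would grow with the sonic Mach number.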
PROPERTIES OF INTERSTELLAR TURBULENCE FROM GRADIENTS OF LINEAR POLARIZATION MAPS
Energy Technology Data Exchange (ETDEWEB)
Burkhart, Blakesley; Lazarian, A. [Astronomy Department, University of Wisconsin, Madison, 475 N. Charter St., WI 53711 (United States); Gaensler, B. M. [Sydney Institute for Astronomy, School of Physics, University of Sydney, NSW 2006 (Australia)
2012-04-20
Faraday rotation of linearly polarized radio signals provides a very sensitive probe of fluctuations in the interstellar magnetic field and ionized gas density resulting from magnetohydrodynamic (MHD) turbulence. We used a set of statistical tools to analyze images of the spatial gradient of linearly polarized radio emission (|∇P|) for both observational data from a test image of the Southern Galactic Plane Survey (SGPS) and isothermal three-dimensional simulations of MHD turbulence. Visually, in both observations and simulations, a complex network of filamentary structures is seen. Our analysis shows that the filaments in |∇P| can be produced both by interacting shocks and random fluctuations characterizing the non-differentiable field of MHD turbulence. The latter dominates for subsonic turbulence, while the former is only present in supersonic turbulence. We show that supersonic and subsonic turbulence exhibit different distributions as well as different morphologies in the maps of |∇P|. Particularly, filaments produced by shocks show a characteristic 'double jump' profile at the sites of shock fronts resulting from delta function-like increases in the density and/or magnetic field, while those produced by subsonic turbulence show a single jump profile. In order to quantitatively characterize these differences, we use the topology tool known as the genus curve as well as the probability distribution function moments of the image distribution. We find that higher values for the moments correspond to cases of |∇P| with larger sonic Mach numbers. The genus analysis of the supersonic simulations of |∇P| reveals a 'swiss cheese' topology, while the subsonic cases have characteristics of a 'clump' topology. Based on the analysis of the genus and the higher order moments, the SGPS test region data have a distribution and morphology that match subsonic- to transonic-type turbulence, which confirms what is now
Linear, Step by Step Managerial Performance, versus Exponential Performance
Directory of Open Access Journals (Sweden)
George MOLDOVEANU
2011-04-01
Full Text Available The paper proposes the transition from the potential management concept, which its authors approached by determining its dimension (Roşca, Moldoveanu, 2009b, to the linear, step by step performance concept, as an objective result of the management process. In this way, we “answer” the theorists and practitioners who support exponential management performance. The authors' skepticism of exponential performance is grounded in the current crisis (Roşca, Moldoveanu, 2009a, in the lack of organizational excellence in many companies, particularly Romanian ones, and in evolved companies reaching “the finality” after developing at an uncontrollable speed.
Directional filtering for linear feature enhancement in geophysical maps
Sykes, M.P.; Das, U.C.
2000-01-01
Geophysical maps of data acquired in ground and airborne surveys are extensively used for mineral, groundwater, and petroleum exploration. Lineaments in these maps are often indicative of contacts, basement faulting, and other tectonic features of interest. To aid the interpretation of these maps, a
High performance linear algebra algorithms: An introduction
DEFF Research Database (Denmark)
Gustavson, F.G.; Wasniewski, Jerzy
2006-01-01
This Mini-Symposium consisted of two back-to-back sessions, each consisting of five presentations, held on the afternoon of Monday, June 21, 2004. A major theme of both sessions was novel data structures for the matrices of dense linear algebra, DLA. Talks one to four of session one all centered...
Performance of the SLAC Linear Collider klystrons
International Nuclear Information System (INIS)
Allen, M.A.; Fowkes, W.R.; Koontz, R.F.; Schwarz, H.D.; Seeman, J.T.; Vlieks, A.E.
1987-01-01
There are now 200 new, high power 5045 klystrons installed on the two-mile Stanford Linear Accelerator. Peak power per klystron averages over 63 MW. Average energy contribution is above 240 MeV per station. Electron beam energy has been measured as high as 53 GeV. Energy instability due to klystron malfunction is less than 0.2%. The installed klystrons have logged over one million operating hours, with close to 20,000 klystron hours of cumulative operating time between failures. Data are being accumulated on klystron operation and failure modes, with failure signatures starting to become apparent. To date, no wholesale failure modes have surfaced that would impair the SLAC Linear Collider (SLC) program.
Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm
Foroutan, M.; Zimbelman, J. R.
2017-09-01
Increased application of high-resolution spatial data, such as high-resolution satellite or Unmanned Aerial Vehicle (UAV) images from Earth as well as High Resolution Imaging Science Experiment (HiRISE) images from Mars, makes it necessary to develop automated techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation by repeated images in environmental management studies such as climate-related changes, as well as increasing access to high-resolution satellite images, underlines the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self-Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints on satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high-resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis, along with improving the accuracy of results. About 98% overall accuracy and a 0.001 quantization error in the recognition of small linear-trending bedforms demonstrate a promising framework.
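The competitive-learning mechanism behind SOM can be sketched in a few lines. The grid size, learning schedule, and toy data below are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5):
    """Minimal Self-Organizing Map: competitive learning with a Gaussian
    neighborhood whose radius and learning rate shrink over time."""
    n_units = grid[0] * grid[1]
    # Unit coordinates on the 2D grid, used for neighborhood distances.
    coords = np.array([(i, j) for i in range(grid[0])
                       for j in range(grid[1])], dtype=float)
    w = rng.normal(size=(n_units, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)
        sigma = sigma0 * (1.0 - t / epochs) + 1e-3
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best-matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2.0 * sigma**2))              # neighborhood weight
            w += lr * h[:, None] * (x - w)
    return w

# Two well-separated clusters; trained units should migrate toward them.
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(3.0, 0.1, (50, 2))])
weights = train_som(data)
```

The mean distance from each sample to its best-matching unit is the quantization error reported in the abstract.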
Grassmannians and Gauss maps in piecewise-linear topology
Levitt, Norman
1989-01-01
The book explores the possibility of extending the notions of "Grassmannian" and "Gauss map" to the PL category. They are distinguished from "classifying space" and "classifying map", which are essentially homotopy-theoretic notions. The analogs of the Grassmannian and Gauss map defined here incorporate geometric and combinatorial information. Principal applications involve characteristic class theory, smoothing theory, and the existence of immersions satisfying certain geometric criteria, e.g. curvature conditions. The book assumes knowledge of basic differential topology and bundle theory, including Hirsch-Gromov-Phillips theory, as well as the analogous theories for the PL category. The work should be of interest to mathematicians concerned with geometric topology, PL and PD aspects of differential geometry and the geometry of polyhedra.
High-resolution mapping of linear antibody epitopes using ultrahigh-density peptide microarrays
DEFF Research Database (Denmark)
Buus, Søren; Rockberg, Johan; Forsström, Björn
2012-01-01
Antibodies empower numerous important scientific, clinical, diagnostic, and industrial applications. Ideally, the epitope(s) targeted by an antibody should be identified and characterized, thereby establishing antibody reactivity, highlighting possible cross-reactivities, and perhaps even warning against unwanted (e.g. autoimmune) reactivities. Antibodies target proteins as either conformational or linear epitopes. The latter are typically probed with peptides, but the cost of peptide screening programs tends to prohibit comprehensive specificity analysis. To perform high-throughput, high-resolution mapping of linear antibody epitopes, we have used ultrahigh-density peptide microarrays generating several hundred thousand different peptides per array. Using exhaustive length and substitution analysis, we have successfully examined the specificity of a panel of polyclonal antibodies raised against...
Fast non-linear extraction of plasma equilibrium parameters using a neural network mapping
International Nuclear Information System (INIS)
Lister, J.B.; Schnurrenberger, H.
1990-07-01
The shaping of non-circular plasmas requires a non-linear mapping between the measured diagnostic signals and selected equilibrium parameters. The particular configuration of Neural Network known as the multi-layer perceptron provides a powerful and general technique for formulating an arbitrary continuous non-linear multi-dimensional mapping. This technique has been successfully applied to the extraction of equilibrium parameters from measurements of single-null diverted plasmas in the DIII-D tokamak; the results are compared with a purely linear mapping. The method is promising, and hardware implementation is straightforward. (author) 15 refs., 7 figs
Fast non-linear extraction of plasma equilibrium parameters using a neural network mapping
International Nuclear Information System (INIS)
Lister, J.B.; Schnurrenberger, H.
1991-01-01
The shaping of non-circular plasmas requires a non-linear mapping between the measured diagnostic signals and selected equilibrium parameters. The particular configuration of neural network known as the multilayer perceptron provides a powerful and general technique for formulating an arbitrary continuous non-linear multi-dimensional mapping. This technique has been successfully applied to the extraction of equilibrium parameters from measurements of single-null diverted plasmas in the DIII-D tokamak; the results are compared with a purely linear mapping. The method is promising, and hardware implementation is straightforward. (author). 17 refs, 8 figs, 2 tab
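The comparison described in both records, a multi-layer perceptron versus a purely linear mapping for a non-linear signal-to-parameter relationship, can be sketched with scikit-learn on synthetic data. The "diagnostic signals" and target function below are invented for illustration, not DIII-D measurements:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Stand-in data: 8 "diagnostic signals" and one "equilibrium parameter"
# that depends on them non-linearly.
X = rng.uniform(-1.0, 1.0, size=(2000, 8))
y = np.sin(2.0 * X[:, 0]) + X[:, 1] * X[:, 2]

X_train, y_train = X[:1500], y[:1500]
X_test, y_test = X[1500:], y[1500:]

# Purely linear mapping vs. a multi-layer perceptron.
lin = LinearRegression().fit(X_train, y_train)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                   random_state=0).fit(X_train, y_train)

lin_err = np.mean((lin.predict(X_test) - y_test) ** 2)
mlp_err = np.mean((mlp.predict(X_test) - y_test) ** 2)
```

On a genuinely non-linear mapping, the test error of the linear fit has an irreducible floor that the perceptron can go below, which is the effect the abstract reports.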
Sparse PDF maps for non-linear multi-resolution image operations
Hadwiger, Markus; Sicat, Ronell Barrera; Beyer, Johanna; Krü ger, Jens J.; Mö ller, Torsten
2012-01-01
feasible for gigapixel images, while enabling direct evaluation of a variety of non-linear operators from the same representation. We illustrate this versatility for antialiased color mapping, O(n) local Laplacian filters, smoothed local histogram filters
Linear Einstein equations and Kerr-Schild maps
International Nuclear Information System (INIS)
Gergely, Laszlo A
2002-01-01
We prove that given a solution $g_{ab}$ of the Einstein equations for the matter field $T_{ab}$, an autoparallel null vector field $l^a$ and a solution $(l_a l_c, \mathcal{T}_{ac})$ of the linearized Einstein equation on the given background, the Kerr-Schild metric $g_{ac} + \lambda l_a l_c$ ($\lambda$ an arbitrary constant) is an exact solution of the Einstein equation for the energy-momentum tensor $T_{ac} + \lambda \mathcal{T}_{ac} + \lambda^2 l_{(a} \mathcal{T}_{c)b} l^b$. The mixed form of the Einstein equation for Kerr-Schild metrics with autoparallel null congruence is also linear. Some more technical conditions hold when the null congruence is not autoparallel. These results generalize previous theorems for vacuum due to Xanthopoulos and for flat seed spacetime due to Gürses and Gürsey
Effect of Linear and Non-linear Resistance Exercise on Anaerobic Performance among Young Women
Homa Esmaeili; Ali Reza Amani; Taher Afsharnezhad
2015-01-01
The main goals of strength training are improving muscle strength, power and muscle endurance. The objective of the current study is to compare two popular resistance exercise interventions, linear and non-linear, with respect to anaerobic power. Previous research has shown differences between linear and non-linear resistance exercise interventions in performance and strength in male athletes; however, there are insufficient data on female subjects. Eighteen young women participated in th...
Global Linear Representations of Nonlinear Systems and the Adjoint Map
Banks, S.P.
1988-01-01
In this paper we shall study the global linearization of nonlinear systems on a manifold by two methods. The first consists of an expansion of the vector field in the space of square integrable vector fields. In the second method we use the adjoint representation of the Lie algebra vector fields to obtain an infinite-dimensional matrix representation of the system. A connection between the two approaches will be developed.
Playa: High-Performance Programmable Linear Algebra
Directory of Open Access Journals (Sweden)
Victoria E. Howle
2012-01-01
Full Text Available This paper introduces Playa, a high-level user interface layer for composing algorithms for complex multiphysics problems out of objects from other Trilinos packages. Among other features, Playa provides very high-performance overloaded operators implemented through an expression template mechanism. In this paper, we give an overview of the central Playa objects from a user's perspective, show application to a sequence of increasingly complex solver algorithms, provide timing results for Playa's overloaded operators and other functions, and briefly survey some of the implementation issues involved.
High performance computing in linear control
International Nuclear Information System (INIS)
Datta, B.N.
1993-01-01
Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control Theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have been available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments
Contractive maps on normed linear spaces and their applications to nonlinear matrix equations.
Reurings, M.C.B.
2017-01-01
In this paper the author gives necessary and sufficient conditions under which a map is a contraction on a certain subset of a normed linear space. These conditions are already well known for maps on intervals in R. Using the conditions and Banach's fixed point theorem a fixed point theorem can be
Transitions from phase-locked dynamics to chaos in a piecewise-linear map
DEFF Research Database (Denmark)
Zhusubaliyev, Z.T.; Mosekilde, Erik; De, S.
2008-01-01
place via border-collision fold bifurcations. We examine the transition to chaos through torus destruction in such maps. Considering a piecewise-linear normal-form map we show that this transition, by virtue of the interplay of border-collision bifurcations with period-doubling and homoclinic...
Linearly Recurrent Circle Map Subshifts and an Application to Schrödinger Operators
Adamczewski, B
2001-01-01
We discuss circle map sequences and subshifts generated by them. We give a characterization of those sequences among them which are linearly recurrent. As an application we deduce zero-measure spectrum for a class of discrete one-dimensional Schrödinger operators with potentials generated by circle maps.
Monopole and dipole estimation for multi-frequency sky maps by linear regression
Wehus, I. K.; Fuskeland, U.; Eriksen, H. K.; Banday, A. J.; Dickinson, C.; Ghosh, T.; Górski, K. M.; Lawrence, C. R.; Leahy, J. P.; Maino, D.; Reich, P.; Reich, W.
2017-01-01
We describe a simple but efficient method for deriving a consistent set of monopole and dipole corrections for multi-frequency sky map data sets, allowing robust parametric component separation with the same data set. The computational core of this method is linear regression between pairs of frequency maps, often called T-T plots. Individual contributions from monopole and dipole terms are determined by performing the regression locally in patches on the sky, while the degeneracy between different frequencies is lifted whenever the dominant foreground component exhibits a significant spatial spectral index variation. Based on this method, we present two different, but each internally consistent, sets of monopole and dipole coefficients for the nine-year WMAP, Planck 2013, SFD 100 μm, Haslam 408 MHz and Reich & Reich 1420 MHz maps. The two sets have been derived with different analysis assumptions and data selection, and provide an estimate of residual systematic uncertainties. In general, our values are in good agreement with previously published results. Among the most notable results are a relative dipole between the WMAP and Planck experiments of 10-15μK (depending on frequency), an estimate of the 408 MHz map monopole of 8.9 ± 1.3 K, and a non-zero dipole in the 1420 MHz map of 0.15 ± 0.03 K pointing towards Galactic coordinates (l,b) = (308°,-36°) ± 14°. These values represent the sum of any instrumental and data processing offsets, as well as any Galactic or extra-Galactic component that is spectrally uniform over the full sky.
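The computational core named above, linear regression between pairs of frequency maps (a T-T plot), reduces to a slope-and-intercept fit per sky patch. A toy version with a shared foreground template and known injected offsets (the scalings 1.0/0.6 and offsets 2.5/-1.0 are hypothetical, not WMAP/Planck values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic sky: one foreground template seen at two frequencies with
# different scalings, plus per-map monopole offsets.
sky = rng.lognormal(size=5000)                       # foreground per pixel
m1 = 1.0 * sky + 2.5 + rng.normal(0, 0.01, 5000)     # injected offset +2.5
m2 = 0.6 * sky - 1.0 + rng.normal(0, 0.01, 5000)     # injected offset -1.0

# T-T plot: regress m2 against m1 pixel by pixel.
slope, intercept = np.polyfit(m1, m2, 1)

# If m1's monopole is anchored elsewhere, m2's offset follows:
offset_m2 = intercept + slope * 2.5
```

The slope recovers the ratio of foreground scalings, and the intercept lifts the monopole degeneracy once one map's offset is known, which is how the pairwise fits are chained into a consistent set of corrections.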
Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time
Dhar, Amrit
2017-01-01
Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences. PMID:28177780
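The summaries in question are moments of event counts along branches. For the simplest case, a symmetric 2-state model on a single branch, substitutions arrive as a Poisson process, so simulation-based estimates can be checked against closed-form moments. This is a toy illustration of the simulation/analytic comparison, not the paper's phylogeny-wide algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

def substitutions_on_branch(rate, t):
    """Count jump events of a symmetric 2-state CTMC on a branch of
    length t. Every jump changes state, so counts are Poisson(rate*t)."""
    n, elapsed = 0, rng.exponential(1.0 / rate)
    while elapsed < t:
        n += 1
        elapsed += rng.exponential(1.0 / rate)
    return n

rate, t = 2.0, 1.5
counts = [substitutions_on_branch(rate, t) for _ in range(20000)]

# Simulation-free ("analytic") moments for comparison:
mean_analytic = rate * t   # 3.0
var_analytic = rate * t    # Poisson: variance equals mean
mean_sim, var_sim = np.mean(counts), np.var(counts)
```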
Field size dependent mapping of medical linear accelerator radiation leakage
International Nuclear Information System (INIS)
Vu Bezin, Jérémi; De Vathaire, Florent; Diallo, Ibrahima; Veres, Attila; Lefkopoulos, Dimitri; Chavaudra, Jean; Deutsch, Eric
2015-01-01
The purpose of this study was to investigate the suitability of a graphics library based model for the assessment of linear accelerator radiation leakage. Transmission through the shielding elements was evaluated using the build-up factor corrected exponential attenuation law and the contribution from the electron guide was estimated using the approximation of a linear isotropic radioactive source. Model parameters were estimated by fitting a series of thermoluminescent dosimeter leakage measurements, achieved up to 100 cm from the beam central axis along three directions. The distribution of leakage data at the patient plane reflected the architecture of the shielding elements. Thus, the maximum leakage dose was found under the collimator when only one jaw shielded the primary beam and was about 0.08% of the dose at isocentre. Overall, we observe that the main contributor to leakage dose according to our model was the electron beam guide. Concerning the discrepancies between the measurements used to calibrate the model and the calculations from the model, the average difference was about 7%. Finally, graphics library modelling is a readily available and suitable way to estimate leakage dose distribution on a personal computer. Such data could be useful for dosimetric evaluations in late effect studies. (paper)
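The transmission model named above, exponential attenuation corrected by a build-up factor, can be sketched as follows. The linear build-up form and all parameter values here are assumptions for illustration, not the paper's fitted parameters:

```python
import numpy as np

def transmitted_dose(d0, mu, x, a=0.6):
    """Build-up corrected exponential attenuation through a shield:
    D(x) = D0 * B(x) * exp(-mu * x), with a simple linear build-up
    factor B(x) = 1 + a * mu * x (form and numbers are assumptions)."""
    return d0 * (1.0 + a * mu * x) * np.exp(-mu * x)

# Relative dose behind 0, 5 and 10 cm of shielding (mu in 1/cm):
doses = [transmitted_dose(1.0, 0.5, x) for x in (0.0, 5.0, 10.0)]
```

In the paper's setting the free parameters of such a law are what the thermoluminescent dosimeter measurements calibrate.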
From the SLAC linear collider to the next linear collider: A status report and road map
International Nuclear Information System (INIS)
Richter, B.
1992-02-01
In this presentation, I will review what we have learned about linear colliders, the problems that have been uncovered, and the technology-development program aimed at realizing the next high energy machine. I will then close with a few comments on how to get on with the job of building it
The structure of mode-locking regions of piecewise-linear continuous maps: II. Skew sawtooth maps
Simpson, D. J. W.
2018-05-01
In two-parameter bifurcation diagrams of piecewise-linear continuous maps on $\mathbb{R}^N$, mode-locking regions typically have points of zero width known as shrinking points. Near any shrinking point, but outside the associated mode-locking region, a significant proportion of parameter space can be usefully partitioned into a two-dimensional array of annular sectors. The purpose of this paper is to show that in these sectors the dynamics is well-approximated by a three-parameter family of skew sawtooth circle maps, where the relationship between the skew sawtooth maps and the N-dimensional map is fixed within each sector. The skew sawtooth maps are continuous, degree-one, and piecewise-linear, with two different slopes. They approximate the stable dynamics of the N-dimensional map with an error that goes to zero with the distance from the shrinking point. The results explain the complicated radial pattern of periodic, quasi-periodic, and chaotic dynamics that occurs near shrinking points.
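A concrete map of the type described, continuous, degree-one, piecewise-linear with two slopes, together with its numerically estimated rotation number (parameter values chosen arbitrarily for illustration, not taken from the paper):

```python
import numpy as np

OMEGA, H, C = 0.25, 0.15, 0.3   # illustrative parameters

def lift(x):
    """Lift of a skew sawtooth-type circle map: continuous, degree-one,
    piecewise-linear with two slopes (1 + H/C and 1 - H/(1 - C))."""
    frac = x % 1.0
    p = frac * H / C if frac < C else (1.0 - frac) * H / (1.0 - C)
    return x + OMEGA + p

def rotation_number(n=50000, x0=0.0):
    """Estimate rho = lim (F^n(x) - x) / n for the lift F."""
    x = x0
    for _ in range(n):
        x = lift(x)
    return (x - x0) / n

rho = rotation_number()
```

Sweeping OMEGA and the slope parameters of such a family traces out the mode-locking structure (rational rotation numbers on intervals) that the paper analyzes near shrinking points.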
A characterization of positive linear maps and criteria of entanglement for quantum states
International Nuclear Information System (INIS)
Hou Jinchuan
2010-01-01
Let H and K be (finite- or infinite-dimensional) complex Hilbert spaces. A characterization of positive completely bounded normal linear maps from B(H) into B(K) is given, which particularly gives a characterization of positive elementary operators including all positive linear maps between matrix algebras. This characterization is then applied to give a representation of quantum channels (operations) between infinite-dimensional systems. A necessary and sufficient criterion of separability is given which shows that a state ρ on H⊗K is separable if and only if (Φ⊗I)ρ ≥ 0 for all positive finite-rank elementary operators Φ. Examples of NCP and indecomposable positive linear maps are given and are used to recognize some entangled states that cannot be recognized by the PPT criterion and the realignment criterion.
A characterization of positive linear maps and criteria of entanglement for quantum states
Hou, Jinchuan
2010-09-01
Let H and K be (finite- or infinite-dimensional) complex Hilbert spaces. A characterization of positive completely bounded normal linear maps from B(H) into B(K) is given, which particularly gives a characterization of positive elementary operators including all positive linear maps between matrix algebras. This characterization is then applied to give a representation of quantum channels (operations) between infinite-dimensional systems. A necessary and sufficient criterion of separability is given which shows that a state ρ on H⊗K is separable if and only if (Φ⊗I)ρ ≥ 0 for all positive finite-rank elementary operators Φ. Examples of NCP and indecomposable positive linear maps are given and are used to recognize some entangled states that cannot be recognized by the PPT criterion and the realignment criterion.
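The PPT criterion mentioned in both records can be demonstrated numerically for the simplest entangled state: applying the transpose (a positive but not completely positive map) to one subsystem of a Bell state produces a negative eigenvalue:

```python
import numpy as np

# Two-qubit Bell state |phi+> = (|00> + |11>)/sqrt(2); entangled, so the
# PPT criterion must flag it via a negative partial-transpose eigenvalue.
phi = np.zeros(4)
phi[0] = phi[3] = 1.0 / np.sqrt(2.0)
rho = np.outer(phi, phi)

def partial_transpose(rho, d=2):
    """Transpose the second subsystem of a (d*d) x (d*d) density matrix."""
    r = rho.reshape(d, d, d, d)          # r[i, j, k, l] = <ij|rho|kl>
    return r.transpose(0, 3, 2, 1).reshape(d * d, d * d)

eigs = np.linalg.eigvalsh(partial_transpose(rho))
```

The abstract's point is that some entangled states pass this test (PPT entangled states), which is why the finer criterion over all positive finite-rank elementary operators Φ is needed.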
Sparse PDF maps for non-linear multi-resolution image operations
Hadwiger, Markus
2012-11-01
We introduce a new type of multi-resolution image pyramid for high-resolution images called sparse pdf maps (sPDF-maps). Each pyramid level consists of a sparse encoding of continuous probability density functions (pdfs) of pixel neighborhoods in the original image. The encoded pdfs enable the accurate computation of non-linear image operations directly in any pyramid level with proper pre-filtering for anti-aliasing, without accessing higher or lower resolutions. The sparsity of sPDF-maps makes them feasible for gigapixel images, while enabling direct evaluation of a variety of non-linear operators from the same representation. We illustrate this versatility for antialiased color mapping, O(n) local Laplacian filters, smoothed local histogram filters (e.g., median or mode filters), and bilateral filters. © 2012 ACM.
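The key idea, evaluating a non-linear operator such as the median directly from an encoded pixel-neighborhood pdf rather than from the pixels themselves, can be sketched with a dense histogram standing in for the paper's sparse encoding:

```python
import numpy as np

rng = np.random.default_rng(5)
tile = rng.integers(0, 256, size=(32, 32))   # one pixel neighborhood

# Dense normalized histogram as a stand-in for one encoded pdf in an
# sPDF-map level (the paper uses a sparse encoding instead).
pdf, _ = np.histogram(tile, bins=256, range=(0, 256))
pdf = pdf / pdf.sum()

# A non-linear operator (the median) evaluated directly from the pdf,
# without touching the original pixels:
cdf = np.cumsum(pdf)
median_from_pdf = int(np.searchsorted(cdf, 0.5))

median_direct = int(np.median(tile))
```

Because the pdf is stored per pyramid level, such operators can be evaluated at any resolution without descending to the full-resolution pixels, which is what makes the approach viable for gigapixel images.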
Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.
de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo
2018-03-01
Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables and to determine the accuracy of the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances.
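The accuracy measures used in this comparison, mean absolute percentage error and mean absolute error, are simple to compute. The start times below are hypothetical numbers for illustration, not the study's data:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def mae(actual, predicted):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(predicted))))

# Hypothetical 5 m backstroke start times (s), for illustration only.
t_true = [1.60, 1.72, 1.55, 1.68]
t_pred = [1.62, 1.70, 1.57, 1.66]
```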
On Performance of Linear Multiuser Detectors for Wireless Multimedia Applications
Agarwal, Rekha; Reddy, B. V. R.; Bindu, E.; Nayak, Pinki
In this paper, the performance of different multi-rate schemes in a DS-CDMA system is evaluated. Multirate linear multiuser detectors with multiple processing gains are analyzed for synchronous Code Division Multiple Access (CDMA) systems. Variable data rate is achieved by varying the processing gain. Our conclusion is that the bit error rate for multirate and single-rate systems can be made the same, with a tradeoff in the number of users in linear multiuser detectors.
The Effect of Using Concept Maps in Elementary Linear Algebra Course on Students’ Learning
Syarifuddin, H.
2018-04-01
This paper presents the results of a classroom action research study conducted in the Elementary Linear Algebra course at Universitas Negeri Padang. The focus of the research was the effect of using concept maps in the course on students' learning. Data in this study were collected through classroom observation, students' reflective journals, and the concept maps created by the students. The results showed that using concept maps in the Elementary Linear Algebra course had a positive effect on students' learning.
On Some Isomorphisms between Bounded Linear Maps and Non-Commutative Lp-Spaces
Directory of Open Access Journals (Sweden)
E. J. Atto
2014-04-01
Full Text Available We define a particular space of bounded linear maps using a von Neumann algebra and some operator spaces. From this we prove some isomorphisms and, using interpolation in some particular cases, obtain analogues of non-commutative Lp-spaces.
CRESST Human Performance Knowledge Mapping System
National Research Council Canada - National Science Library
Chung, Gregory K; Michiuye, Joanne K; Brill, David G; Sinha, Ravi; Saadat, Farzad; de Vries, Linda F; Delacruz, Girlie C; Bewley, William L; Baker, Eva L
2002-01-01
While several tools are available for constructing knowledge maps, CRESST's knowledge mapping tool is one of the only systems designed specifically for assessment purposes, the only system...
CRESST Human Performance Knowledge Mapping System
National Research Council Canada - National Science Library
Chung, Gregory K; Michiuye, Joanne K; Brill, David G; Sinha, Ravi; Saadat, Farzad; de Vries, Linda F; Delacruz, Girlie C; Bewley, William L; Baker, Eva L
2002-01-01
This report presents a review of knowledge mapping scoring methods and current online mapping systems, and the overall design, functionality, scoring, usability testing, and authoring capabilities of the CRESST system...
Modelling and measurement of a moving magnet linear compressor performance
International Nuclear Information System (INIS)
Liang, Kun; Stone, Richard; Davies, Gareth; Dadd, Mike; Bailey, Paul
2014-01-01
A novel moving magnet linear compressor with clearance seals and flexure bearings has been designed and constructed. It is suitable for a refrigeration system with a compact heat exchanger, such as would be needed for CPU cooling. The performance of the compressor has been experimentally evaluated with nitrogen and a mathematical model has been developed to evaluate the performance of the linear compressor. The results from the compressor model and the measurements have been compared in terms of cylinder pressure, the ‘P–V’ loop, stroke, mass flow rate and shaft power. The cylinder pressure was not measured directly but was derived from the compressor dynamics and the motor magnetic force characteristics. The comparisons indicate that the compressor model is well validated and can be used to study the performance of this type of compressor, to help with design optimization and the identification of key parameters affecting the system transients. The electrical and thermodynamic losses were also investigated, particularly for the design point (stroke of 13 mm and pressure ratio of 3.0), since a full understanding of these can lead to an increase in compressor efficiency. - Highlights: • Model predictions of the performance of a novel moving magnet linear compressor. • Prototype linear compressor performance measurements using nitrogen. • Reconstruction of P–V loops using a model of the dynamics and electromagnetics. • Close agreement between the model and measurements for the P–V loops. • The design point motor efficiency was 74%, with potential improvements identified
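The 'P–V' loop comparison above rests on the fact that the indicated work per cycle is the area enclosed by the loop in the pressure-volume plane. A minimal sketch on an idealized elliptical loop (the numbers are illustrative, not the measured compressor data):

```python
import numpy as np

# Idealized closed P-V loop (an ellipse). The indicated work per cycle
# is the loop area, evaluated here with the shoelace formula.
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
V = 10e-6 + 3e-6 * np.cos(theta)   # cylinder volume, m^3
P = 2e5 + 1e5 * np.sin(theta)      # cylinder pressure, Pa

# Shoelace area of the polygon traced in the (V, P) plane:
work = 0.5 * abs(np.dot(V, np.roll(P, -1)) - np.dot(P, np.roll(V, -1)))
```

For this ellipse the area is pi times the product of the semi-axes, so the numerical loop area can be checked against pi * 3e-6 * 1e5 J.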
Performances Of Estimators Of Linear Models With Autocorrelated ...
African Journals Online (AJOL)
The performances of five estimators of linear models with Autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite is quite similar to the properties of the estimators when the sample size is infinite although ...
Performances of estimators of linear auto-correlated error model ...
African Journals Online (AJOL)
The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) compares favourably with the Generalized least Squares (GLS) estimators in ...
Directory of Open Access Journals (Sweden)
Maria Joita
2007-12-01
Full Text Available In this paper we characterize the order relation on the set of all nondegenerate completely n-positive linear maps between C*-algebras in terms of a self-dual Hilbert module induced by each completely n-positive linear map.
Design and performance of the Stanford Linear Collider Control System
International Nuclear Information System (INIS)
Melen, R.E.
1984-10-01
The success of the Stanford Linear Collider (SLC) will be dependent upon the implementation of a very large advanced computer-based instrumentation and control system. This paper describes the architectural design of this system as well as a critique of its performance. This critique is based on experience obtained from its use in the control and monitoring of 1/3 of the SLAC linac and in support of an extensive machine physics experimental program. 11 references, 3 figures
Mapping of linear antibody epitopes of the glycoprotein of VHSV, a salmonid rhabdovirus
DEFF Research Database (Denmark)
Fernandez-Alonso, M.; Lorenzo, G.; Perez, L.
1998-01-01
Linear epitopes of the glycoprotein G (gpG) of the viral haemorrhagic septicaemia virus (VHSV), a rhabdovirus of salmonids, were mapped by pepscan using overlapping 15-mer peptides covering the entire gpG sequence and ELISA with polyclonal and monoclonal murine and polyclonal trout antibodies. Among the regions recognized in the pepscan by the polyclonal antibodies (PAbs) were the previously identified phosphatidylserine-binding heptad repeats (Estepa & Coll 1996; Virology 216:60-70) and leucocyte-stimulating peptides (Lorenzo et al. 1995; Virology 212:348-355). Among 17 monoclonal antibodies (MAbs), only 2 non-neutralizing MAbs, I10 (aa 139-153) and IP1H3 (aa 399-413), could be mapped to specific peptides in the pepscan of the gpG. Mapping of these MAbs was confirmed by immunoblotting with recombinant proteins and/or other synthetic peptides covering those sequences. None...
International Nuclear Information System (INIS)
Yu Shao-De; Wu Shi-Bin; Xie Yao-Qin; Wang Hao-Yu; Wei Xin-Hua; Chen Xin; Pan Wan-Long; Hu Jiani
2015-01-01
Similarity coefficient mapping (SCM) aims to improve the morphological evaluation of weighted magnetic resonance imaging. However, how to interpret the generated SCM map remains an open question. Moreover, is it possible to extract tissue dissimilarity messages based on the theory behind SCM? The primary purpose of this paper is to address these two questions. First, the theory of SCM was interpreted from the perspective of linear fitting. Then, a term was embedded for tissue dissimilarity information. Finally, our method was validated with sixteen human brain image series from multi-echo imaging. Generated maps were investigated in terms of signal-to-noise ratio (SNR) and perceived visual quality, and then interpreted from intra- and inter-tissue intensity. Experimental results show that both the perceptibility of anatomical structures and tissue contrast are improved. More importantly, tissue similarity or dissimilarity can be quantified and cross-validated from pixel intensity analysis. This method benefits image enhancement, tissue classification, malformation detection and morphological evaluation. (paper)
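The linear-fitting view of SCM can be illustrated with a toy computation. This is a hedged sketch, not the authors' implementation: here the similarity coefficient is taken to be the correlation between each pixel's multi-echo intensity curve and a reference tissue curve, so similar tissue scores near +1 and dissimilar tissue near -1.

```python
import numpy as np

def similarity_map(series, ref):
    # series: (n_echo, H, W) multi-echo intensities; ref: (n_echo,) reference curve.
    n, h, w = series.shape
    X = series.reshape(n, -1)
    Xc = X - X.mean(axis=0)           # centre each pixel's echo curve
    r0 = ref - ref.mean()
    num = (Xc * r0[:, None]).sum(axis=0)
    den = np.sqrt((Xc ** 2).sum(axis=0) * (r0 ** 2).sum()) + 1e-12
    return (num / den).reshape(h, w)  # +1: similar tissue, -1: dissimilar tissue

ref = np.exp(-np.arange(8) / 3.0)                  # decaying reference echo curve
img = np.broadcast_to(ref[:, None, None], (8, 4, 4)).copy()
img[:, 2:, :] = 1.0 - ref[:, None, None]           # "dissimilar" tissue in lower half
scm = similarity_map(img, ref)
```

On this synthetic image the upper half maps to ~+1 and the lower half to ~-1, which is the kind of tissue-dissimilarity contrast the abstract describes.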
How to Use Linear Programming for Information System Performances Optimization
Directory of Open Access Journals (Sweden)
Hell Marko
2014-09-01
Full Text Available Background: Organisations nowadays operate in a very dynamic environment, and therefore, their ability to continuously adjust the strategic plan to the new conditions is a must for achieving their strategic objectives. BSC is a well-known methodology for measuring performances, enabling organizations to learn how well they are doing. In this paper, “BSC for IS” will be proposed in order to measure the IS impact on the achievement of organizations' business goals. Objectives: The objective of this paper is to present an original procedure used to enhance the BSC methodology in planning the optimal target values of IS performance in order to maximize the organization's effectiveness. Methods/Approach: The method used in this paper is a quantitative one: linear programming. In the case study, linear programming is used for optimizing the organization's strategic performance. Results: Results are shown on a case study of a national park. An optimal performance value has been calculated for the strategic objective, as well as for each DO (derived objective). Results are calculated in Excel, using the Solver Add-in. Conclusions: The presentation of the methodology through the case study of a national park shows that this methodology, though it requires a high level of formalisation, provides a very transparent performance calculation.
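As a minimal illustration of the kind of linear program involved (the objective and constraints below are invented for illustration, not the national-park case), a two-variable LP can be solved by enumerating feasible vertices, since an LP optimum always lies at a vertex of the feasible region:

```python
# Hypothetical two-variable LP: maximize 3*x1 + 2*x2 (contribution of two
# derived objectives to the strategic objective) subject to a shared budget
# and per-objective caps:  x1 + x2 <= 10,  0 <= x1 <= 6,  0 <= x2 <= 8.
from itertools import combinations

cons = [(1, 1, 10), (1, 0, 6), (0, 1, 8), (-1, 0, 0), (0, -1, 0)]  # a*x1 + b*x2 <= c

def feasible(p, eps=1e-9):
    return all(a * p[0] + b * p[1] <= c + eps for a, b, c in cons)

verts = []
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) > 1e-12:               # intersect each pair of constraint lines
        p = ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
        if feasible(p):
            verts.append(p)

best = max(verts, key=lambda p: 3 * p[0] + 2 * p[1])   # optimal vertex: (6, 4)
```

A production setting would hand the same model to a solver (Excel's Solver, as in the paper, or any LP library); the vertex enumeration simply makes the geometry visible.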
Performance analysis of different database in new internet mapping system
Yao, Xing; Su, Wei; Gao, Shuai
2017-03-01
In the Mapping System of the New Internet, massive numbers of mapping entries between AID and RID need to be stored, added, updated, and deleted. In order to better handle a large number of mapping-entry update and query requests, the Mapping System of the New Internet must use a high-performance database. In this paper, we focus on the performance of three typical databases (Redis, SQLite, and MySQL), and the results show that Mapping Systems based on different databases can be matched to different needs according to the actual situation.
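A micro-benchmark in this spirit can be sketched with Python's built-in sqlite3 module. This is a toy setup; the table layout, entry count, and key format are assumptions for illustration, not taken from the paper:

```python
import sqlite3
import time

# Hypothetical AID -> RID mapping table in an in-memory SQLite database;
# time a bulk insert and a batch of point lookups.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mapping (aid TEXT PRIMARY KEY, rid TEXT)")

entries = [(f"aid-{i}", f"rid-{i % 100}") for i in range(10_000)]
t0 = time.perf_counter()
with con:                                       # one transaction for the bulk insert
    con.executemany("INSERT INTO mapping VALUES (?, ?)", entries)
insert_s = time.perf_counter() - t0

t0 = time.perf_counter()
hits = sum(
    con.execute("SELECT rid FROM mapping WHERE aid = ?", (f"aid-{i}",)).fetchone() is not None
    for i in range(0, 10_000, 100)              # 100 point lookups
)
lookup_s = time.perf_counter() - t0
```

Repeating the same workload against Redis or MySQL clients would reproduce the paper's comparison shape, with each backend trading durability, memory use, and throughput differently.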
IMPROVING THE PERFORMANCE OF THE LINEAR SYSTEMS SOLVERS USING CUDA
Directory of Open Access Journals (Sweden)
BOGDAN OANCEA
2012-05-01
Full Text Available Parallel computing can offer an enormous advantage regarding performance for very large applications in almost any field: scientific computing, computer vision, databases, data mining, and economics. GPUs are high-performance many-core processors that can achieve very high FLOP rates. Since the first idea of using GPUs for general-purpose computing, things have evolved, and now there are several approaches to GPU programming: CUDA from NVIDIA and Stream from AMD. CUDA is now a popular programming model for general-purpose computations on GPUs for C/C++ programmers. A great number of applications have been ported to the CUDA programming model, and they obtain speedups of orders of magnitude compared to optimized CPU implementations. In this paper we present an implementation of a library for solving linear systems using the CUDA framework. We present the results of performance tests and show that using a GPU one can obtain speedups of approximately 80 times compared with a CPU implementation.
Automated, non-linear registration between 3-dimensional brain map and medical head image
International Nuclear Information System (INIS)
Mizuta, Shinobu; Urayama, Shin-ichi; Zoroofi, R.A.; Uyama, Chikao
1998-01-01
In this paper, we propose an automated, non-linear registration method between a 3-dimensional medical head image and a brain map in order to efficiently extract the regions of interest. In our method, the input 3-dimensional image is registered to a reference image extracted from a brain map. The problems to be solved are an automated, non-linear image matching procedure, and a cost function which represents the similarity between two images. Non-linear matching is carried out by dividing the input image into connected partial regions, transforming the partial regions while preserving connectivity among adjacent regions, evaluating the image similarity between the transformed regions of the input image and the corresponding regions of the reference image, and iteratively searching for the optimal transformation of the partial regions. In order to measure the voxelwise similarity of multi-modal images, a cost function based on mutual information is introduced. Experiments using MR images demonstrated the effectiveness of the proposed method. (author)
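The mutual-information cost used for multi-modal matching can be sketched from a joint histogram. This is the generic formulation; the bin count and test images below are arbitrary choices, not the authors':

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # MI estimated from the joint intensity histogram of two images:
    # MI = sum p(x,y) * log( p(x,y) / (p(x) * p(y)) ), in nats.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)                   # identical images: high MI
mi_rand = mutual_information(img, rng.random((64, 64)))  # unrelated images: near zero
```

Registration maximizes this quantity over candidate transformations; unlike a squared-difference cost, it stays meaningful when the two modalities map the same tissue to different intensities.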
Performing Mimetic Mapping: A Non-Visualisable Map of the Suzhou River Area of Shanghai
Directory of Open Access Journals (Sweden)
Anastasia Karandinou
2014-07-01
Full Text Available This paper questions issues concerning the mapping of experience, through the concept of mimesis: the creative re-performance of the site experience onto the map. The place mapped is the Suzhou River area, a significant part of Shanghai, the former boundary between the British and American Settlements, and an ever-changing and transforming territory. Through a detailed description of the mapping processes, we analyse the position of this particular map within contemporary discourse about mapping. Here, we question the purpose of the process, the desired outcome, the consciousness of the significance of each step/event, and the possible significance of the final traces that the mapping leaves behind. Although after the mapping had been carried out the procedure was analysed, post-rationalised, and justified through its partial documentation (as part of an educational process), this paper questions the way and the reason for these practices (the post-rationalising of the mapping activity, justifying the strategy, etc.), and their possible meaning, purpose, demand or context. Thus we conclude that the subject matter is not the final outcome of an object or ‘map’; there is no final map to be exhibited. What this paper brings forth is the mapping as an event, an action performed by the embodied experience of the actual place and by the trans-local materiality of the tools and elements involved in the process of its making.
International Nuclear Information System (INIS)
Romeijn, H Edwin; Ahuja, Ravindra K; Dempsey, James F; Kumar, Arvind; Li, Jonathan G
2003-01-01
We present a novel linear programming (LP) based approach for efficiently solving the intensity modulated radiation therapy (IMRT) fluence-map optimization (FMO) problem to global optimality. Our model overcomes the apparent limitations of a linear-programming approach by approximating any convex objective function by a piecewise linear convex function. This approach allows us to retain the flexibility offered by general convex objective functions, while allowing us to formulate the FMO problem as an LP problem. In addition, a novel type of partial-volume constraint that bounds the tail averages of the differential dose-volume histograms of structures is imposed while retaining linearity as an alternative approach to improve dose homogeneity in the target volumes, and to attempt to spare as many critical structures as possible. The goal of this work is to develop a very rapid global optimization approach that finds high quality dose distributions. Implementation of this model has demonstrated excellent results. We found globally optimal solutions for eight 7-beam head-and-neck cases in less than 3 min of computational time on a single processor personal computer without the use of partial-volume constraints. Adding such constraints increased the running times by a factor of 2-3, but improved the sparing of critical structures. All cases demonstrated excellent target coverage (>95%), target homogeneity (<10% overdosing and <7% underdosing) and organ sparing using at least one of the two models.
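The piecewise-linear trick that keeps the FMO problem linear can be shown on a one-dimensional toy. This is a sketch with an invented objective f(x) = x**2; the clinical model is far richer, but the mechanism is the same: a convex function is replaced by the pointwise maximum of a few supporting (tangent) lines, which an LP can represent with an epigraph variable and linear inequalities.

```python
# Approximate the convex objective f(x) = x**2 on [-2, 2] by the maximum of
# tangent lines at a few knots. Tangent at k: f(k) + f'(k)*(x - k) = 2*k*x - k**2.
knots = [-2.0, -1.0, 0.0, 1.0, 2.0]
tangents = [(2 * k, -k * k) for k in knots]      # (slope, intercept) pairs

def f_pw(x):
    # In an LP this becomes: minimize t subject to t >= a*x + b for each tangent.
    return max(a * x + b for a, b in tangents)

# worst-case approximation error over a fine grid of [-2, 2]
max_err = max(abs(f_pw(x / 100) - (x / 100) ** 2) for x in range(-200, 201))
```

With knots spaced one unit apart the error peaks at 0.25 midway between knots; adding knots shrinks it quadratically, which is why a modest number of pieces suffices in practice.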
Crime clocks and target performance maps
CSIR Research Space (South Africa)
Cooper, Antony K
1999-12-01
Full Text Available ...the period of analysis. Each segment of a pie chart represents a selected part of the day (e.g. a two- or three-hour period) or a day of the week. The first and last segments in the day or week are then adjacent, ensuring that there is no artificial break... We have also used crime clocks to map the proportion of crimes that occur during normal police working hours (07:00 to 16:00, Monday to Friday, in the case of the Johannesburg Area) against those that occur outside these hours. 3. Target...
Linear maps preserving maximal deviation and the Jordan structure of quantum systems
International Nuclear Information System (INIS)
Hamhalter, Jan
2012-01-01
In the algebraic approach to quantum theory, a quantum observable is given by an element of a Jordan algebra and a state of the system is modelled by a normalized positive functional on the underlying algebra. Maximal deviation of a quantum observable is the largest statistical deviation one can obtain in a particular state of the system. The main result of the paper shows that each linear bijective transformation between JBW algebras preserving maximal deviations is formed by a Jordan isomorphism or a minus Jordan isomorphism perturbed by a linear functional multiple of an identity. It shows that only one numerical statistical characteristic has the power to determine the Jordan algebraic structure completely. As a consequence, we obtain that only very special maps can preserve the diameter of the spectra of elements. Nonlinear maps preserving the pseudometric given by maximal deviation are also described. The results generalize hitherto known theorems on preservers of maximal deviation in the case of self-adjoint parts of von Neumann algebras proved by Molnár.
Akaishi, A; Shudo, A
2009-12-01
We investigate the stickiness of a two-dimensional piecewise linear map with a family of marginal unstable periodic orbits (FMUPOs), and show that a series of unstable periodic orbits accumulating to the FMUPOs plays a significant role in giving rise to the power-law correlation of trajectories. We can explicitly specify the sticky zone in which unstable periodic orbits whose stability increases algebraically exist, and find that there exists a hierarchy in the accumulating periodic orbits. In particular, the periodic orbits with linearly increasing stability play the role of fundamental cycles, as in hyperbolic systems, which allows us to apply the method of cycle expansion. We also study the recurrence time distribution, in particular the position and size of the recurrence region. Following the definition adopted in one-dimensional maps, we show that the recurrence time distribution has an exponential part in the short-time regime and an asymptotic power-law part. The analysis of the crossover time T_c* between these two regimes implies T_c* ≈ -log[μ(R)], where μ(R) denotes the area of the recurrence region.
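The link between the measure of the recurrence region and recurrence times can be illustrated with a crude surrogate. This is not the paper's piecewise linear map: iid uniform draws stand in for a strongly mixing system, for which Kac's lemma predicts a mean recurrence time of 1/μ(R) and a purely exponential recurrence-time distribution (the power-law tail in the abstract is precisely the signature of stickiness that this surrogate lacks).

```python
import random

# Recurrence times into a region R of measure mu(R) = 0.1 for an iid surrogate;
# Kac's lemma: mean recurrence time = 1 / mu(R) = 10.
random.seed(42)
mu_R = 0.1
times, t_last = [], None
for t in range(200_000):
    if random.random() < mu_R:      # "trajectory" visits R at step t
        if t_last is not None:
            times.append(t - t_last)
        t_last = t
mean_rt = sum(times) / len(times)   # close to 10
```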
The CMS Magnetic Field Map Performance
Klyukhin, V.I.; Andreev, V.; Ball, A.; Cure, B.; Herve, A.; Gaddi, A.; Gerwig, H.; Karimaki, V.; Loveless, R.; Mulders, M.; Popescu, S.; Sarycheva, L.I.; Virdee, T.
2010-04-05
The Compact Muon Solenoid (CMS) is a general-purpose detector designed to run at the highest luminosity at the CERN Large Hadron Collider (LHC). Its distinctive features include a 4 T superconducting solenoid with a 6 m diameter by 12.5 m long free bore, enclosed inside a 10000-ton return yoke made of construction steel. Accurate characterization of the magnetic field everywhere in the CMS detector is required. During two major tests of the CMS magnet the magnetic flux density was measured inside the coil in a cylinder of 3.448 m diameter and 7 m length with a specially designed field-mapping pneumatic machine, as well as in 140 discrete regions of the CMS yoke with NMR probes, 3-D Hall sensors and flux loops. A TOSCA 3-D model of the CMS magnet has been developed to describe the magnetic field everywhere outside the tracking volume measured with the field-mapping machine. A volume-based representation of the magnetic field is used to provide the CMS simulation and reconstruction software with the magnetic field ...
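At its core, a volume-based field representation stores field samples on a grid and interpolates at query points. The following trilinear-interpolation sketch is a generic illustration (grid, unit spacing, and the analytic test field are invented, not CMS code):

```python
import numpy as np

def trilinear(grid, p):
    # Interpolate a scalar field sampled on a regular unit-spaced grid at point p.
    x, y, z = p
    i, j, k = int(x), int(y), int(z)
    fx, fy, fz = x - i, y - j, z - k
    val = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):          # weight each of the 8 surrounding samples
                w = ((fx if di else 1 - fx) *
                     (fy if dj else 1 - fy) *
                     (fz if dk else 1 - fz))
                val += w * grid[i + di, j + dj, k + dk]
    return val

# A linear test field B(x, y, z) = x + 2y + 3z is reproduced exactly.
g = np.fromfunction(lambda i, j, k: i + 2 * j + 3 * k, (4, 4, 4))
val = trilinear(g, (1.5, 2.25, 0.5))   # 1.5 + 4.5 + 1.5 = 7.5
```

A real detector field map stores a vector field over many sub-volumes with varying resolution, but each lookup reduces to this kind of local interpolation.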
Performance maps for the control of thermal energy storage
DEFF Research Database (Denmark)
Finck, Christian; Li, Rongling; Zeiler, Wim
2017-01-01
Predictive control in building energy systems requires the integration of the building, building system, and component dynamics. The prediction accuracy of these dynamics is crucial for practical applications. This paper introduces performance maps for the control of water tanks, phase change material tanks, and thermochemical material tanks. The results show that these performance maps can fully account for the dynamics of thermal energy storage tanks.
Insights into earthquake hazard map performance from shaking history simulations
Stein, S.; Vanneste, K.; Camelbeeck, T.; Vleminckx, B.
2017-12-01
Why recent large earthquakes caused shaking stronger than predicted by earthquake hazard maps is under debate. This issue has two parts. Verification involves how well maps implement probabilistic seismic hazard analysis (PSHA) ("have we built the map right?"). Validation asks how well maps forecast shaking ("have we built the right map?"). We explore how well a map can ideally perform by simulating an area's shaking history and comparing "observed" shaking to that predicted by a map generated for the same parameters. The simulations yield shaking distributions whose mean is consistent with the map, but individual shaking histories show large scatter. Infrequent large earthquakes cause shaking much stronger than mapped, as observed. Hence, PSHA seems internally consistent and can be regarded as verified. Validation is harder because an earthquake history can yield shaking higher or lower than that predicted while being consistent with the hazard map. The scatter decreases for longer observation times because the largest earthquakes and resulting shaking are increasingly likely to have occurred. For the same reason, scatter is much less for the more active plate boundary than for a continental interior. For a continental interior, where the mapped hazard is low, even an M4 event produces exceedances at some sites. Larger earthquakes produce exceedances at more sites. Thus many exceedances result from small earthquakes, but infrequent large ones may cause very large exceedances. However, for a plate boundary, an M6 event produces exceedance at only a few sites, and an M7 produces them in a larger, but still relatively small, portion of the study area. As reality gives only one history, and a real map involves assumptions about more complicated source geometries and occurrence rates, which are unlikely to be exactly correct and thus will contribute additional scatter, it is hard to assess whether misfit between actual shaking and a map — notably higher-than-mapped
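The simulation idea can be miniaturized as follows. This is a toy model with invented numbers: an exponential distribution stands in for peak shaking per "year", and the "mapped" value is chosen analytically at the 10% exceedance level per observation window; real PSHA uses magnitude-frequency relations and ground-motion models.

```python
import math
import random

# Toy shaking-history experiment: each 50-"year" history's peak shaking is the
# max of 50 exponential draws. Choose the mapped value so that exactly 10% of
# histories should exceed it, then check that rate by Monte Carlo.
random.seed(7)
n_years, p_exceed = 50, 0.10
# P(max <= m) = (1 - exp(-m))**n_years = 1 - p_exceed  =>  solve for m:
mapped = -math.log(1 - (1 - p_exceed) ** (1 / n_years))

n_hist = 20_000
exceed = sum(
    max(random.expovariate(1.0) for _ in range(n_years)) > mapped
    for _ in range(n_hist)
)
rate = exceed / n_hist              # close to 0.10 by construction
```

Even in this idealized setting individual histories scatter widely around the mapped value, which is the abstract's point: a correct ("verified") map still produces apparent surprises in any single history.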
Mapping strategy, structure, ownership and performance in European corporations : Introduction
Colli, A.; Iversen, M.J.; de Jong, A.
2011-01-01
This paper is the introduction to the Business History special issue on European Business Models. The volume presents results of the international project about mapping European corporations, within the strategy, structure, ownership and performance (SSOP) framework. The paper describes the
Susanti, Hesty; Suprijanto; Kurniadi, Deddy
2018-02-01
Needle visibility in ultrasound-guided techniques is a crucial factor for a successful interventional procedure. It is affected by several factors, i.e. puncture depth, insertion angle, needle size and material, and imaging technology. The influence of these factors means the needle is not always well visible. 20 G needles of 15 cm length (Nano Line, facet) were inserted into a water bath with variation of insertion angles and depths. Ultrasound measurements were performed with a BK-Medical Flex Focus 800 using a 12 MHz linear array and a 5 MHz curved array in Ultrasound Guided Regional Anesthesia mode. We propose 3 criteria to evaluate needle visibility, i.e. maximum intensity, mean intensity, and the ratio between minimum and maximum intensity. These criteria were then depicted in representative maps for practical purposes. The best criterion candidate for representing needle visibility was criterion 1. Generally, the appearance pattern of the needle from this criterion was relatively consistent, i.e. for the linear array, visibility was relatively poor in the middle part of the shaft, while for the curved array, the needle was relatively better visible toward the end of the shaft. With further investigations, for example with the use of a tissue-mimicking phantom, the representative maps can be built for future practical purposes, i.e. as a tool for clinicians to ensure better needle placement in clinical applications. This will help them to avoid the "dead" area where the needle is not well visible, so it can reduce the risk of traversing vital structures and the number of required insertions, resulting in less patient morbidity. These simple criteria and representative maps can be utilized to evaluate general visibility patterns of the needle for a vast range of needle types and sizes in different insertion media. This information is also important as an early investigation for future research on needle visibility improvement, i.e. the development of beamforming strategies and
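The three proposed criteria are simple image statistics. A sketch of how they might be computed on a needle region of interest follows; the ROI values are invented, and the dictionary layout is an assumption for illustration:

```python
import numpy as np

def visibility_criteria(roi):
    # roi: 2-D array of B-mode intensities covering the needle shaft.
    roi = np.asarray(roi, dtype=float)
    return {
        "max_intensity": float(roi.max()),              # criterion 1
        "mean_intensity": float(roi.mean()),            # criterion 2
        "min_max_ratio": float(roi.min() / roi.max()),  # criterion 3
    }

# hypothetical 2x3 ROI around a needle echo
roi = np.array([[40, 200, 180],
                [60, 220, 190]])
c = visibility_criteria(roi)
```

Evaluating these statistics over a grid of insertion angles and depths, then plotting each criterion as a surface, yields the kind of representative visibility map the abstract describes.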
Sensor-Motor Maps for Describing Linear Reflex Composition in Hopping.
Schumacher, Christian; Seyfarth, André
2017-01-01
In human and animal motor control several sensory organs contribute to a network of sensory pathways modulating the motion depending on the task and the phase of execution to generate daily motor tasks such as locomotion. To better understand the individual and joint contribution of reflex pathways in locomotor tasks, we developed a neuromuscular model that describes hopping movements. In this model, we consider the influence of proprioceptive length (LFB), velocity (VFB) and force feedback (FFB) pathways of a leg extensor muscle on hopping stability, performance and efficiency (metabolic effort). Therefore, we explore the space describing the blending of the monosynaptic reflex pathway gains. We call this reflex parameter space a sensor-motor map. The sensor-motor maps are used to visualize the functional contribution of sensory pathways in multisensory integration. We further evaluate the robustness of these sensor-motor maps to changes in tendon elasticity, body mass, segment length and ground compliance. The model predicted that different reflex pathway compositions selectively optimize specific hopping characteristics (e.g., performance and efficiency). Both FFB and LFB were pathways that enable hopping. FFB resulted in the largest hopping heights, LFB enhanced hopping efficiency and VFB had the ability to disable hopping. For the tested case, the topology of the sensor-motor maps as well as the location of functionally optimal compositions were invariant to changes in system designs (tendon elasticity, body mass, segment length) or environmental parameters (ground compliance). Our results indicate that different feedback pathway compositions may serve different functional roles. The topology of the sensor-motor map was predicted to be robust against changes in the mechanical system design indicating that the reflex system can use different morphological designs, which does not apply for most robotic systems (for which the control often follows a specific
Simulating the performance of a distance-3 surface code in a linear ion trap
Trout, Colin J.; Li, Muyuan; Gutiérrez, Mauricio; Wu, Yukai; Wang, Sheng-Tao; Duan, Luming; Brown, Kenneth R.
2018-04-01
We explore the feasibility of implementing a small surface code with 9 data qubits and 8 ancilla qubits, commonly referred to as surface-17, using a linear chain of 171Yb+ ions. Two-qubit gates can be performed between any two ions in the chain with gate time increasing linearly with ion distance. Measurement of the ion state by fluorescence requires that the ancilla qubits be physically separated from the data qubits to avoid errors on the data due to scattered photons. We minimize the time required to measure one round of stabilizers by optimizing the mapping of the two-dimensional surface code to the linear chain of ions. We develop a physically motivated Pauli error model that allows for fast simulation and captures the key sources of noise in an ion trap quantum computer including gate imperfections and ion heating. Our simulations showed a consistent requirement of a two-qubit gate fidelity of ≥99.9% for the logical memory to have a better fidelity than physical two-qubit operations. Finally, we perform an analysis of the error subsets from the importance sampling method used to bound the logical error rates to gain insight into which error sources are particularly detrimental to error correction.
MAPPING THE LINEARLY POLARIZED SPECTRAL LINE EMISSION AROUND THE EVOLVED STAR IRC+10216
Energy Technology Data Exchange (ETDEWEB)
Girart, J. M. [Institut de Ciencies de l' Espai, (CSIC-IEEC), Campus UAB, Facultat de Ciencies, C5p 2, 08193 Bellaterra, Catalunya (Spain); Patel, N. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Vlemmings, W. H. T. [Department of Earth and Space Sciences, Chalmers University of Technology, Onsala Space Observatory, SE-439 92 Onsala (Sweden); Rao, Ramprasad, E-mail: girart@ice.cat [Submillimeter Array, Academia Sinica Institute of Astronomy and Astrophysics, 645 N. Aohoku Place, Hilo, HI 96720 (United States)
2012-05-20
We present spectro-polarimetric observations of several molecular lines obtained with the Submillimeter Array toward the carbon-rich asymptotic giant branch star IRC+10216. We have detected and mapped the linear polarization of the CO 3-2, SiS 19-18, and CS 7-6 lines. The polarization arises at a distance of ≈450 AU from the star and is blueshifted with respect to the Stokes I. The SiS 19-18 polarization pattern appears to be consistent with a locally radial magnetic field configuration. However, the CO 3-2 and CS 7-6 line polarization suggests an overall complex magnetic field morphology within the envelope. This work demonstrates the feasibility of using spectro-polarimetric observations to carry out tomographic imaging of the magnetic field in circumstellar envelopes.
Theory and praxis of map analysis in CHEF part 1: Linear normal form
Energy Technology Data Exchange (ETDEWEB)
Michelotti, Leo; /Fermilab
2008-10-01
This memo begins a series which, put together, could comprise the 'CHEF Documentation Project' if there were such a thing. The first--and perhaps only--three will telegraphically describe theory, algorithms, implementation and usage of the normal form map analysis procedures encoded in CHEF's collection of libraries. [1] This one will begin the sequence by explaining the linear manipulations that connect the Jacobian matrix of a symplectic mapping to its normal form. It is a 'Reader's Digest' version of material I wrote in Intermediate Classical Dynamics (ICD) [2] and randomly scattered across technical memos, seminar viewgraphs, and lecture notes for the past quarter century. Much of its content is old, well known, and in some places borders on the trivial. Nevertheless, completeness requires their inclusion. The primary objective is the 'fundamental theorem' on normalization written on page 8. I plan to describe the nonlinear procedures in a subsequent memo and devote a third to laying out algorithms and lines of code, connecting them with equations written in the first two. Originally this was to be done in one short paper, but I jettisoned that approach after its first section exceeded a dozen pages. The organization of this document is as follows. A brief description of notation is followed by a section containing a general treatment of the linear problem. After the 'fundamental theorem' is proved, two further subsections discuss the generation of equilibrium distributions and the issue of 'phase'. The final major section reviews parameterizations--that is, lattice functions--in two and four dimensions with a passing glance at the six-dimensional version. Appearances to the contrary, for the most part I have tried to restrict consideration to matters needed to understand the code in CHEF's libraries.
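The linear analysis the memo describes starts from the Jacobian of the one-turn map. In the simplest stable 2x2 symplectic case the normal form is a pure rotation and the tune follows directly from the trace, cos(2*pi*nu) = Tr(M)/2. A textbook miniature, not CHEF code:

```python
import math

def tune(m11, m12, m21, m22):
    # For a stable 2x2 symplectic one-turn matrix M (det M = 1, |Tr M| < 2),
    # the normal form is a rotation by the phase advance 2*pi*nu.
    assert abs(m11 * m22 - m12 * m21 - 1.0) < 1e-9   # symplectic condition
    return math.acos((m11 + m22) / 2.0) / (2.0 * math.pi)

# A rotation by 2*pi*0.25 is already in normal form; its tune is 0.25.
th = 2.0 * math.pi * 0.25
nu = tune(math.cos(th), math.sin(th), -math.sin(th), math.cos(th))
```

For a general stable matrix the same trace formula gives the tune, while the similarity transform that rotates M into this form carries the lattice functions (beta, alpha) the memo goes on to parameterize.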
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2011-01-01
We investigate sparse non-linear denoising of functional brain images by kernel Principal Component Analysis (kernel PCA). The main challenge is the mapping of denoised feature space points back into input space, also referred to as "the pre-image problem". Since the feature space mapping is typi...
An examination of medical linear accelerator ion-chamber performance
International Nuclear Information System (INIS)
Karolis, C.; Lee, C.; Rinks, A.
1996-01-01
Full text: The company (Radiation Oncology Physics and Engineering Services Pty Ltd) provides medical physics services to four radiotherapy centres in NSW with a total of 6 high energy medical linear accelerators manufactured by three different companies. As part of the services, the stability of the accelerator ion chamber system is regularly examined for constancy and periodically for absolute calibration. Each accelerator ion chamber has exhibited undesirable behaviour from time to time, sometimes leading to its replacement. This presentation describes the performance of the ion chambers for some of the linacs over a period of 12-18 months and the steps taken by the manufacturer to address the problems encountered. As part of our commissioning procedure for new linacs, an absolute calibration of the accelerator output (photon and electron beams) is repeated several times over the period following examination of the physical properties of the radiation beams. These calibrations were undertaken in water using the group's calibrated ion chamber/electrometer system and were accompanied by constancy checks using an acrylic phantom and field instruments. Constancy checks were performed daily for a period of 8 weeks during the initial life of the accelerator and thereafter weekly. For one accelerator, the ion chamber was replaced 6 times in the first eighteen months of its life due to severe drifts in output, found to be due to pressure changes in one half of the chamber. In another accelerator, erratic swings of 2% were observed for a period of nine months, particularly with the electron beams, before the manufacturer offered to change the chamber with another constructed from different materials. In yet another accelerator the ion chamber has shown consistent erratic behaviour, but this has not been addressed by the manufacturer. In another popular accelerator, the dosimetry was found to be very stable until some changes in the tuning were introduced resulting in small
Spatiotemporal chaos in mixed linear-nonlinear two-dimensional coupled logistic map lattice
Zhang, Ying-Qian; He, Yi; Wang, Xing-Yuan
2018-01-01
We investigate a new spatiotemporal dynamics with mixed degrees of nonlinear chaotic maps for the spatial coupling connections, based on the 2DCML. Here, the coupling methods include linear neighborhood coupling and nonlinear chaotic map coupling of lattices, and the former 2DCML system is only a special case of the proposed system. In this paper, criteria such as the Kolmogorov-Sinai entropy density and universality, bifurcation diagrams, space-amplitude and snapshot pattern diagrams are provided in order to investigate the chaotic behaviors of the proposed system. Furthermore, we also investigate the parameter ranges over which the proposed system retains those features, in comparison with those of the 2DCML system and the MLNCML system. Theoretical analysis and computer simulation indicate that the proposed system has a higher percentage of lattices in chaotic behavior for most parameters, fewer periodic windows in bifurcation diagrams and a larger range of parameters with chaotic behavior, which makes it more suitable for cryptography.
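The baseline 2DCML that the proposed system generalises can be written in a few lines: each site applies a local logistic map, then mixes linearly (diffusively) with its four nearest neighbours. This is the standard coupled-map-lattice update; lattice size, seed, and parameter values below are arbitrary:

```python
import numpy as np

def cml_step(x, r=4.0, eps=0.1):
    # One synchronous update of a 2D coupled logistic map lattice with linear
    # nearest-neighbour coupling and periodic boundaries:
    #   x' = (1 - eps) * f(x) + (eps / 4) * sum of f at the 4 neighbours,
    # where f is the logistic map f(x) = r * x * (1 - x).
    f = r * x * (1.0 - x)
    neigh = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
             np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    return (1.0 - eps) * f + eps * neigh

rng = np.random.default_rng(3)
x = rng.random((16, 16))
for _ in range(100):
    x = cml_step(x)          # state stays in [0, 1] for r = 4
```

The paper's variant replaces the linear `neigh` term with a nonlinear chaotic-map coupling; since each update here is a convex combination of logistic-map outputs, the state remains confined to the unit interval.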
Energy Technology Data Exchange (ETDEWEB)
Ureba, A. [Dpto. Fisiología Médica y Biofísica. Facultad de Medicina, Universidad de Sevilla, E-41009 Sevilla (Spain); Salguero, F. J. [Nederlands Kanker Instituut, Antoni van Leeuwenhoek Ziekenhuis, 1066 CX Ámsterdam, The Nederlands (Netherlands); Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Leal, A., E-mail: alplaza@us.es [Dpto. Fisiología Médica y Biofísica, Facultad de Medicina, Universidad de Sevilla, E-41009 Sevilla (Spain); Miras, H. [Servicio de Radiofísica, Hospital Universitario Virgen Macarena, E-41009 Sevilla (Spain); Linares, R.; Perucha, M. [Servicio de Radiofísica, Hospital Infanta Luisa, E-41010 Sevilla (Spain)
2014-08-15
irradiation case (Case II) solved with photon and electron modulated beams (IMRT + MERT); and a prostatic bed case (Case III) with a pronounced concave-shaped PTV by using volumetric modulated arc therapy. In the three cases, the required target prescription doses and constraints on organs at risk were fulfilled in a short enough time to allow routine clinical implementation. The quality assurance protocol followed to check CARMEN system showed a high agreement with the experimental measurements. Conclusions: A Monte Carlo treatment planning model exclusively based on maps performed from patient imaging data has been presented. The sequencing of these maps allows obtaining deliverable apertures which are weighted for modulation under a linear programming formulation. The model is able to solve complex radiotherapy treatments with high accuracy in an efficient computation time.
Stirling convertor performance mapping test results
Qiu, Songgang; Peterson, Allen A.; White, Maurice A.; Faultersack, Franklyn; Redinger, Darin L.; Petersen, Stephen L.
2002-01-01
The Department of Energy (DOE) has selected Free-Piston Stirling Convertors as a technology for future advanced radioisotope space power systems. In August 2000, DOE awarded competitive Phase I, Stirling Radioisotope Generator (SRG) power system integration contracts to three major aerospace contractors, resulting in SRG conceptual designs in February 2001. All three contractors based their designs on the Technology Demonstration Convertor (TDC) developed by Stirling Technology Company (STC) for DOE. The contract award to a single system integration contractor for Phases II and III of the SRG program is anticipated in late 2001. The first potential SRG mission is targeted for a Mars rover. Recent TDC performance data are provided in this paper, together with predictions from Stirling simulation models.
A new active absorption system and its performance to linear and non-linear waves
DEFF Research Database (Denmark)
Andersen, Thomas Lykke; Clavero, M.; Frigaard, Peter Bak
2016-01-01
Highlights:
• An active absorption system for wavemakers has been developed.
• The theory for flush mounted gauges has been extended to cover also small gaps.
• The new system has been validated in a wave flume with wavemakers in both ends.
• A generation and absorption procedure for highly non-linear...
Tom, C. H.; Miller, L. D.
1984-01-01
The Bayesian maximum likelihood parametric classifier has been tested against the data-based formulation designated 'linear discriminant analysis', using the 'GLIKE' decision and 'CLASSIFY' classification algorithms in the Landsat Mapping System. Identical supervised training sets, USGS land use/land cover classes, and various combinations of Landsat image and ancillary geodata variables were used to compare the algorithms' thematic mapping accuracy on a single-date summer subscene, with a cellularized USGS land use map of the same time frame furnishing the ground truth reference. CLASSIFY, which accepts a priori class probabilities, is found to be more accurate than GLIKE, which assumes equal class occurrences, for all three mapping variable sets and both levels of detail. These results may be generalized to direct accuracy, time, cost, and flexibility advantages of linear discriminant analysis over Bayesian methods.
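The operative difference between GLIKE (equal class occurrences) and CLASSIFY (a priori class probabilities) can be sketched as a one-band Gaussian maximum-likelihood decision rule; the class statistics, priors, and class names below are invented for illustration:

```python
import math

# Hypothetical per-band class statistics (mean, std) for two land-cover classes.
classes = {"urban": (80.0, 10.0), "forest": (60.0, 12.0)}
priors = {"urban": 0.1, "forest": 0.9}  # a priori class occurrence rates

def log_gauss(x, mu, sigma):
    # Log of the Gaussian likelihood of observing pixel value x under a class.
    return -math.log(sigma * math.sqrt(2 * math.pi)) - (x - mu) ** 2 / (2 * sigma ** 2)

def classify(pixel, use_priors):
    # GLIKE-style: equal priors (drop the prior term).
    # CLASSIFY-style: add the log prior to the log likelihood.
    def score(name):
        mu, sigma = classes[name]
        s = log_gauss(pixel, mu, sigma)
        return s + (math.log(priors[name]) if use_priors else 0.0)
    return max(classes, key=score)

pixel = 71.0
print(classify(pixel, use_priors=False))  # equal-prior decision: urban
print(classify(pixel, use_priors=True))   # prior-weighted decision: forest
```

For a pixel between the two class means, the prior term can flip the decision, which is exactly why the two algorithms produce different accuracies on the same training sets.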
Sensor-Motor Maps for Describing Linear Reflex Composition in Hopping
Directory of Open Access Journals (Sweden)
Christian Schumacher
2017-11-01
In human and animal motor control several sensory organs contribute to a network of sensory pathways modulating the motion depending on the task and the phase of execution to generate daily motor tasks such as locomotion. To better understand the individual and joint contribution of reflex pathways in locomotor tasks, we developed a neuromuscular model that describes hopping movements. In this model, we consider the influence of proprioceptive length (LFB), velocity (VFB), and force feedback (FFB) pathways of a leg extensor muscle on hopping stability, performance and efficiency (metabolic effort). Therefore, we explore the space describing the blending of the monosynaptic reflex pathway gains. We call this reflex parameter space a sensor-motor map. The sensor-motor maps are used to visualize the functional contribution of sensory pathways in multisensory integration. We further evaluate the robustness of these sensor-motor maps to changes in tendon elasticity, body mass, segment length and ground compliance. The model predicted that different reflex pathway compositions selectively optimize specific hopping characteristics (e.g., performance and efficiency). Both FFB and LFB were pathways that enable hopping. FFB resulted in the largest hopping heights, LFB enhanced hopping efficiency and VFB had the ability to disable hopping. For the tested case, the topology of the sensor-motor maps as well as the location of functionally optimal compositions were invariant to changes in system design (tendon elasticity, body mass, segment length) or environmental parameters (ground compliance). Our results indicate that different feedback pathway compositions may serve different functional roles. The topology of the sensor-motor map was predicted to be robust against changes in the mechanical system design indicating that the reflex system can use different morphological designs, which does not apply for most robotic systems (for which the control often follows a
Alconis, Jenalyn; Eco, Rodrigo; Mahar Francisco Lagmay, Alfredo; Lester Saddi, Ivan; Mongaya, Candeze; Figueroa, Kathleen Gay
2014-05-01
In response to the slew of disasters that devastate the Philippines on a regular basis, the national government put in place a program to address this problem. The Nationwide Operational Assessment of Hazards, or Project NOAH, consolidates the diverse scientific research being done and pushes the knowledge gained to the forefront of disaster risk reduction and management. Current activities of the project include installing rain gauges and water level sensors, conducting LIDAR surveys of critical river basins, geo-hazard mapping, and running information education campaigns. Approximately 700 automated weather stations and rain gauges installed in strategic locations in the Philippines lay the groundwork for the rainfall visualization system in the Project NOAH web portal at http://noah.dost.gov.ph. The system uses near real-time data from these stations installed in critical river basins. The sensors record the amount of rainfall in a particular area as point data updated every 10 to 15 minutes. Each sensor sends its data to a central server either via the GSM network or satellite data transfer for redundancy. The web portal displays the sensors as a placemarks layer on a map. When a placemark is clicked, it displays a graph of the rainfall data for the past 24 hours. The rainfall data is harvested in batches determined by a one-hour time frame. The program uses linear interpolation to visually represent a near real-time rainfall map. The algorithm allows very fast processing, which is essential in near real-time systems. As more sensors are installed, precision improves. This visualized dataset enables users to quickly discern where heavy rainfall is concentrated. It has proven invaluable on numerous occasions, such as in August 2013 when intense to torrential rains brought about by the enhanced Southwest Monsoon caused massive flooding in Metro Manila. Coupled with observations from Doppler imagery and water level sensors along the
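The linear interpolation step the abstract mentions can be sketched as follows; the timestamps and rainfall values are hypothetical, and a production system would interpolate spatially between stations as well:

```python
def lerp(t, t0, v0, t1, v1):
    """Linearly interpolate a rainfall reading at time t between two
    sensor batches (t0, v0) and (t1, v1)."""
    if t1 == t0:
        return v0
    w = (t - t0) / (t1 - t0)
    return v0 + w * (v1 - v0)

# Hypothetical 10-minute readings (minutes, mm of rain) bracketing a query time.
print(lerp(25, 20, 4.0, 30, 6.0))  # -> 5.0
```

Because the formula is a single multiply-add per query point, it scales to hundreds of stations updating every 10 to 15 minutes, which is the "very fast processing" property the abstract highlights.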
Comparison of BiLinearly Interpolated Subpixel Sensitivity Mapping and Pixel-Level Decorrelation
Challener, Ryan C.; Harrington, Joseph; Cubillos, Patricio; Foster, Andrew S.; Deming, Drake; WASP Consortium
2016-10-01
Exoplanet eclipse signals are weaker than the systematics present in the Spitzer Space Telescope's Infrared Array Camera (IRAC), and thus the correction method can significantly impact a measurement. BiLinearly Interpolated Subpixel Sensitivity (BLISS) mapping calculates the sensitivity of the detector on a subpixel grid and corrects the photometry for any sensitivity variations. Pixel-Level Decorrelation (PLD) removes the sensitivity variations by considering the relative intensities of the pixels around the source. We applied both methods to WASP-29b, a Saturn-sized planet with a mass of 0.24 ± 0.02 Jupiter masses and a radius of 0.84 ± 0.06 Jupiter radii, which we observed during eclipse twice in the 3.6 µm channel and once in the 4.5 µm channel of IRAC aboard Spitzer in 2010 and 2011 (programs 60003 and 70084, respectively). We compared the results of BLISS and PLD, and comment on each method's ability to remove time-correlated noise. WASP-29b exhibits a strong detection at 3.6 µm and no detection at 4.5 µm. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.
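At the core of BLISS is bilinear interpolation of a sensitivity value at the fractional centroid position from the four surrounding subpixel grid knots. A minimal sketch (the toy grid values are invented, not IRAC data):

```python
import numpy as np

def bliss_eval(grid, y, x):
    """Bilinearly interpolate a subpixel sensitivity grid at fractional (y, x).
    Assumes (y, x) lies strictly inside the grid so all four knots exist."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * grid[y0, x0]
            + (1 - dy) * dx * grid[y0, x0 + 1]
            + dy * (1 - dx) * grid[y0 + 1, x0]
            + dy * dx * grid[y0 + 1, x0 + 1])

# Toy 2x2 sensitivity map; the centre interpolates to the mean of the knots.
sens = np.array([[1.00, 1.02], [0.98, 1.04]])
print(bliss_eval(sens, 0.5, 0.5))  # -> approximately 1.01
```

Dividing the raw photometry by the interpolated sensitivity at each frame's centroid is what removes the intrapixel systematic before the eclipse depth is fit.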
Linear Maps on Upper Triangular Matrices Spaces Preserving Idempotent Tensor Products
Directory of Open Access Journals (Sweden)
Li Yang
2014-01-01
Suppose m, n ≥ 2 are positive integers. Let T_n be the space of all n×n complex upper triangular matrices, and let ϕ be an injective linear map on T_m⊗T_n. Then ϕ(A⊗B) is an idempotent matrix in T_m⊗T_n whenever A⊗B is an idempotent matrix in T_m⊗T_n if and only if there exists an invertible matrix P ∈ T_m⊗T_n such that ϕ(A⊗B) = P(ξ1(A)⊗ξ2(B))P⁻¹ for all A ∈ T_m, B ∈ T_n, or, when m = n, ϕ(A⊗B) = P(ξ1(B)⊗ξ2(A))P⁻¹ for all A, B ∈ T_m, where ξ1([a_ij]) = [a_ij] or ξ1([a_ij]) = [a_{m-i+1, m-j+1}], and ξ2([b_ij]) = [b_ij] or ξ2([b_ij]) = [b_{n-i+1, n-j+1}].
Noise destroys the coexistence of periodic orbits of a piecewise linear map
International Nuclear Information System (INIS)
Wang Can-Jun; Yang Ke-Li; Qu Shi-Xian
2013-01-01
The effects of Gaussian white noise and Gaussian colored noise on the periodic orbits of period-5 (P-5) and period-6 (P-6) in their coexisting domain of a piecewise linear map are investigated numerically. The probability densities of some orbits are calculated. When the noise intensity is D = 0.0001, only the orbits of P-5 exist, and the coexisting phenomenon is destroyed. On the other hand, the self-correlation time τ of the colored noise also affects the coexisting phenomenon. When τ < τ_c, only the orbits of P-5 appear, and the stability of the orbits of P-5 is enhanced. However, when τ > τ′_c (τ_c and τ′_c are critical values), only the orbits of P-6 exist, and the stability of the P-6 orbits is enhanced greatly. When τ_c < τ < τ′_c, the orbits of P-5 and P-6 coexist, but the stability of the P-5 orbits is enhanced and that of P-6 is weakened with increasing τ.
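The numerical experiment behind such results can be sketched as iterating a piecewise linear map under additive Gaussian white noise. The paper's specific map and parameters are not given in the abstract, so a generic tent-style map stands in:

```python
import random

def tent(x, a=1.2):
    # A generic piecewise linear map; the paper's exact map is not reproduced here.
    return a * x if x < 0.5 else a * (1.0 - x)

def iterate(x0, n, noise=0.0, seed=0):
    """Iterate the map n times with additive Gaussian white noise of
    standard deviation `noise`, clamping the state to the unit interval."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n):
        x = tent(x) + rng.gauss(0.0, noise)
        x = min(max(x, 0.0), 1.0)
    return x

orbit_clean = iterate(0.3, 100)
orbit_noisy = iterate(0.3, 100, noise=0.01)
```

Collecting many such trajectories into histograms yields the probability densities of the orbits that the paper examines; raising the noise intensity then shows which of the coexisting periodic attractors survives.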
Kia, Seyed Mostafa; Vega Pons, Sandro; Weisz, Nathan; Passerini, Andrea
2016-01-01
Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms
Directory of Open Access Journals (Sweden)
Zhuo Zhao
Staphylococcal enterotoxin B (SEB) is one of the most potent Staphylococcus aureus exotoxins (SEs). Due to its conserved sequence and stable structure, SEB might be a good candidate antigen for MRSA vaccines. Although cellular immune responses to SEB are well-characterized, much less is known regarding SEB-specific humoral immune responses, particularly regarding detailed epitope mapping. In this study, we utilized a recombinant nontoxic mutant of SEB (rSEB) and an AlPO4 adjuvant to immunize BALB/c mice and confirmed that rSEB can induce a high antibody level and effective immune protection against MRSA infection. Next, the antisera of immunized mice were collected, and linear B cell epitopes within SEB were finely mapped using a series of overlapping synthetic peptides. Three immunodominant B cell epitopes of SEB were screened by ELISA, including a novel epitope, SEB205-222, and two known epitopes, SEB97-114 and SEB247-261. Using truncated peptides, an ELISA was performed with peptide-KLH antisera, and the core sequences of the three immunodominant B cell epitopes were verified as SEB97-112, SEB207-222, and SEB247-257. In vitro, all of the immunodominant epitope-specific antisera (anti-SEB97-112, anti-SEB207-222 and anti-SEB247-257) were observed to inhibit SEB-induced T cell mitogenesis and cytokine production from splenic lymphocytes of BALB/c mice. The homology analysis indicated that SEB97-112 and SEB207-222 were well-conserved among different Staphylococcus aureus strains. The 3D crystal structure of SEB indicated that SEB97-112 was in the loop region inside SEB, whereas SEB207-222 and SEB247-257 were in the β-slice region outside SEB. In summary, the fine mapping of linear B-cell epitopes of the SEB antigen in this study will be useful for further understanding anti-SEB immunity against MRSA infection and will be helpful in optimizing MRSA vaccine designs based on the SEB antigen.
MAPS evaluation report and procedures governing interviews and performance appraisals
HR Department
2006-01-01
Following various improvements to the MAPS report and to the procedures governing interviews and performance appraisals (announced in the CERN Bulletin 48-49/2005), a third information session has been organized for all staff members on Tuesday, 31 January at 10 a.m.: AB Auditorium P (864-1-D02), Human Resources Department Tel. 73566
Systematic mapping review on student's performance analysis using ...
African Journals Online (AJOL)
This paper classifies the various existing predictive models that are used for monitoring and improving students' performance at schools and higher learning institutions. It analyses all the areas within the educational data mining methodology. Two databases were chosen for this study and a systematic mapping study was ...
How Concept-Mapping Perception Navigates Student Knowledge Transfer Performance
Tseng, Kuo-Hung; Chang, Chi-Cheng; Lou, Shi-Jer; Tan, Yue; Chiu, Chien-Jung
2012-01-01
The purpose of this paper is to investigate students' perception of concept maps as a learning tool where knowledge transfer is the goal. This article includes an evaluation of the learning performance of 42 undergraduate students enrolled in a nanotech course at a university in Taiwan. Canonical correlation and MANOVA analyses were employed to…
Comparing the performance of various digital soil mapping approaches to map physical soil properties
Laborczi, Annamária; Takács, Katalin; Pásztor, László
2015-04-01
digital soil mapping methods and sets of ancillary variables for producing the most accurate spatial prediction of texture classes in a given area of interest. Both legacy and recently collected data on PSD were used as reference information. The predictor variable data set consisted of a digital elevation model and its derivatives, lithology, land use maps, as well as various bands and indices of satellite images. Two conceptually different approaches can be applied in the mapping process. Textural classification can be carried out after the particle size data have been spatially extended by a proper geostatistical method. Alternatively, the textural classification is carried out first, followed by the spatial extension through a suitable data mining method. According to the first approach, maps of sand, silt and clay percentages have been computed through regression kriging (RK). Since the three maps are compositional (their sum must be 100%), we applied the Additive Log-Ratio (alr) transformation, instead of kriging them independently. Finally, the texture class map has been compiled according to the USDA categories from the three maps. Different combinations of reference and training soil data and auxiliary covariables resulted in several different maps. Following the second approach, the PSD data were first classified into the USDA categories, then the texture class maps were compiled directly by data mining methods (classification trees and random forests). The various results were compared to each other as well as to the RK maps. The performance of the different methods and data sets has been examined by testing the accuracy of the geostatistically computed and the directly classified results to identify the most predictive and accurate method. Acknowledgement: Our work was supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
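The additive log-ratio step can be sketched as follows: transform the sand/silt/clay composition, krige (or otherwise interpolate) in alr space, then back-transform so the components again sum to 100%. The kriging itself is omitted here; only the transform pair is shown:

```python
import math

def alr(sand, silt, clay):
    """Additive log-ratio transform of a closed composition,
    using clay as the denominator component."""
    return math.log(sand / clay), math.log(silt / clay)

def alr_inv(u, v):
    """Back-transform alr coordinates (e.g., after kriging) to
    percentages that sum to 100."""
    es, ei = math.exp(u), math.exp(v)
    total = es + ei + 1.0
    return 100 * es / total, 100 * ei / total, 100 / total

u, v = alr(40.0, 40.0, 20.0)
print([round(p, 1) for p in alr_inv(u, v)])  # -> [40.0, 40.0, 20.0]
```

Interpolating in alr space rather than kriging the three percentages independently guarantees that every predicted composition is valid, which is why the study applies it before compiling the USDA texture class map.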
Proficient brain for optimal performance: the MAP model perspective.
Bertollo, Maurizio; di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio
2016-01-01
Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the "neural efficiency hypothesis." We also observed more ERD as related to optimal-controlled performance in conditions of "neural adaptability" and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques.
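The ERD/ERS percentages used as dependent variables are conventionally computed as the relative band-power change against a reference interval (the abstract does not spell out the formula, so this is a sketch of the standard convention):

```python
def erd_ers_percent(baseline_power, task_power):
    """ERD/ERS as percentage band-power change relative to a reference
    (baseline) interval. Negative values indicate desynchronization (ERD),
    positive values synchronization (ERS)."""
    return (task_power - baseline_power) / baseline_power * 100.0

print(erd_ers_percent(10.0, 5.0))   # -> -50.0 (ERD: power drop during task)
print(erd_ers_percent(10.0, 15.0))  # -> 50.0 (ERS: power increase)
```

Computing this quantity per frequency band (theta, low alpha, high alpha) and per electrode over the three pre-shot seconds yields exactly the dependent variables entered into the 4 × 3 repeated measures ANOVA described above.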
New modified map for digital image encryption and its performance
Suryadi, MT; Yus Trinity Irsan, Maria; Satria, Yudi
2017-10-01
Protection of classified digital data is important to avoid data manipulation and alteration. The focus of this paper is the protection of data and information in digital image form. Protection is provided in the form of an encrypted digital image. The encryption process uses a new map, x_{n+1} = rλx_n / (1 + λ(1 − x_n)²) (mod 1), which is called the MS map. This paper will show: the results of digital image encryption using the MS map and its performance regarding the average time needed for the encryption/decryption process; randomness of the key stream sequence with the NIST test, histogram analysis and goodness-of-fit test; quality of the decrypted image by PSNR; initial value sensitivity level; and key space. The results show that the average time of the encryption process is approximately the same as that of the decryption process, and that it depends on the type and size of the image. The cipherimage (encrypted image) is uniformly distributed, since it passes the goodness-of-fit test and its histogram is flat; the key stream generated by the MS map passes the frequency (monobit) test and the runs test, which means the key stream is a random sequence; the decrypted image has the same quality as the original image; initial value sensitivity reaches 10⁻¹⁷, and the key space reaches 3.24 × 10⁶³⁴. Thus, the encryption algorithm generated by the MS map is more resistant to brute-force and known-plaintext attacks.
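A sketch of iterating the MS map from the abstract to derive a key stream; the parameter values r and λ, the seed, and the byte quantization below are illustrative choices, not taken from the paper:

```python
def ms_map(x, r=3.9, lam=4.0):
    """One iteration of the MS map from the abstract:
    x_{n+1} = r*lam*x_n / (1 + lam*(1 - x_n)**2)  (mod 1).
    r and lam are illustrative parameter values."""
    return ((r * lam * x) / (1.0 + lam * (1.0 - x) ** 2)) % 1.0

def keystream(x0, n):
    """Derive n key-stream bytes from the map trajectory
    (toy quantization of the state to one byte per iteration)."""
    out, x = [], x0
    for _ in range(n):
        x = ms_map(x)
        out.append(int(x * 256) % 256)
    return out

ks = keystream(0.123456789, 8)
```

XOR-ing such a key stream with the image bytes gives the cipherimage; the reported 10⁻¹⁷ initial-value sensitivity means two seeds differing at the 17th decimal place diverge into unrelated key streams after enough iterations.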
Performance Evaluation of Java Based Object Relational Mapping Tools
Directory of Open Access Journals (Sweden)
Shoaib Mahmood Bhatti
2013-04-01
Object persistency is a hot issue in the form of ORM (Object Relational Mapping) tools in industry, as developers use these tools during software development. This paper presents a performance evaluation of Java-based ORM tools. For this purpose, Hibernate, Ebean and TopLink have been selected as ORM tools that are popular and open source. Their performance has been measured from the execution point of view. The results show that ORM tools are a good option for developers considering system throughput with short setbacks, and that they can be used efficiently and effectively for mapping objects into the relationally dominated world of databases, thus creating hope for a better future for this technology.
2-D response mapping of multi-linear silicon drift detectors
International Nuclear Information System (INIS)
Castoldi, A.; Guazzoni, C.; Hartmann, R.; Mezza, D.; Strueder, L.; Tassan Garofolo, F.
2010-01-01
Multi-linear silicon drift detectors (MLSDDs) are good candidates to fulfill simultaneous requirements for 2-D position-sensing and spectroscopy applications. The optimization of their design and performance as 2-D imagers requires a detailed study of timing properties of the charge cloud in the MLSDD architecture. In particular it is important to experimentally determine the dependence of the measured amplitude and time-of-arrival on the photon position of interaction so as to derive the 2D detector response. In this paper we will present a detailed experimental characterization aimed at measuring the detector amplitude response and its timing response. The dependence of charge cloud drift time on precise position of interaction has been measured as a function of detector biasing conditions.
Linear Motion Systems. A Modular Approach for Improved Straightness Performance
Nijsse, G.J.P.
2001-01-01
This thesis deals with straight motion systems. A modular approach has been applied in order to find ways to improve the performance. The main performance parameters that are considered are position accuracy, repeatability and, to a lesser extent, cost. Because of the increasing requirements to
Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan
2017-08-28
The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical
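The feature-contribution idea behind the substructural heat maps can be illustrated with a hand-rolled stand-in: a "forest" of decision stumps whose per-feature contribution to a prediction is the shift that feature causes along each decision path. This is a simplified sketch of the general technique, not the rfFC implementation, and the stumps and descriptor values are invented:

```python
# Each stump: (feature index, threshold, left value, right value).
forest = [(0, 0.5, 0.0, 1.0), (1, 0.3, 1.0, 0.0), (0, 0.7, 0.0, 1.0)]

def stump_predict_with_contrib(stump, x):
    feat, thr, left, right = stump
    pred = left if x[feat] <= thr else right
    base = (left + right) / 2.0   # value before the split is resolved
    return pred, {feat: pred - base}

def predict(x):
    """Average the stump predictions and accumulate per-feature contributions."""
    contrib = {0: 0.0, 1: 0.0}
    total = 0.0
    for stump in forest:
        p, c = stump_predict_with_contrib(stump, x)
        total += p
        for k, v in c.items():
            contrib[k] += v
    n = len(forest)
    return total / n, {k: v / n for k, v in contrib.items()}

pred, contrib = predict({0: 0.9, 1: 0.1})  # pred = 1.0; feature 0 dominates
```

Mapping such contributions back onto the substructures that generated the descriptors is what produces the heat map images whose chemical plausibility the article scores.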
Performances of One-Round Walks in Linear Congestion Games
Bilò, Vittorio; Fanelli, Angelo; Flammini, Michele; Moscardelli, Luca
We investigate the approximation ratio of the solutions achieved after a one-round walk in linear congestion games. We consider the social functions Sum, defined as the sum of the players' costs, and Max, defined as the maximum cost per player, as a measure of the quality of a given solution. For the social function Sum and one-round walks starting from the empty strategy profile, we close the gap between the upper bound of 2+√5 ≈ 4.24 given in [8] and the lower bound of 4 derived in [4] by providing a matching lower bound whose construction and analysis require non-trivial arguments. For the social function Max, for which, to the best of our knowledge, no results were known prior to this work, we show an approximation ratio of Θ(n^{3/4}) (resp. Θ(n√n)), where n is the number of players, for one-round walks starting from the empty (resp. an arbitrary) strategy profile.
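A one-round walk from the empty profile can be sketched on a toy linear congestion game with identity latency functions (latency of a resource equals its load); the instance below is invented for illustration:

```python
# Each player's strategy set is a list of resource sets it may choose.
strategies = [
    [{"a"}, {"b"}],   # player 0 may use resource a or b
    [{"a"}, {"b"}],   # player 1 likewise
    [{"a", "b"}],     # player 2 must use both
]

def cost(choice, load):
    # Cost the player would pay after joining each resource in `choice`,
    # with identity latency: latency(r) = load(r).
    return sum(load.get(r, 0) + 1 for r in choice)

# One-round walk: each player best-responds once, in order, to the
# choices of its predecessors, starting from the empty profile.
load, profile = {}, []
for options in strategies:
    best = min(options, key=lambda s: cost(s, load))
    profile.append(best)
    for r in best:
        load[r] = load.get(r, 0) + 1

# Social function Sum: total cost over players at the final profile.
social_sum = sum(sum(load[r] for r in s) for s in profile)
```

On this instance the walk happens to reach the social optimum (Sum = 8); the paper's lower-bound constructions are instances engineered so that the same greedy process ends up a constant factor (up to 2+√5) away from it.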
MAPS evaluation report and procedures governing interviews and performance appraisals
HR Department
2006-01-01
Following various improvements made to the MAPS report and to the procedures governing interviews and performance appraisals (announced in the CERN Weekly Bulletin 48-49/2005), three information sessions have been organized for all staff members: 24 January 10:00: AB Auditorium P (864-1-D02), 26 January 14:00: Main Amphitheatre, 31 January 10:00: AB Auditorium P (864-1-D02). Human Resources Department Tel. 73566
Directory of Open Access Journals (Sweden)
Wei Chen
2017-11-01
A landslide susceptibility map plays an essential role in urban and rural planning. The main purpose of this study is to establish a variable-weighted linear combination model (VWLC) and assess its potential for landslide susceptibility mapping. Firstly, different objective methods are employed for data processing rather than the frequently-used subjective judgments: K-means clustering is used for classification; binarization is introduced to determine buffer length thresholds for locational elements (road, river, and fault); landslide area density is adopted as the contribution index; and a correlation analysis is conducted for suitable factor selection. Secondly, considering the dimension changes of the preference matrix varying with the different locations of the mapping cells, the variable weights of each optimal factor are determined based on the improved analytic hierarchy process (AHP). On this basis, the VWLC model is established and applied to regional landslide susceptibility mapping for the Shennongjia Forestry District, China, where shallow landslides frequently occur. The obtained map is then compared with a map using the traditional WLC, and the results of the comparison show that VWLC is more reasonable, with a higher accuracy, and can be used anywhere that has the same or similar geological and topographical conditions.
Mapping the Conjugate Gradient Algorithm onto High Performance Heterogeneous Computers
2014-05-01
Recoverable fragments from the report front matter: Paige, C. C. and Saunders, M. A. Solution of sparse indefinite systems of linear equations. SIAM Journal on Numerical Analysis 12(4), 617–629. Parker, M. (2009). Taking advantage… "…thus, final implementations were nearly always performed using fixed-point or integer arithmetic (Parker 2009). With the recent…"
Design and performance testing of an ultrasonic linear motor with dual piezoelectric actuators.
Smithmaitrie, Pruittikorn; Suybangdum, Panumas; Laoratanakul, Pitak; Muensit, Nantakan
2012-05-01
In this work, design and performance testing of an ultrasonic linear motor with dual piezoelectric actuator patches are studied. The motor system consists of a linear stator, a pre-load weight, and two piezoelectric actuator patches. The piezoelectric actuators are bonded with the linear elastic stator at specific locations. The stator generates propagating waves when the piezoelectric actuators are subjected to harmonic excitations. Vibration characteristics of the linear stator are analyzed and compared with finite element and experimental results. The analytical, finite element, and experimental results show agreement. In the experiments, performance of the ultrasonic linear motor is tested. Relationships between velocity and pre-load weight, velocity and applied voltage, driving force and applied voltage, and velocity and driving force are reported. The design of the dual piezoelectric actuators yields a simpler structure with a smaller number of actuators and lower stator stiffness compared with a conventional design of an ultrasonic linear motor with fully laminated piezoelectric actuators.
Min, Byungchae; Song, Sangjin; Noh, Kiyoul; Kim, Geonwoo; Yoon, Teaseung; Na, Sangkyung; Song, Sanghoon; Yang, Jangsik; Choi, Gyungmin; Kim, Duckjool
2016-01-01
A linear compressor for a domestic refrigerator-freezer has energy-saving potential compared with a reciprocating compressor because of its low friction loss and free-piston system. A linear compressor can control the piston stroke since there is no mechanical restriction on piston movement. Therefore, the energy consumption of a domestic refrigerator-freezer using a linear compressor can be reduced by changing the cooling capacity of the compressor. In order to investigate the performance...
Improvement of the dynamic performance of an AC linear permanent magnet machine
Jansen, J.W.; Lomonova, E.; Vandenput, A.J.A.; Compter, J.C.; Verweij, A.H.
2003-01-01
This paper discusses the controller design and test approaches leading to the performance improvement of a brushless 3-phase AC synchronous permanent magnet linear machine. The feasible controller design concept for the linear machine is presented and further implemented in Simulink and dSPACE. Two
Karimi, Samaneh; Abdulkhani, Ali; Tahir, Paridah Md; Dufresne, Alain
2016-10-01
Cellulosic nanofibers (NFs) from kenaf bast were used to reinforce glycerol plasticized thermoplastic starch (TPS) matrices with varying contents (0-10wt%). The composites were prepared by the casting/evaporation method. Raw fiber (RF) reinforced TPS films were prepared with the same contents and conditions. The aim of the study was to investigate the effects of filler dimension and loading on the linear and non-linear mechanical performance of the fabricated materials. The results clearly demonstrated that the NF-reinforced composites had significantly greater mechanical performance than the RF-reinforced counterparts. This was attributed to the high aspect ratio and nano dimension of the reinforcing agents, as well as their compatibility with the TPS matrix, resulting in strong fiber/matrix interaction. Tensile strength and Young's modulus increased by 313% and 343%, respectively, with increasing NF content from 0 to 10wt%. Dynamic mechanical analysis (DMA) revealed an upward trend in the glass transition temperature of amylopectin-rich domains in the composites. The most pronounced was a +18.5 °C shift in the temperature position for the film reinforced with 8% NF. This finding implied efficient dispersion of the nanofibers in the matrix and their ability to form a network and restrict mobility of the system. Copyright © 2016 Elsevier B.V. All rights reserved.
Student Connections of Linear Algebra Concepts: An Analysis of Concept Maps
Lapp, Douglas A.; Nyman, Melvin A.; Berry, John S.
2010-01-01
This article examines the connections of linear algebra concepts in a first course at the undergraduate level. The theoretical underpinnings of this study are grounded in the constructivist perspective (including social constructivism), Vernaud's theory of conceptual fields and Pirie and Kieren's model for the growth of mathematical understanding.…
Identification and fine mapping of a linear B cell epitope of human vimentin
DEFF Research Database (Denmark)
Dam, Catharina Essendrup; Houen, Gunnar; Hansen, Paul R.
2014-01-01
Knowledge about antibody-antigen interactions is important for the understanding of immune system mechanisms and for supporting development of drugs and biomarkers. A tool for identification of the antigenic epitopes of specific antibodies is epitope mapping. In this study, a modified enzyme-linked immunosorbent assay was applied for epitope mapping of a mouse monoclonal vimentin antibody using overlapping resin-bound peptides covering the entire vimentin protein. The minimal epitope required for binding was identified as the LDSLPLVD sequence using N- and C-terminally truncated peptides. The peptide sequence LDSLPLVDTH was identified as the complete epitope, corresponding to amino acids 428-437 in the C-terminal end of the human vimentin protein. Alanine scanning and functionality scanning applying substituted peptides were used to identify amino acids essential for antibody reactivity. In particular...
Janouškovec, Jan
2013-08-22
The canonical photosynthetic plastid genomes consist of a single circular-mapping chromosome that encodes a highly conserved protein core, involved in photosynthesis and ATP generation. Here, we demonstrate that the plastid genome of the photosynthetic relative of apicomplexans, Chromera velia, departs from this view in several unique ways. Core photosynthesis proteins PsaA and AtpB have been broken into two fragments, which we show are independently transcribed, oligoU-tailed, translated, and assembled into functional photosystem I and ATP synthase complexes. Genome-wide transcription profiles support expression of many other highly modified proteins, including several that contain extensions amounting to hundreds of amino acids in length. Canonical gene clusters and operons have been fragmented and reshuffled into novel putative transcriptional units. Massive genomic coverage by paired-end reads, coupled with pulsed-field gel electrophoresis and polymerase chain reaction, consistently indicate that the C. velia plastid genome is linear-mapping, a unique state among all plastids. Abundant intragenomic duplication probably mediated by recombination can explain protein splits, extensions, and genome linearization and is perhaps the key driving force behind the many features that defy the conventional ways of plastid genome architecture and function. © The Author 2013.
Effects of supervised Self Organising Maps parameters on classification performance.
Ballabio, Davide; Vasighi, Mahdi; Filzmoser, Peter
2013-02-26
Self Organising Maps (SOMs) are one of the most powerful learning strategies among neural network algorithms. SOMs have several adaptable parameters, and the selection of appropriate network architectures is required in order to make accurate predictions. The major disadvantage of SOMs is probably the network optimisation, since this procedure can often be time-consuming. The effects of network size, training epochs and learning rate on the classification performance of SOMs are known, whereas the effects of other parameters (type of SOM, weight initialisation, training algorithm, topology and boundary conditions) are less obvious. This study analysed the effect of SOM parameters on network classification performance, as well as on computational times, taking into consideration a significant number of real datasets in order to achieve a comprehensive statistical comparison. Parameters were evaluated simultaneously by means of an approach based on the design of experiments, which enabled the investigation of their interaction effects. The results highlighted the parameters with the greatest influence on classification performance and enabled the identification of the optimal settings, as well as the optimal architectures to reduce the computational time of SOMs. Copyright © 2012 Elsevier B.V. All rights reserved.
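A minimal online SOM training loop, exposing the main tunable parameters discussed above (grid size, training epochs, learning rate, neighbourhood width). The architecture and decay schedules are illustrative sketches, not those used in the study:

```python
import numpy as np

def train_som(data, grid_shape=(4, 4), epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal online SOM on a square grid with a Gaussian neighbourhood."""
    rng = np.random.default_rng(seed)
    n_rows, n_cols = grid_shape
    weights = rng.random((n_rows, n_cols, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(n_rows), np.arange(n_cols),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)                  # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5      # shrinking neighbourhood
        for x in data:
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)  # best matching unit
            g = np.exp(-((coords - np.array(bmu)) ** 2).sum(-1) / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)   # pull toward sample
    return weights

def quantisation_error(weights, data):
    """Mean distance from each sample to its best matching unit."""
    return float(np.mean([np.linalg.norm(weights - x, axis=-1).min() for x in data]))

rng = np.random.default_rng(1)
data = rng.random((200, 2))
w = train_som(data, epochs=20)
print(quantisation_error(w, data))  # lower than for the random initial weights
```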
Dynamics and bifurcations of a three-dimensional piecewise-linear integrable map
International Nuclear Information System (INIS)
Tuwankotta, J M; Quispel, G R W; Tamizhmani, K M
2004-01-01
In this paper, we consider a four-parameter family of piecewise-linear ordinary difference equations (OΔEs) in R^3. This system is obtained as a limit of another family of three-dimensional integrable systems of OΔEs. We prove that the limiting procedure sends integrals of the original system to integrals of the limiting system. We derive some results for the solutions, such as boundedness of solutions and the existence of periodic solutions. We describe all topologically different shapes of the integral manifolds and present all possible scenarios of transitions as we vary the natural parameters in the system, i.e. the values of the integrals.
International Nuclear Information System (INIS)
Sunyaev, Rashid A.; Khatri, Rishi
2013-01-01
y-type spectral distortions of the cosmic microwave background allow us to detect clusters and groups of galaxies, filaments of hot gas and the non-uniformities in the warm hot intergalactic medium. Several CMB experiments (on small areas of sky) and theoretical groups (for the full sky) have recently published y-type distortion maps. We propose to search for two artificial hot spots in such y-type maps resulting from the incomplete subtraction of the effect of the motion-induced dipole on the cosmic microwave background sky. This dipole introduces, at second order, additional temperature and y-distortion anisotropy on the sky with an amplitude of a few μK, which could potentially be measured by the Planck HFI and Pixie experiments and can be used as a source of cross-channel calibration by CMB experiments. This y-type distortion is present in every pixel and is not the result of averaging the whole sky. This distortion, calculated exactly from the known linear dipole, can be subtracted from the final y-type maps, if desired.
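A back-of-envelope estimate of the scale involved, assuming the quadratic Doppler term contributes a y-type part of order β²/6 (the exact angular dependence and amplitude are derived in the paper; the numbers below are only order-of-magnitude):

```python
# Scale of the motion-induced second-order y-type distortion (illustrative).
beta = 369.0e3 / 2.998e8     # solar barycentre speed v/c (~370 km/s dipole)
T_cmb = 2.725                # CMB monopole temperature in K
y = beta ** 2 / 6.0          # assumed y amplitude from the quadratic Doppler term
delta_T = y * T_cmb          # corresponding temperature scale in K
print(f"y ~ {y:.2e}, dT ~ {delta_T * 1e6:.2f} microK")
```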
Hashemi, Sayed Masoud; Lee, Young; Eriksson, Markus; Nordström, Håkan; Mainprize, James; Grouza, Vladimir; Huynh, Christopher; Sahgal, Arjun; Song, William Y.; Ruschin, Mark
2017-03-01
A Contrast and Attenuation-map (CT-number) Linearity Improvement (CALI) framework is proposed for cone-beam CT (CBCT) images used for brain stereotactic radiosurgery (SRS). The proposed framework is used together with our high spatial resolution iterative reconstruction algorithm and is tailored for the Leksell Gamma Knife ICON (Elekta, Stockholm, Sweden). The incorporated CBCT system in ICON facilitates frameless SRS planning and treatment delivery. The ICON employs a half-cone geometry to accommodate the existing treatment couch. This geometry increases the severity of artifacts and, together with other physical imperfections, causes image inhomogeneity and contrast reduction. Our proposed framework includes a preprocessing step, involving shading and beam-hardening artifact correction, and a post-processing step to correct the dome/capping artifact caused by the spatial variations in x-ray energy generated by the bowtie filter. Our shading correction algorithm relies solely on the acquired projection images (i.e. no prior information is required) and utilizes filtered-back-projection (FBP) reconstructed images to generate a segmented bone and soft-tissue map. Ideal projections are estimated from the segmented images, and a smoothed version of the difference between the ideal and measured projections is used in the correction. The proposed beam-hardening and dome artifact corrections are segmentation-free. The CALI was tested on CatPhan as well as patient images acquired on the ICON system. The resulting clinical brain images show substantial improvements in soft-tissue contrast visibility, revealing structures such as ventricles and lesions that were otherwise undetectable in FBP-reconstructed images. The linearity of the reconstructed attenuation map was also improved, resulting in more accurate CT numbers.
Traveling-wave piezoelectric linear motor part II: experiment and performance evaluation.
Ting, Yung; Li, Chun-Chung; Chen, Liang-Chiang; Yang, Chieh-Min
2007-04-01
This article continues the discussion of a traveling-wave piezoelectric linear motor. Part I of this article dealt with the design and analysis of the stator of a traveling-wave piezoelectric linear motor. In this part, the discussion focuses on the structure and modeling of the contact layer and the carriage. In addition, the performance analysis and evaluation of the linear motor are also dealt with in this study. The traveling wave is created by the stator, which is constructed from a series of bimorph actuators arranged in a line and connected to form a meander-line structure. Analytical and experimental results of the performance are presented and shown to be almost in agreement. Power losses due to friction and transmission are studied and found to be significant. Compared with other types of linear motors, the motor in this study is capable of supporting heavier loads and provides a larger thrust force.
International Nuclear Information System (INIS)
Ansar, A.B.; Osaki, Yasuhiro; Kazui, Hiroaki
2006-01-01
Statistical parametric mapping (SPM) has been employed to investigate the regional decline in cerebral blood flow (rCBF) as measured by 99mTc-hexamethyl propylene amine oxime (HMPAO) single photon emission computed tomography (SPECT) in mild Alzheimer's disease (AD). However, the role of post-reconstruction image processing in the interpretation of SPM, which detects rCBF patterns, has not been precisely studied. We performed 99mTc-HMPAO SPECT in mild AD patients and analyzed the effect of linearization correction for washout of the tracer on the detectability of abnormal perfusion. Eleven mild AD patients (NINCDS-ADRDA criteria; male/female, 5/6; mean±SD age, 70.6±6.2 years; mean±SD mini-mental state examination score, 23.9±3.41; clinical dementia rating score, 1) and eleven normal control subjects (male/female, 4/7; mean±SD age, 66.8±8.4 years) were enrolled in this study. 99mTc-HMPAO SPECT was performed with a four-head rotating gamma camera. We employed linearization-uncorrected (LU) and linearization-corrected (LC) images for the patients and controls. The pattern of hypoperfusion in mild AD on LU and LC images was detected by SPM99 applying the same image standardization and analytical parameters. A statistical inter-image-group analysis (LU vs. LC) was also performed. Clear differences were observed between the interpretation of SPM with LU and LC images. Significant hypoperfusion in mild AD was found on the LU images in the left posterior cingulate gyrus, right precuneus, left hippocampus, left uncus, and left superior temporal gyrus (cluster-level corrected p<0.05). The detected pattern thus depends on whether 99mTc-HMPAO SPECT is linearization corrected, which should be carefully evaluated when interpreting the pattern of rCBF changes in mild Alzheimer's disease. (author)
I. Udeh; J.O. Isikwenu and G. Ukughere
2011-01-01
The objectives of this study were to compare the performance characteristics of four strains of broiler chicken from 2 to 8 weeks of age and predict body weight of the broilers using linear body measurements. The four strains of broiler chicken used were Anak, Arbor Acre, Ross and Marshall. The parameters recorded were bodyweight, weight gain, total feed intake, feed conversion ratio, mortality and some linear body measurements (body length, body width, breast width, drumstick length, shank l...
International Nuclear Information System (INIS)
Riedel, R.A.; Cooper, R.G.; Funk, L.L.; Clonts, L.G.
2012-01-01
We describe the design and performance of electronics for linear position sensitive neutron detectors. The eight-tube assembly requires 10 W of power and can be controlled via digital communication links. The electronics can be used without modification in vacuum. Using a transimpedance amplifier and gated integration, we achieve a highly linear system with coefficients of determination of 0.9999 or better. Typical resolution is one percent of tube length.
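The quoted linearity figure is a coefficient of determination (R²) from a straight-line fit of measured versus true position. A sketch of that computation on synthetic data with a small injected non-linearity:

```python
import numpy as np

def coefficient_of_determination(x, y):
    """R^2 of a least-squares line fit, as used to quote detector linearity."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)             # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)      # total sum of squares
    return 1.0 - ss_res / ss_tot

# Synthetic position response: true position vs. charge-division estimate
true_pos = np.linspace(0.0, 1.0, 50)            # normalized tube length
measured = true_pos + 0.001 * np.sin(12 * true_pos)   # small non-linearity
r2 = coefficient_of_determination(true_pos, measured)
print(r2)  # very close to 1 for a nearly linear response
```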
Imprint of non-linear effects on HI intensity mapping on large scales
Energy Technology Data Exchange (ETDEWEB)
Umeh, Obinna, E-mail: umeobinna@gmail.com [Department of Physics and Astronomy, University of the Western Cape, Cape Town 7535 (South Africa)
2017-06-01
Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structure of the Universe up to the largest possible scales. This is achieved by using low angular resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive, for the first time, the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of the HI brightness temperature, both in real and redshift space. We show how mode coupling at nonlinear order, due to nonlinear bias parameters and redshift space distortion terms, modulates the power spectrum on large scales. The large-scale modulation may be understood as due to the effective bias parameter and effective shot noise.
Ltaief, Hatem; Luszczek, Piotr R.; Dongarra, Jack
2011-01-01
This paper presents the power profile of two high performance dense linear algebra libraries i.e., LAPACK and PLASMA. The former is based on block algorithms that use the fork-join paradigm to achieve parallel performance. The latter uses fine
Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.
2017-12-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark object (D) classes. Because of the sheer volume of data and the compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS, National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite, nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps against NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, a 6% improvement in unmixing-based classification relative to per-pixel based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
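A common way to implement fully constrained least squares (FCLS) unmixing is the augmentation trick of Heinz and Chang: append a heavily weighted sum-to-one row and solve with non-negative least squares. A sketch using SciPy's `nnls`, with made-up endmember spectra (not the actual S-V-D spectra used in this work):

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(E, pixel, delta=1e3):
    """FCLS via augmentation: a weighted row of ones enforces sum-to-one
    (approximately), and NNLS enforces non-negativity.

    E: (n_bands, n_endmembers) endmember matrix; pixel: (n_bands,) spectrum."""
    E_aug = np.vstack([E, delta * np.ones((1, E.shape[1]))])
    p_aug = np.append(pixel, delta)
    abundances, _ = nnls(E_aug, p_aug)
    return abundances

# Synthetic 3-endmember example with a noiseless mixed pixel
E = np.array([[0.8, 0.1, 0.05],
              [0.7, 0.5, 0.05],
              [0.6, 0.9, 0.05],
              [0.5, 0.4, 0.05]])
true_a = np.array([0.3, 0.6, 0.1])
pixel = E @ true_a
a = fcls_unmix(E, pixel)
print(a)  # recovers roughly [0.3, 0.6, 0.1]
```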
Directory of Open Access Journals (Sweden)
Hussein Abdel-jaber
2015-10-01
Full Text Available Congestion control is one of the hot research topics that help maintain the performance of computer networks. This paper compares three Active Queue Management (AQM) methods, namely Adaptive Gentle Random Early Detection (Adaptive GRED), Random Early Dynamic Detection (REDD), and a GRED linear analytical model, with respect to different performance measures. Adaptive GRED and REDD are implemented based on simulation, whereas GRED Linear is implemented as a discrete-time analytical model. Several performance measures are used to evaluate the effectiveness of the compared methods, mainly mean queue length, throughput, average queueing delay, overflow packet loss probability, and packet dropping probability. The ultimate aim is to identify the method that offers the most satisfactory performance in non-congestion or congestion scenarios. The first comparison results, based on different packet arrival probability values, show that GRED Linear provides better mean queue length, average queueing delay and packet overflow probability than the Adaptive GRED and REDD methods in the presence of congestion. Further, and using the same evaluation measures, Adaptive GRED offers more satisfactory performance than REDD when heavy congestion is present. When the finite queue capacity varies, the GRED Linear model provides the most satisfactory performance with reference to mean queue length and average queueing delay, and all compared methods provide similar throughput performance. However, when the finite capacity value is large, the compared methods give similar results with regard to the probabilities of both packet overflow and packet dropping.
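For orientation, the (Gentle) RED family computes a piecewise-linear packet-drop probability from the average queue length; the analytical models in the paper build a discrete-time queue on top of such a drop function. A sketch with illustrative thresholds:

```python
def gred_drop_probability(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    """Packet-drop probability of Gentle RED as a piecewise-linear function of
    the average queue length: 0 below min_th, a ramp to max_p at max_th, then a
    gentle ramp from max_p to 1 at 2*max_th (thresholds here are illustrative)."""
    if avg_q < min_th:
        return 0.0
    if avg_q < max_th:
        return max_p * (avg_q - min_th) / (max_th - min_th)
    if avg_q < 2 * max_th:
        return max_p + (1.0 - max_p) * (avg_q - max_th) / max_th
    return 1.0

for q in (2, 10, 20, 40):
    print(q, gred_drop_probability(q))
```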
Hong, H.; Zhu, A. X.
2017-12-01
Climate change is a common phenomenon and is very serious all over the world. The intensification of rainfall extremes with climate change is of key importance to society, as it can induce large impacts through landslides. This paper presents new GIS-based ensemble data mining techniques combining weight-of-evidence with logistic model tree and linear and quadratic discriminant analysis for landslide spatial modelling. The research was applied in Anfu County, a landslide-prone area in Jiangxi Province, China. Based on a literature review and investigation of the study area, we selected the landslide influencing factors, and their maps were digitized in a GIS environment. These factors are altitude, plan curvature, profile curvature, slope degree, slope aspect, topographic wetness index (TWI), stream power index (SPI), distance to faults, distance to rivers, distance to roads, soil, lithology, normalized difference vegetation index, and land use. According to historical information on individual landslide events, interpretation of aerial photographs, and field surveys supported by the Jiangxi Meteorological Bureau of China, 367 landslides were identified in the study area. The landslide locations were divided into two subsets, namely training and validating (70/30), based on a random selection scheme. In this research, Pearson's correlation was used to evaluate the relationship between the landslides and the influencing factors. In the next step, the three data mining techniques combined with weight-of-evidence (logistic model tree, and linear and quadratic discriminant) were used for landslide spatial modelling and zonation. Finally, the landslide susceptibility maps produced by these models were evaluated by the ROC curve. The results showed that the area under the curve (AUC) of all the models was > 0.80, with the highest AUC value obtained for the linear and quadratic discriminant model.
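The AUC used for validation here is equivalent to the Mann-Whitney rank statistic: the probability that a randomly chosen landslide cell receives a higher susceptibility score than a randomly chosen stable cell. A dependency-free sketch on toy scores:

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # count pairwise "positive outranks negative" wins; ties count half
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Toy susceptibility scores: 1 = landslide cell, 0 = stable cell
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(auc_score(labels, scores))
```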
Performance Testing of a High Temperature Linear Alternator for Stirling Convertors
Metscher, Jonathan F.; Geng, Steven M.
2016-01-01
The NASA Glenn Research Center has conducted performance testing of a high temperature linear alternator (HTLA) in support of Stirling power convertor development for potential future Radioisotope Power Systems (RPS). The high temperature linear alternator is a modified version of that used in Sunpower's Advanced Stirling Convertor (ASC), and is capable of operation at temperatures up to 200 °C. Increasing the temperature capability of the linear alternator could expand the mission set of future Stirling RPS designs. High temperature Neodymium-Iron-Boron (Nd-Fe-B) magnets were selected for the HTLA application, and were fully characterized and tested prior to use. A higher temperature epoxy for alternator assembly was also selected and tested for thermal stability and strength. A characterization test was performed on the HTLA to measure its performance at various amplitudes, loads, and temperatures. HTLA endurance testing at 200 °C is currently underway.
Geoelectrical mapping for improved performance of SUDS in clay tills
DEFF Research Database (Denmark)
Bockhorn, Britta; Møller, Ingelise; Klint, Knud Erik S.
2015-01-01
geological methods, including borehole soil sample descriptions, one excavation description and a near-surface spear auger-mapping project. The experiments returned a significant correlation of geoelectrical and spear auger-mapped surface sediments. Furthermore, a highly permeable oxidized fracture zone...
A comparison of linear and nonlinear statistical techniques in performance attribution.
Chan, N H; Genovese, C R
2001-01-01
Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks using factors derived from some commonly used cross sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on standard linear multifactor model and three nonlinear techniques-model selection, additive models, and neural networks-are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.
Forkuor, Gerald; Hounkpatin, Ozias K L; Welp, Gerhard; Thiel, Michael
2017-01-01
Accurate and detailed spatial soil information is essential for environmental modelling, risk assessment and decision making. The use of Remote Sensing data as secondary sources of information in digital soil mapping has been found to be cost effective and less time consuming compared to traditional soil mapping approaches. But the potentials of Remote Sensing data in improving knowledge of local scale soil information in West Africa have not been fully explored. This study investigated the use of high spatial resolution satellite data (RapidEye and Landsat), terrain/climatic data and laboratory analysed soil samples to map the spatial distribution of six soil properties-sand, silt, clay, cation exchange capacity (CEC), soil organic carbon (SOC) and nitrogen-in a 580 km2 agricultural watershed in south-western Burkina Faso. Four statistical prediction models-multiple linear regression (MLR), random forest regression (RFR), support vector machine (SVM), stochastic gradient boosting (SGB)-were tested and compared. Internal validation was conducted by cross validation while the predictions were validated against an independent set of soil samples considering the modelling area and an extrapolation area. Model performance statistics revealed that the machine learning techniques performed marginally better than the MLR, with the RFR providing in most cases the highest accuracy. The inability of MLR to handle non-linear relationships between dependent and independent variables was found to be a limitation in accurately predicting soil properties at unsampled locations. Satellite data acquired during ploughing or early crop development stages (e.g. May, June) were found to be the most important spectral predictors while elevation, temperature and precipitation came up as prominent terrain/climatic variables in predicting soil properties. The results further showed that shortwave infrared and near infrared channels of Landsat8 as well as soil specific indices of redness
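Among the four compared models, the MLR baseline is the simplest: ordinary least squares on the covariates. A sketch of fitting and predicting on synthetic data (the covariate names and generating coefficients are illustrative, not the study's):

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares fit of y on X with an intercept column."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def predict_mlr(beta, X):
    """Predict at new covariate values using the fitted coefficients."""
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Synthetic covariates (e.g. one spectral band, elevation) and a SOC-like response
rng = np.random.default_rng(0)
X = rng.random((60, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.01 * rng.standard_normal(60)
beta = fit_mlr(X, y)
print(beta)  # close to the generating coefficients [1.0, 2.0, -0.5]
```

By construction this model is linear in the covariates, which is exactly the limitation the abstract notes: it cannot capture non-linear covariate-response relationships the way RFR, SVM or SGB can.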
Validity of linear encoder measurement of sit-to-stand performance power in older people.
Lindemann, U; Farahmand, P; Klenk, J; Blatzonis, K; Becker, C
2015-09-01
To investigate construct validity of linear encoder measurement of sit-to-stand performance power in older people by showing associations with relevant functional performance and physiological parameters. Cross-sectional study. Movement laboratory of a geriatric rehabilitation clinic. Eighty-eight community-dwelling, cognitively unimpaired older women (mean age 78 years). Sit-to-stand performance power and leg power were assessed using a linear encoder and the Nottingham Power Rig, respectively. Gait speed was measured on an instrumented walkway. Maximum quadriceps and hand grip strength were assessed using dynamometers. Mid-thigh muscle cross-sectional area of both legs was measured using magnetic resonance imaging. Associations of sit-to-stand performance power with power assessed by the Nottingham Power Rig, maximum gait speed and muscle cross-sectional area were r=0.646, r=0.536 and r=0.514, respectively. A linear regression model explained 50% of the variance in sit-to-stand performance power including muscle cross-sectional area (p=0.001), maximum gait speed (p=0.002), and power assessed by the Nottingham Power Rig (p=0.006). Construct validity of linear encoder measurement of sit-to-stand power was shown at functional level and morphological level for older women. This measure could be used in routine clinical practice as well as in large-scale studies. DRKS00003622. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
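A simple way to derive power from a linear-encoder position trace is to differentiate position and multiply by the gravitational force on the lifted mass. This is a hedged sketch of that idea (inertial term omitted; the trace, mass and the device's actual algorithm are illustrative assumptions):

```python
import numpy as np

def peak_power(position, dt, mass):
    """Peak mechanical power from a position trace during sit-to-stand:
    velocity by finite differences, then P = m * g * v for the vertical lift
    (the inertial term m * a * v is ignored in this simplified sketch)."""
    g = 9.81
    velocity = np.gradient(position, dt)   # central finite differences
    return float(np.max(mass * g * velocity))

# Illustrative trace: a smooth 0.4 m rise of the body's centre over 1 s
dt = 0.01
t = np.linspace(0.0, 1.0, 101)
position = 0.2 * (1.0 - np.cos(np.pi * t))   # metres, 0 -> 0.4
print(peak_power(position, dt, mass=70.0))   # watts, a few hundred for this trace
```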
Performance issues, downtime recovery and tuning in the Next Linear Collider (NLC)
International Nuclear Information System (INIS)
Zimmermann, F.; Adolphsen, C.; Assmann, R.
1997-05-01
The Next Linear Collider (NLC) consists of several large subsystems, each of which must be operational and tuned in order to deliver luminosity. Considering specific examples, we study how the different subsystems respond to various perturbations such as ground motion, temperature changes, drifts of beam-position monitors etc., and we estimate the overall time requirements for tuning and downtime recovery of each subsystem. The succession of subsystem failures and recoveries as well as other performance degradations can be modeled as a Markov process, where each subsystem is characterized, e.g., by its failure rate and recovery time. Such a model allows the prediction of the overall NLC availability. Our mathematical description of a linear collider is benchmarked against the known performance of the Stanford Linear Collider (SLC)
Stability, performance and sensitivity analysis of I.I.D. jump linear systems
Chávez Fuentes, Jorge R.; González, Oscar R.; Gray, W. Steven
2018-06-01
This paper presents a symmetric Kronecker product analysis of independent and identically distributed jump linear systems to develop new, lower-dimensional equations for the stability and performance analysis of this type of system than are currently available. In addition, new closed-form expressions characterising multi-parameter relative sensitivity functions for performance metrics are introduced. The analysis technique is illustrated with a distributed fault-tolerant flight control example in which the communication links are allowed to fail randomly.
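The core mean-square stability test for i.i.d. jump linear systems can be sketched with plain Kronecker products (the paper works with the lower-dimensional symmetric Kronecker form; the matrices and mode probabilities below are invented): the system is mean-square stable iff the spectral radius of the second-moment matrix sum_i p_i (A_i ⊗ A_i) is below one.

```python
import numpy as np

# i.i.d. jump linear system x_{k+1} = A_i x_k, mode i drawn with prob p_i.
# Mean-square stability <=> spectral radius of sum_i p_i (A_i ⊗ A_i) < 1.
A1 = np.array([[0.5, 0.2],
               [0.0, 0.4]])   # nominal dynamics (stable)
A2 = np.array([[1.1, 0.0],
               [0.3, 0.9]])   # failed-link dynamics (unstable on its own)
p = [0.8, 0.2]                # hypothetical mode probabilities

M = p[0] * np.kron(A1, A1) + p[1] * np.kron(A2, A2)
rho = max(abs(np.linalg.eigvals(M)))
print(f"second-moment spectral radius: {rho:.3f}")
print("mean-square stable" if rho < 1 else "mean-square unstable")
```

Even though one mode is unstable, the averaged second-moment dynamics can still contract when failures are rare enough.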
International Nuclear Information System (INIS)
Obergfell, M.N.
1987-02-01
The Stanford Linear Collider is the newest addition to the high-energy physics research complex at the Stanford Linear Accelerator Center. One of the many unique features of this project is the large, underground pit, where massive particle detectors will study the collision of subatomic particles. The large, open pit utilizes nearly 600 permanent earth anchors (tiebacks) for the support of the 56 ft (17 m) high walls, and is one of the largest applications of tiebacks for permanent support of a structure. This paper examines the use of tiebacks on this project with emphasis on their installation and performance
Directory of Open Access Journals (Sweden)
M. S. MANNA
2011-12-01
Full Text Available The development of electromagnetic devices such as machines, transformers and heating devices confronts engineers with several problems. For the design of an optimized geometry and the prediction of operational behaviour, accurate knowledge of the dependencies of the field quantities inside the magnetic circuits is necessary. This paper provides an analysis of the eddy current and core flux density distribution in a linear induction motor. Magnetic flux in the air gap of the Linear Induction Motor (LIM) is reduced by various losses such as end effects, fringing effects, skin effects, etc. The finite-element-based software package COMSOL Multiphysics (COMSOL Inc., USA) is used to get reliable and accurate computational results for optimizing the performance of the LIM. The geometrical characteristics of the LIM are varied to find the optimal point of thrust and minimum flux leakage during static and dynamic conditions.
Performance review of thermionic electron gun developed for RF linear accelerators at RRCAT
International Nuclear Information System (INIS)
Wanmode, Yashwant; Mulchandani, J.; Reddy, T.S.; Bhisikar, A.; Singh, H.G.; Shrivastava, Purushottam
2015-01-01
RRCAT is engaged in the development of RF electron linear accelerators for irradiation of industrial and agricultural products. A thermionic electron gun is the primary source for these accelerators, as the required beam current is modest and thermionic emission is the most prevalent option for electron gun development. An electron gun has to provide high cathode emission capability and low filament power, offer good accessibility for cathode replacement, and allow short maintenance times. Electron linear accelerators with beam energies up to 10 MeV require an electron source of 45-50 keV beam energy and 1 A emission current. Electron optics of the gun and electron beam profile simulations were carried out using CST's particle tracking code and the EGUN code. A triode-type electron gun with a pulsed cathode voltage of 50 kV has been designed, developed and integrated with 10 MeV electron linear accelerators at RRCAT. A beam current of more than 600 mA has been measured with a Faraday cup in the test stand developed for characterizing the electron gun. Two accelerators, one imported and one developed indigenously, have been energized using this electron gun. Beam energies of 5-10 MeV have been achieved with beam currents of 250-400 mA by integrating this electron gun with the linear accelerators. This paper reviews the performance of the indigenously developed electron gun for both linear accelerators. (author)
Wang, Chuan; Badmaev, Alexander; Jooyaie, Alborz; Bao, Mingqiang; Wang, Kang L; Galatsis, Kosmas; Zhou, Chongwu
2011-05-24
This paper reports the radio frequency (RF) and linearity performance of transistors using high-purity semiconducting carbon nanotubes. High-density, uniform semiconducting nanotube networks are deposited at wafer scale using our APTES-assisted nanotube deposition technique, and RF transistors with channel lengths down to 500 nm are fabricated. We report transistors exhibiting a cutoff frequency (f_t) of 5 GHz and a maximum oscillation frequency (f_max) of 1.5 GHz. Besides the cutoff frequency, the other important figure of merit for RF transistors is the device linearity. For the first time, we report carbon nanotube RF transistor linearity metrics up to 1 GHz. Because active probes are not needed to provide a high-impedance termination, the measurement bandwidth is not limited, and the linearity measurements can be conducted at the frequencies at which the transistors are intended to operate. We conclude that semiconducting nanotube-based transistors are potentially promising building blocks for highly linear RF electronics and circuit applications.
Directory of Open Access Journals (Sweden)
N. Jaya
2008-10-01
Full Text Available In this work, the design and implementation of a conventional PI controller, a single-region fuzzy logic controller, a two-region fuzzy logic controller and a Globally Linearized Controller (GLC) for a two-capacity interacting nonlinear process is carried out. The performance of this process using the single-region FLC, two-region FLC and GLC is compared with the performance of the conventional PI controller about an operating point of 50%. It has been observed that the GLC and the two-region FLC provide better performance. Further, this procedure is also validated by real-time experimentation using dSPACE.
The Linear Programming to evaluate the performance of Oral Health in Primary Care.
Colussi, Claudia Flemming; Calvo, Maria Cristina Marino; Freitas, Sergio Fernando Torres de
2013-01-01
To show the use of Linear Programming to evaluate the performance of Oral Health in Primary Care. This study used data from 19 municipalities of the state of Santa Catarina that participated in the state evaluation in 2009 and have more than 50,000 inhabitants. A total of 40 indicators were evaluated, calculated using Microsoft Excel 2007, and converted to the interval [0, 1] in ascending order (one indicating the best situation and zero indicating the worst situation). Applying the Linear Programming technique, municipalities were assessed and compared among themselves according to a performance curve named the "quality estimated frontier". Municipalities included in the frontier were classified as excellent. Indicators were aggregated into synthetic indicators. The majority of municipalities not included in the quality frontier (values different from 1.0) had values lower than 0.5, indicating poor performance. The model applied to the municipalities of Santa Catarina assessed municipal management and local priorities rather than goals imposed by pre-defined parameters. In the final analysis three municipalities were included in the "perceived quality frontier". The Linear Programming technique made it possible to identify gaps that must be addressed by city managers to enhance the actions taken. It also enabled observation of each municipality's performance and comparison of results among similar municipalities.
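A minimal sketch of the "quality estimated frontier" idea, using an output-oriented, DEA-style linear program over normalised indicators. The indicator data are synthetic, scipy's linprog is assumed available, and this is not the authors' exact formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Each row: a municipality's synthetic indicators, rescaled to [0, 1]
# (1 = best situation), mimicking the paper's normalised indicators.
Y = np.array([
    [0.9, 0.8],
    [0.6, 0.9],
    [0.4, 0.3],
    [0.7, 0.5],
])

def dea_score(Y, k):
    """Score of unit k against the 'quality estimated frontier':
    max u . y_k  s.t.  u . y_j <= 1 for every unit j, u >= 0."""
    n, m = Y.shape
    res = linprog(c=-Y[k],                  # linprog minimises, so negate
                  A_ub=Y, b_ub=np.ones(n),
                  bounds=[(0, None)] * m,
                  method="highs")
    return -res.fun

scores = [dea_score(Y, k) for k in range(len(Y))]
frontier = [k for k, s in enumerate(scores) if abs(s - 1.0) < 1e-6]
print(scores, frontier)
```

Units whose score reaches 1.0 lie on the frontier (here the first two); the others are scored relative to the best-performing peers rather than against externally imposed targets.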
DEFF Research Database (Denmark)
Højgaard-Hansen, Kim; Madsen, Tatiana Kozlova; Schwefel, Hans-Peter
2012-01-01
The performance of wireless communication networks has been shown to have a strong location dependence. Measuring the performance while having accurate location information available makes it possible to generate performance maps. In this paper we propose a framework for the generation and use of such performance maps. We demonstrate how the framework can be used to reduce the retransmissions and to better utilise network resources when performing TCP-based file downloads in vehicular M2M communication scenarios. The approach works on top of a standard TCP stack hence has to map identified transmission
Laschober, Tanja C; de Tormes Eby, Lillian Turner
2013-07-01
The main goals of the current study were to investigate whether there are linear or curvilinear relationships between substance use disorder counselors' job performance and actual turnover after 1 year utilizing four indicators of job performance and three turnover statuses (voluntary, involuntary, and no turnover as the reference group). Using longitudinal data from 440 matched counselor-clinical supervisor dyads, results indicate that overall, counselors with lower job performance are more likely to turn over voluntarily and involuntarily than not to turn over. Further, one of the job performance measures shows a significant curvilinear effect. We conclude that the negative consequences often assumed to be "caused" by counselor turnover may be overstated because those who leave both voluntarily and involuntarily demonstrate generally lower performance than those who remain employed at their treatment program.
Noise analysis and performance of a selfscanned linear InSb detector array
International Nuclear Information System (INIS)
Finger, G.; Meyer, M.; Moorwood, A.F.M.
1987-01-01
A noise model for detectors operated in the capacitive discharge mode is presented. It is used to analyze the noise performance of the ESO nested timing readout technique applied to a linear 32-element InSb array which is multiplexed by a silicon switched-FET shift register. Analysis shows that kTC noise of the videoline is the major noise contribution; it can be eliminated by weighted double-correlated sampling. Best noise performance of this array is achieved at the smallest possible reverse bias voltage (not more than 20 mV), whereas excess noise is observed at higher reverse bias voltages. 5 references
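For orientation, the kTC (reset) noise that double-correlated sampling removes has an RMS voltage of sqrt(kT/C). A quick calculation with a hypothetical videoline capacitance (the 5 pF value is illustrative, not from the paper):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def ktc_noise_volts(C_farads, T_kelvin=77.0):
    """RMS kTC (reset) noise voltage on a sampling capacitance."""
    return math.sqrt(k_B * T_kelvin / C_farads)

# Hypothetical videoline capacitance of a few pF at cryogenic temperature
C = 5e-12
v_rms = ktc_noise_volts(C)
print(f"kTC noise at 77 K on {C*1e12:.0f} pF: {v_rms*1e6:.1f} uV rms")
```

At cryogenic temperatures this still amounts to tens of microvolts, which is why a correlated sampling scheme is needed to suppress it.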
Directory of Open Access Journals (Sweden)
Lajla Bruntse Hansen
Full Text Available We have recently developed a high-density photolithographic peptide array technology with a theoretical upper limit of 2 million different peptides per array of 2 cm². Here, we have used this to perform complete and exhaustive analyses of linear B cell epitopes of a medium-sized protein target, using human serum albumin (HSA) as an example. All possible overlapping 15-mers from HSA were synthesized and probed with a commercially available polyclonal rabbit anti-HSA antibody preparation. To allow for identification of even the weakest epitopes and at the same time perform a detailed characterization of key residues involved in antibody binding, the array also included complete single substitution scans (i.e. including each of the 20 common amino acids) at each position of each 15-mer peptide. As specificity controls, all possible 15-mer peptides from bovine serum albumin (BSA) and from rabbit serum albumin (RSA) were included as well. The resulting layout contained more than 200,000 peptide fields and could be synthesized in a single array on a microscope slide. More than 20 linear epitope candidates were identified and characterized at high resolution, i.e. identifying which amino acids in which positions were needed, or not needed, for antibody interaction. As expected, moderate cross-reaction with some peptides in BSA was identified, whereas no cross-reaction was observed with peptides from RSA. We conclude that high-density peptide microarrays are a very powerful methodology to identify and characterize linear antibody epitopes, and should advance detailed description of individual specificities at the single-antibody level as well as serologic analysis at the proteome-wide level.
Berberian, Sterling K
2014-01-01
Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.
Vossoughi, Mehrdad; Ayatollahi, S M T; Towhidi, Mina; Ketabchi, Farzaneh
2012-03-22
The summary measure approach (SMA) is sometimes the only applicable tool for the analysis of repeated measurements in medical research, especially when the number of measurements is relatively large. This study aimed to describe techniques based on summary measures for the analysis of linear trend repeated measures data and then to compare the performance of the SMA, the linear mixed model (LMM), and the unstructured multivariate approach (UMA). Practical guidelines based on the least squares regression slope and mean of response over time for each subject were provided to test time, group, and interaction effects. Through Monte Carlo simulation studies, the efficacy of the SMA vs. the LMM and traditional UMA, under different types of covariance structures, was illustrated. All the methods were also employed to analyze two real data examples. Based on the simulation and example results, it was found that the SMA completely dominated the traditional UMA and performed convincingly close to the best-fitting LMM in testing all the effects. However, the LMM was often not robust and led to non-sensible results when the covariance structure for errors was misspecified. The results emphasized discarding the UMA, which often yielded extremely conservative inferences for such data. It was shown that the summary measure is a simple, safe and powerful approach in which the loss of efficiency compared to the best-fitting LMM was generally negligible. The SMA is recommended as the first choice to reliably analyze linear trend data with a moderate to large number of measurements and/or small to moderate sample sizes.
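The SMA workflow described above (reduce each subject's repeated measurements to a least-squares slope, then compare the groups with a standard two-sample test) can be sketched on simulated data, assuming numpy and scipy are available; the group sizes, slopes and noise levels are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t = np.arange(6)                     # 6 repeated measurements per subject

def simulate_group(n, slope):
    # random intercepts + linear trend over time + measurement noise
    return (rng.normal(10, 1, (n, 1))
            + slope * t
            + rng.normal(0, 0.5, (n, len(t))))

treated = simulate_group(20, slope=0.8)
control = simulate_group(20, slope=0.3)

def subject_slopes(Y):
    # least-squares slope over time for each subject (the summary measure)
    return np.polyfit(t, Y.T, 1)[0]

# SMA group test: two-sample t-test on the per-subject slopes
res = stats.ttest_ind(subject_slopes(treated), subject_slopes(control))
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.2g}")
```

Collapsing each subject's trajectory to a single slope sidesteps the covariance-structure specification that makes the LMM fragile under misspecification.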
Song, Jinghui; Yuan, Hui; Xia, Yunfeng; Kan, Weimin; Deng, Xiaowen; Liu, Shi; Liang, Wanlong; Deng, Jianhua
2018-03-01
This paper introduces the working principle and system constitution of the linear Fresnel solar lithium bromide absorption refrigeration cycle, and elaborates several typical structures of the absorption refrigeration cycle, including the single-effect, two-stage and double-effect lithium bromide absorption refrigeration cycles. A 1.n-effect absorption chiller system based on the best parameters was introduced and applied to a linear Fresnel solar absorption chiller system. Field refrigerator performance tests show that, based on this heat cycle design, the 1.n-effect lithium bromide absorption refrigeration power reaches 35.2 kW. The system meets theoretical expectations, has good flexibility and reliability, and provides guidance for the use of solar thermal energy.
Balancing Energy and Performance in Dense Linear System Solvers for Hybrid ARM+GPU platforms
Directory of Open Access Journals (Sweden)
Juan P. Silva
2016-04-01
Full Text Available The high performance computing community has traditionally focused solely on the reduction of execution time, though in recent years the optimization of energy consumption has become a main issue. A reduction of energy usage without a degradation of performance requires the adoption of energy-efficient hardware platforms accompanied by the development of energy-aware algorithms and computational kernels. The solution of linear systems is a key operation for many scientific and engineering problems. Its relevance has motivated an important amount of work, and consequently, it is possible to find high performance solvers for a wide variety of hardware platforms. In this work, we aim to develop a high performance and energy-efficient linear system solver. In particular, we develop two solvers for a low-power CPU-GPU platform, the NVIDIA Jetson TK1. These solvers implement the Gauss-Huard algorithm, yielding an efficient usage of the target hardware as well as efficient memory access. The experimental evaluation shows that the novel proposal reports important savings in both time and energy consumption when compared with the state-of-the-art solvers of the platform.
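For reference, the Gauss-Huard method is a Gauss-Jordan-type elimination whose operation count matches classical Gaussian elimination. The sketch below is a simplified dense Gauss-Jordan solver with partial pivoting, not the paper's Gauss-Huard GPU implementation:

```python
import numpy as np

def gauss_jordan_solve(A, b):
    """Gauss-Jordan elimination with partial pivoting on the augmented
    matrix. A simplified stand-in for the Gauss-Huard scheme, which
    reorganises this elimination to cut the flop count back to that of
    classical Gaussian elimination."""
    A = np.asarray(A, dtype=float)
    n = len(b)
    M = np.hstack([A, np.asarray(b, dtype=float).reshape(n, 1)])
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # pivot row
        M[[k, p]] = M[[p, k]]                 # row interchange
        M[k] /= M[k, k]                       # normalise pivot row
        for i in range(n):                    # eliminate column k everywhere
            if i != k:
                M[i] -= M[i, k] * M[k]
    return M[:, -1]                           # solution sits in the last column

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = gauss_jordan_solve(A, b)
print("max residual:", np.abs(np.array(A) @ x - np.array(b)).max())
```

Because the matrix is reduced fully to the identity, no separate back-substitution pass is needed, a property the Gauss-Huard variant exploits for regular memory access on accelerators.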
Effects of Concept Mapping Strategy on Learning Performance in Business and Economics Statistics
Chiou, Chei-Chang
2009-01-01
A concept map (CM) is a hierarchically arranged, graphic representation of the relationships among concepts. Concept mapping (CMING) is the process of constructing a CM. This paper examines whether a CMING strategy can be useful in helping students to improve their learning performance in a business and economics statistics course. A single…
The Use of Causal Mapping in the Design of Sustainability Performance Measurement Systems
DEFF Research Database (Denmark)
Parisi, Cristiana
2013-01-01
organisations’ strategic performance measurement systems (SPMSs). This study’s main contribution is the triangulation of multiple qualitative methods to enhance the reliability of causal maps. This innovative approach supports the use of causal mapping to extract managerial tacit knowledge in order to identify...
Ltaief, Hatem
2011-08-31
This paper presents the power profile of two high performance dense linear algebra libraries i.e., LAPACK and PLASMA. The former is based on block algorithms that use the fork-join paradigm to achieve parallel performance. The latter uses fine-grained task parallelism that recasts the computation to operate on submatrices called tiles. In this way tile algorithms are formed. We show results from the power profiling of the most common routines, which permits us to clearly identify the different phases of the computations. This allows us to isolate the bottlenecks in terms of energy efficiency. Our results show that PLASMA surpasses LAPACK not only in terms of performance but also in terms of energy efficiency. © 2011 Springer-Verlag.
Iterative linear solvers in a 2D radiation-hydrodynamics code: Methods and performance
International Nuclear Information System (INIS)
Baldwin, C.; Brown, P.N.; Falgout, R.; Graziani, F.; Jones, J.
1999-01-01
Computer codes containing both hydrodynamics and radiation play a central role in simulating both astrophysical and inertial confinement fusion (ICF) phenomena. A crucial aspect of these codes is that they require an implicit solution of the radiation diffusion equations. The authors present in this paper the results of a comparison of five different linear solvers on a range of complex radiation and radiation-hydrodynamics problems. The linear solvers used are diagonally scaled conjugate gradient, GMRES with incomplete LU preconditioning, conjugate gradient with incomplete Cholesky preconditioning, multigrid, and multigrid-preconditioned conjugate gradient. These problems involve shock propagation, opacities varying over 5--6 orders of magnitude, tabular equations of state, and dynamic ALE (Arbitrary Lagrangian Eulerian) meshes. They perform a problem size scalability study by comparing linear solver performance over a wide range of problem sizes from 1,000 to 100,000 zones. The fundamental question they address in this paper is: Is it more efficient to invert the matrix in many inexpensive steps (like diagonally scaled conjugate gradient) or in fewer expensive steps (like multigrid)? In addition, what is the answer to this question as a function of problem size and is the answer problem dependent? They find that the diagonally scaled conjugate gradient method performs poorly with the growth of problem size, increasing in both iteration count and overall CPU time with the size of the problem and also increasing for larger time steps. For all problems considered, the multigrid algorithms scale almost perfectly (i.e., the iteration count is approximately independent of problem size and problem time step). For pure radiation flow problems (i.e., no hydrodynamics), they see speedups in CPU time of factors of ∼15--30 for the largest problems, when comparing the multigrid solvers relative to diagonal scaled conjugate gradient
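The paper's central question (many cheap iterations versus few expensive ones) can be seen in miniature with scipy: for plain conjugate gradient on a 1D Poisson matrix, the iteration count grows with problem size, which is exactly the scaling behaviour the multigrid solvers avoid. The problem sizes are arbitrary, and the diagonal scaling is trivial here since the model matrix has a constant diagonal:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def poisson1d(n):
    # standard 1D Laplacian: tridiagonal (-1, 2, -1)
    return diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")

def cg_iters(n):
    A = poisson1d(n)
    b = np.ones(n)
    count = {"it": 0}
    def cb(_):
        count["it"] += 1          # one callback per CG iteration
    x, info = cg(A, b, callback=cb)
    assert info == 0              # converged
    return count["it"]

it100, it400 = cg_iters(100), cg_iters(400)
print(f"CG iterations: n=100 -> {it100}, n=400 -> {it400}")
```

The iteration count rises roughly with the mesh size, mirroring the poor scaling the authors report for diagonally scaled CG, whereas a multigrid cycle keeps the count nearly constant.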
Design and thermal performances of a scalable linear Fresnel reflector solar system
International Nuclear Information System (INIS)
Zhu, Yanqing; Shi, Jifu; Li, Yujian; Wang, Leilei; Huang, Qizhang; Xu, Gang
2017-01-01
Highlights: • A scalable linear Fresnel reflector which can supply different temperatures is proposed. • Inclination design of the mechanical structure is used to reduce the end losses. • A maximum thermal efficiency of 64% is achieved in Guangzhou. - Abstract: This paper proposes a scalable linear Fresnel reflector (SLFR) solar system. The optical mirror field, which contains an array of flat mirrors placed close to each other, is designed to eliminate inter-row shading and blocking. A scalable mechanical mirror support which can hold different numbers of mirrors is designed to supply different temperatures. The mechanical structure can be inclined to reduce the end losses. Finally, the thermal efficiency of the SLFR with two stage mirrors is tested. After adjustment, a maximum thermal efficiency of 64% was obtained and the mean thermal efficiency was higher than that before adjustment. The results indicate that the end losses were reduced effectively by the inclination design and that excellent thermal performance can be obtained by the SLFR after adjustment.
Evaluation of linear accelerator performance standards using an outcome oriented approach
International Nuclear Information System (INIS)
Rangel, Alejandra; Ploquin, Nicolas; Kay, Ian; Dunscombe, Peter
2008-01-01
Radiation therapy, along with other branches of medicine, is moving towards a firmer basis in evidence to optimally utilize resources. As new treatment technology and strategies place greater demands on quality assurance resources, the need to objectively evaluate equipment and process performance standards from the perspective of predicted clinical impact becomes more urgent. This study evaluates the appropriateness of recommended quality control tolerance and action levels for linear accelerators based on the calculated dosimetric impact of suboptimal equipment performance. A method is described to quantify the dosimetric changes, as reflected by the changes in the outcome surrogate, equivalent uniform dose (EUD), of machine performance deviations from the optimal, specifically in the range of tolerance and action levels promulgated by the Canadian Association of Provincial Cancer Agencies (CAPCA). Linear accelerator performance deviations were simulated for the treatment of prostate, breast, lung, and brain using 3D conformal techniques, and the impact evaluated in terms of the changes in the EUD of the target volumes and two principal organs at risk (OARs) per site. The eight key performance characteristics examined are: Output constancy, beam flatness, gantry angle, collimator angle, field size indicator, laser alignment (three directions) and, by inference, the optical distance indicator. Currently accepted CAPCA tolerance levels for these eight performance characteristics are shown to maintain average EUD deviations to within 2% for the targets and 2 Gy for the OARs. However, within the 2% or 2 Gy range, the recommended tolerance levels are found to have markedly different effects on the EUDs of the structures of interest
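The outcome surrogate used in the study above, the (generalised) equivalent uniform dose, is straightforward to compute. The sketch below shows how a small cold spot moves the EUD of a target volume; the dose values, volume fractions and the parameter a are illustrative only, not taken from the paper:

```python
import numpy as np

def eud(dose_gy, a, volumes=None):
    """Generalised equivalent uniform dose (gEUD):
    EUD = (sum_i v_i * d_i**a) ** (1/a), with fractional volumes v_i."""
    d = np.asarray(dose_gy, dtype=float)
    v = np.full(d.shape, 1.0 / d.size) if volumes is None else np.asarray(volumes)
    return np.sum(v * d ** a) ** (1.0 / a)

# toy dose distribution: nominal 60 Gy with a small underdosed region
nominal = np.full(100, 60.0)
perturbed = nominal.copy()
perturbed[:10] *= 0.98          # hypothetical 2% output drift in 10% of volume

a_tumour = -10                  # negative 'a' makes targets sensitive to cold spots
change = eud(perturbed, a_tumour) - eud(nominal, a_tumour)
print(f"EUD change: {change:.2f} Gy")
```

A sub-2% local dose error translates into a fraction-of-a-gray EUD shift, which is the scale on which the tolerance levels above were judged.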
DEFF Research Database (Denmark)
Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian
2015-01-01
The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL
Teacher characteristics and student performance: An analysis using hierarchical linear modelling
Directory of Open Access Journals (Sweden)
Paula Armstrong
2015-12-01
Full Text Available This research makes use of hierarchical linear modelling to investigate which teacher characteristics are significantly associated with student performance. Using data from the SACMEQ III study of 2007, an interesting and potentially important finding is that younger teachers are better able to improve the mean mathematics performance of their students. Furthermore, younger teachers themselves perform better on subject tests than do their older counterparts. Identical models are run for sub-Saharan countries bordering South Africa, as well as for Kenya, and the strong relationship between teacher age and student performance is not observed. Similarly, the model is run for South Africa using data from SACMEQ II (conducted in 2002) and the relationship between teacher age and student performance is also not observed. It must be noted that South African teachers were not tested in SACMEQ II, so it was not possible to observe differences in subject knowledge amongst teachers in different cohorts, and it was not possible to control for teachers' level of subject knowledge when observing the relationship between teacher age and student performance. Changes in teacher education in the late 1990s and early 2000s may explain the differences in the performance of younger teachers relative to their older counterparts observed in the later dataset.
Addressing the Influence of Hidden State on Wireless Network Optimizations using Performance Maps
DEFF Research Database (Denmark)
Højgaard-Hansen, Kim; Madsen, Tatiana Kozlova; Schwefel, Hans-Peter
2015-01-01
be used to optimize the use of the wireless network by predicting future network performance and scheduling the network communication for certain applications on mobile devices. However, other important factors influence the performance of the wireless communication such as changes in the propagation environment and resource sharing. In this work we extend the framework of performance maps for wireless networks by introducing network state as an abstraction for all other factors than location that influence the performance. Since network state might not always be directly observable the framework......Performance of wireless connectivity for network client devices is location dependent. It has been shown that it can be beneficial to collect network performance metrics along with location information to generate maps of the location dependent network performance. These performance maps can
MaMR: High-performance MapReduce programming model for material cloud applications
Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng
2017-02-01
With the increasing data size in materials science, existing programming models no longer satisfy the application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related data, and the processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined a programming model for material cloud applications that supports multiple different Map and Reduce functions running concurrently based on hybrid share-memory BSP called MaMR. An optimized data sharing strategy to supply the shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework present effective performance improvements compared to previous work.
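A toy MapReduce skeleton with an explicit merge phase illustrates the idea of combining the outputs of separate map/reduce pipelines; the names and structure here are illustrative, not the MaMR API:

```python
from collections import defaultdict
from functools import reduce

def map_phase(records, mapper):
    # shuffle/group key-value pairs emitted by the mapper
    groups = defaultdict(list)
    for rec in records:
        for key, value in mapper(rec):
            groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    return {k: reduce(reducer, vs) for k, vs in groups.items()}

def merge_phase(*partials):
    # extra phase: combine results of independently reduced pipelines
    merged = defaultdict(float)
    for part in partials:
        for k, v in part.items():
            merged[k] += v
    return dict(merged)

# two "material datasets" processed by separate map/reduce pipelines
ds1 = ["Fe O", "Fe Fe O"]
ds2 = ["O O Si"]
mapper = lambda line: [(tok, 1) for tok in line.split()]
adder = lambda a, b: a + b

out1 = reduce_phase(map_phase(ds1, mapper), adder)
out2 = reduce_phase(map_phase(ds2, mapper), adder)
print(merge_phase(out1, out2))  # element counts across both datasets
```

The merge phase is what lets multiple concurrent Map and Reduce stages over related datasets contribute to a single result without re-running either pipeline.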
International Nuclear Information System (INIS)
Uozumi, Satoru
2011-01-01
The scintillator-strip electromagnetic calorimeter (ScECAL) is one of the fine-granularity calorimeters proposed to realize the Particle Flow Algorithm for the International Linear Collider experiment. The ScECAL is a sandwich calorimeter with tungsten and scintillator layers, where each scintillator layer consists of plastic scintillator strips of size 1 cm x 4.5 cm x 0.2 cm with a small photo-sensor (MPPC) attached at its edge. In alternate scintillator layers, strips are orthogonally aligned to make a virtual 1 x 1 cm² cell at their crossing area. To establish the ScECAL technology, we have built a prototype of the ScECAL which consists of 30 tungsten and scintillator layers with 2160 scintillator strips in total. In 2008 and 2009 beam tests were performed at the Fermilab meson test beam line to evaluate the performance of the ScECAL prototype with various types of beams ranging from 1 to 32 GeV. As a preliminary result of the 2008 beam test, we have obtained linearity of the energy measurement within 6% of a perfectly linear response. The energy resolution is measured to be σ/E = (15.15±0.03)%/√E + (1.44±0.02)%. Although detailed analyses are still ongoing, these results already establish the feasibility of the ScECAL as a fine-granularity calorimeter. However, as the next step, to precisely measure even higher energy jets, we will proceed to an even more finely segmented calorimeter with 5 mm wide scintillator strips.
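The quoted resolution parametrisation is linear in 1/√E, so the stochastic and constant terms can be recovered with an ordinary straight-line fit. A noise-free sketch using the quoted central values (the energy points are illustrative):

```python
import numpy as np

# Energy resolution model quoted for the ScECAL prototype:
#   sigma/E = a / sqrt(E) + b,  with a ≈ 15.15 %·GeV^0.5 and b ≈ 1.44 %
a_true, b_true = 15.15, 1.44
E = np.array([1, 2, 4, 8, 16, 32], dtype=float)   # beam energies in GeV
res = a_true / np.sqrt(E) + b_true                # sigma/E in percent

# The model is linear in 1/sqrt(E), so an ordinary least-squares
# straight-line fit recovers the stochastic and constant terms
a_fit, b_fit = np.polyfit(1 / np.sqrt(E), res, 1)
print(f"stochastic term {a_fit:.2f}%/sqrt(E), constant term {b_fit:.2f}%")
```

In a real analysis each point carries a measurement uncertainty and the fit would be weighted accordingly; the transformation to 1/√E is unchanged.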
Stability Criterion of Linear Stochastic Systems Subject to Mixed H2/Passivity Performance
Directory of Open Access Journals (Sweden)
Cheung-Chieh Ku
2015-01-01
Full Text Available The H2 control scheme and passivity theory are applied to investigate the stability criterion of continuous-time linear stochastic systems subject to mixed performance. Based on the stochastic differential equation, the stochastic behaviors are described by multiplicative noise terms. For the considered system, the H2 control scheme is applied to the problem of minimizing output energy, and the asymptotic stability of the system can be guaranteed under desired initial conditions. Besides, passivity theory is employed to constrain the effect of external disturbances on the system. Moreover, the Itô formula and a Lyapunov function are used to derive sufficient conditions, which are converted into linear matrix inequality (LMI) form so that a convex optimization algorithm can be applied. By solving the sufficient conditions, a state feedback controller can be established such that the asymptotic stability and mixed performance of the system are achieved in the mean square. Finally, a synchronous generator system is used to verify the effectiveness and applicability of the proposed design method.
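The Lyapunov argument behind such LMI conditions can be illustrated on a deterministic system. The sketch below checks asymptotic stability of dx/dt = Ax by solving AᵀP + PA = −Q for P and verifying P > 0; the paper's full conditions additionally handle the multiplicative noise terms and the mixed H2/passivity constraint, which are omitted here.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A simple Hurwitz system matrix (eigenvalues -1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a @ X + X @ a.conj().T = q,
# so passing a = A.T and q = -Q yields A'P + PA = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

eigs = np.linalg.eigvalsh(P)
print(eigs.min() > 0)   # P positive definite -> asymptotically stable
```

In the LMI formulation of the paper, the same inequality (with the extra stochastic and passivity terms) is posed as a feasibility problem over P and the controller gain, and handed to a convex solver.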
The evaluation of speed skating helmet performance through peak linear and rotational accelerations.
Karton, Clara; Rousseau, Philippe; Vassilyadi, Michael; Hoshizaki, Thomas Blaine
2014-01-01
As in many sports involving high speeds and body contact, head injuries are a concern for short track speed skating athletes and coaches. While the mandatory use of helmets has nearly eliminated catastrophic head injuries such as skull fractures and cerebral haemorrhages, helmets may not be as effective at reducing the risk of concussion. The purpose of this study was to evaluate the performance characteristics of speed skating helmets with respect to managing peak linear and peak rotational acceleration, and to compare their performance against other types of helmets commonly worn within the sport. Commercially available speed skating, bicycle and ice hockey helmets were evaluated using a three-impact-condition test protocol at an impact velocity of 4 m/s. Two speed skating helmet models yielded mean peak linear accelerations in a low estimated-probability range for sustaining a concussion under all three impact conditions. Conversely, the resulting mean peak rotational acceleration values were all close to the high end of the probability range for sustaining a concussion. A similar tendency was observed for the bicycle and ice hockey helmets under the same impact conditions. Speed skating helmets may therefore not manage rotational acceleration effectively and may not successfully protect the user against the risks associated with concussion injuries.
Thermal performance of a linear Fresnel reflector solar concentrator PV/T energy systems
Energy Technology Data Exchange (ETDEWEB)
Gomaa, Mohamed R. [State Engineering University of Armenia (Armenia)], E-Mail: Dmoh_elbehary@yahoo.com
2011-07-01
This is a report on an investigation of photovoltaic/thermal (PV/T) collectors. Solar energy conversion efficiency was increased by combining PV/T collectors with low solar concentration technologies into a PV/T system operated at elevated temperature. The main novelty is the coupling of a linear Fresnel mirror reflecting concentrator with a channel PV/T collector. Concentrator PV/T collectors can operate at temperatures above 100 degrees Celsius, so the thermal energy can drive processes such as refrigeration, desalination and steam production. Analytical thermal performance of the solar system gives efficiency values over 60%, and the combined electric and thermal (CET) efficiency is high. A combined electric and heat power approach for the linear Fresnel reflector that employs high-performance CPV technology to produce both electricity and thermal energy at low to medium temperatures is presented. A well-functioning PV/T system can be designed and constructed with low concentration, and a total efficiency of nearly 80% can be attained.
Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka
2013-01-01
The linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO). However, the LDIW-PSO algorithm is known to suffer from premature convergence in solving complex (multi-peak) optimization problems, because particles lack enough momentum for exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to establish experimentally that LDIW-PSO is highly efficient if its parameters are properly set. First, an experiment was conducted to obtain a percentage value of the search-space limits from which to compute the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors that have previously claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted. PMID:24324383
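The LDIW schedule itself is a one-line formula inside an otherwise standard PSO loop. The following minimal sketch (parameter values are common textbook defaults, not the settings tuned in the paper) minimizes the sphere function; the inertia weight decreases linearly from w_start to w_end, and velocities are clamped to a fraction of the search range, mirroring the velocity-limit choice the paper studies.

```python
import random

def ldiw_pso(f, dim, bounds, n_particles=30, iters=200,
             w_start=0.9, w_end=0.4, c1=2.0, c2=2.0, seed=1):
    """Minimize f with PSO using a linearly decreasing inertia weight."""
    rng = random.Random(seed)
    lo, hi = bounds
    vmax = 0.5 * (hi - lo)              # velocity clamp (an assumed 50% of range)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pval = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for t in range(iters):
        w = w_start - (w_start - w_end) * t / (iters - 1)   # LDIW schedule
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))     # clamp velocity
                X[i][d] = max(lo, min(hi, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pval[i]:
                pbest[i], pval[i] = X[i][:], fx
                if fx < gval:
                    gbest, gval = X[i][:], fx
    return gbest, gval

sphere = lambda x: sum(v * v for v in x)
best, val = ldiw_pso(sphere, dim=5, bounds=(-10.0, 10.0))
print(round(val, 6))   # near zero on this unimodal test function
```

On a unimodal function like the sphere the schedule's late small w encourages fine-grained exploitation; the paper's argument is that with well-chosen vmax and w limits this simple variant remains competitive on multimodal benchmarks too.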
High performance waveguide-coupled Ge-on-Si linear mode avalanche photodiodes.
Martinez, Nicholas J D; Derose, Christopher T; Brock, Reinhard W; Starbuck, Andrew L; Pomerene, Andrew T; Lentine, Anthony L; Trotter, Douglas C; Davids, Paul S
2016-08-22
We present experimental results for a selectively epitaxially grown Ge-on-Si separate absorption and charge multiplication (SACM) integrated waveguide-coupled avalanche photodiode (APD) compatible with our silicon photonics platform. Epitaxially grown Ge-on-Si waveguide-coupled linear-mode avalanche photodiodes with varying lateral multiplication regions and different charge-implant dimensions are fabricated, and their illuminated device characteristics and high-speed performance are measured. We report a record gain-bandwidth product of 432 GHz for our highest performing waveguide-coupled avalanche photodiode operating at 1510 nm. Bit error rate measurements show operation with a BER of 10⁻¹² over the range from -18.3 dBm to -12 dBm received optical power into a 50 Ω load, and open eye diagrams with 13 Gbps pseudo-random data at 1550 nm.
Driving performance of a two-dimensional homopolar linear DC motor
Energy Technology Data Exchange (ETDEWEB)
Wang, Y.; Yamaguchi, M.; Kano, Y. [Tokyo University of Agriculture and Technology, Tokyo (Japan)
1998-05-01
This paper presents a novel two-dimensional homopolar linear DC motor (LDM) which can realize two-dimensional (2-D) motion. For position control purposes, two position-detecting methods are proposed. The position in one direction is detected by means of a capacitive sensor whose design makes its output partially immune to variation of the gap between electrodes. The position in the other direction is obtained by exploiting the position-dependent property of the driving coil inductance, instead of using an independent sensor. Position control is implemented on the motor and 2-D tracking performance is analyzed. Experiments show that the motor demonstrates satisfactory driving performance, with the 2-D tracking error remaining within 5.5% when the angular frequency of the reference signal is 3.14 rad/s. 7 refs., 17 figs., 2 tabs.
TBM performance prediction in Yucca Mountain welded tuff from linear cutter tests
International Nuclear Information System (INIS)
Gertsch, R.; Ozdemir, L.; Gertsch, L.
1992-01-01
This paper discusses performance predictions developed for tunnel boring machines operating in welded tuff during construction of the exploratory studies facility and the potential nuclear waste repository at Yucca Mountain. The predictions were based on data from an extensive series of linear cutting tests performed on samples of Topopah Spring welded tuff from the Yucca Mountain Project site. Using the cutter force, spacing, and penetration data from the experimental program, the thrust, torque, power, and rate of penetration were estimated for a 25 ft diameter tunnel boring machine (TBM) operating in welded tuff. The results show that the Topopah Spring welded tuff (TSw2) can be excavated at relatively high rates of advance with state-of-the-art TBMs. The results also show, however, that the TBM torque and power requirements will be higher than estimates based on rock physical properties and past tunneling experience in rock formations of similar strength.
Development and performance validation of a cryogenic linear stage for SPICA-SAFARI verification
Ferrari, Lorenza; Smit, H. P.; Eggens, M.; Keizer, G.; de Jonge, A. W.; Detrain, A.; de Jonge, C.; Laauwen, W. M.; Dieleman, P.
2014-07-01
In the context of the SAFARI instrument (SpicA FAR-infrared Instrument), SRON is developing a test environment to verify the SAFARI performance. The characterization of the detector focal plane will be performed with a back-illuminated pinhole over a re-imaged SAFARI focal plane by an XYZ scanning mechanism that consists of three linear stages stacked together. In order to reduce background radiation that can couple into the high-sensitivity cryogenic detectors (goal NEP of 2 x 10⁻¹⁹ W/√Hz and saturation power of a few femtowatts), the scanner is mounted inside the cryostat in the 4 K environment. The required readout accuracy is 3 μm, with a reproducibility of 1 μm, along the total travel of 32 mm. The stage will be operated in "on the fly" mode to prevent vibrations of the scanner mechanism and will move at a constant speed varying from 60 μm/s to 400 μm/s. In order to meet the requirements of large stroke, low dissipation (low friction) and high accuracy, a DC motor plus spindle stage solution was chosen. In this paper we present the stage design and characterization, describing also the measurement setup. The room-temperature performance was measured with a 3D measuring machine cross-calibrated with a laser interferometer and a 2-axis tilt sensor. The low-temperature verification was performed in a wet 4 K cryostat using a laser interferometer to measure the linear displacements and a theodolite to measure the angular displacements. The angular displacements can be calibrated with a precision of 4 arcsec, and the position could be determined with high accuracy. The presence of friction caused higher values of torque than predicted, and consequently higher dissipation. The thermal model of the stage has also been verified at 4 K.
Wu, Hung-Yi
2012-08-01
This study presents a structural evaluation methodology to link key performance indicators (KPIs) into a strategy map of the balanced scorecard (BSC) for banking institutions. Corresponding with the four BSC perspectives (finance, customer, internal business process, and learning and growth), the most important evaluation indicators of banking performance are synthesized from the relevant literature and screened by a committee of experts. The Decision Making Trial and Evaluation Laboratory (DEMATEL) method, a multiple criteria analysis tool, is then employed to determine the causal relationships between the KPIs, to identify the critical central and influential factors, and to establish a visualized strategy map with logical links to improve banking performance. An empirical application is provided as an example. According to the expert evaluations, the three most essential KPIs for banking performance are customer satisfaction, sales performance, and customer retention rate. The DEMATEL results demonstrate a clear road map to assist management in prioritizing the performance indicators and in focusing attention on the strategy-related activities of the crucial indicators. According to the constructed strategy map, management could better invest limited resources in the areas that need improvement most. Although these strategy maps of the BSC are not universal, the research results show that the presented approach is an objective and feasible way to construct strategy maps more justifiably. The proposed framework can be applicable to institutions in other industries as well. Copyright © 2011 Elsevier Ltd. All rights reserved.
Bearing Performance Degradation Assessment Using Linear Discriminant Analysis and Coupled HMM
International Nuclear Information System (INIS)
Liu, T; Chen, J; Zhou, X N; Xiao, W B
2012-01-01
Bearings are among the most important units in rotary machinery, and their performance may vary significantly across different working stages. It is therefore critical to choose the most effective features for bearing performance degradation prediction. Linear Discriminant Analysis (LDA) is a useful method for finding the few feature dimensions that best discriminate a set of features extracted from the original vibration signals. Another challenge in bearing performance degradation assessment is how to build a model that recognizes the different conditions from data coming from several monitoring channels. In this paper, coupled hidden Markov models (CHMM) are presented to model interacting processes, overcoming a deficiency of the standard HMM. Because the input data in a CHMM are collected by several sensors, and the interacting information can be fused across the coupled modalities, it is more effective than an HMM that uses only one state chain. The model can be used to estimate the bearing performance degradation state from several observation data streams. For degradation pattern recognition, the new observation features are input into the pre-trained CHMM and a performance index (PI) is calculated from the outputs; changes in the PI describe the different degradation levels of the bearings. The results show that the PI declines as bearing degradation increases. Assessment results on whole-lifetime experimental bearing signals validate the feasibility and effectiveness of this method.
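The LDA step can be shown in a few lines of numpy for the two-class case. The feature clouds below are synthetic stand-ins for extracted vibration features under two degradation states (the real features and the downstream CHMM are not reproduced here); Fisher's discriminant w = Sw⁻¹(m₁ − m₀) gives the one projection direction that best separates the classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for 4-D bearing features under two degradation states:
# each class is a Gaussian cloud with a different mean.
healthy = rng.normal(loc=[0, 0, 0, 0], scale=1.0, size=(200, 4))
degraded = rng.normal(loc=[3, 2, 0, -2], scale=1.0, size=(200, 4))

# Fisher's linear discriminant: w = Sw^{-1} (m1 - m0), where Sw is the
# pooled within-class scatter. This is the LDA dimension-reduction step;
# the paper then feeds such reduced features into a coupled HMM.
m0, m1 = healthy.mean(axis=0), degraded.mean(axis=0)
Sw = np.cov(healthy, rowvar=False) + np.cov(degraded, rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)

proj0 = healthy @ w     # 1-D projections of each class
proj1 = degraded @ w

# Between-class separation along w should dwarf the within-class spread.
gap = abs(proj1.mean() - proj0.mean())
spread = max(proj0.std(), proj1.std())
print(gap > 3 * spread)
```

Choosing features this way concentrates the degradation-relevant variation into a few dimensions before the sequential (CHMM) modeling stage.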
A network application for modeling a centrifugal compressor performance map
Nikiforov, A.; Popova, D.; Soldatova, K.
2017-08-01
The approximation of the aerodynamic performance of a centrifugal compressor stage and vaneless diffuser by neural networks is presented. Advantages, difficulties and specific features of the method are described. An example of a neural network and its structure is shown. The performance, in terms of efficiency, pressure ratio and work coefficient, of 39 model stages within the flow-coefficient range from 0.01 to 0.08 was modeled with a mean squared error of 1.5%. In addition, the loss and friction coefficients of vaneless diffusers of relative widths 0.014-0.10 are modeled with a mean squared error of 2.45%.
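The idea of approximating a smooth performance map with a small network can be sketched as follows. The efficiency curve here is synthetic (the 39 real model stages are not public), and the network is a random-feature net with a least-squares readout, a lightweight stand-in for the fully trained networks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic single-peaked efficiency curve vs. flow coefficient, on the
# paper's 0.01-0.08 flow-coefficient range (illustrative shape only).
phi = np.linspace(0.01, 0.08, 80)
eta = 0.85 - 120.0 * (phi - 0.05) ** 2

# Fixed random tanh hidden layer + least-squares output weights.
x = (phi - phi.mean()) / phi.std()               # normalize the input
hidden = np.tanh(np.outer(x, rng.normal(0, 2, 16)) + rng.normal(0, 2, 16))
H = np.column_stack([hidden, np.ones_like(x)])   # add a bias feature
w_out, *_ = np.linalg.lstsq(H, eta, rcond=None)

mse = float(np.mean((H @ w_out - eta) ** 2))
print(mse)   # tiny: 17 parameters reproduce the smooth curve
```

The same pattern extends to the multi-input case (flow coefficient plus geometry parameters in, efficiency/pressure ratio/work coefficient out), which is what the paper's networks model across the stage family.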
Factors Affecting Students' Performance and Practice on Map ...
African Journals Online (AJOL)
Percentages are used to show the level of student performance and achievement. The findings suggest that possible interventions to help students attain high academic achievement should focus on teachers' training, enabling students to work hard and persevere to succeed, identifying effective study ...
Mapping the Developmental Constraints on Working Memory Span Performance
Bayliss, Donna M.; Jarrold, Christopher; Baddeley, Alan D.; Gunn, Deborah M.; Leigh, Eleanor
2004-01-01
This study investigated the constraints underlying developmental improvements in complex working memory span performance among 120 children of between 6 and 10 years of age. Independent measures of processing efficiency, storage capacity, rehearsal speed, and basic speed of processing were assessed to determine their contribution to age-related…
AAPM Medical Physics Practice Guideline 8.a.: Linear accelerator performance tests.
Smith, Koren; Balter, Peter; Duhon, John; White, Gerald A; Vassy, David L; Miller, Robin A; Serago, Christopher F; Fairobent, Lynne A
2017-07-01
The purpose of this guideline is to provide a list of critical performance tests in order to assist the Qualified Medical Physicist (QMP) in establishing and maintaining a safe and effective quality assurance (QA) program. The performance tests on a linear accelerator (linac) should be selected to fit the clinical patterns of use of the accelerator and care should be given to perform tests which are relevant to detecting errors related to the specific use of the accelerator. A risk assessment was performed on tests from current task group reports on linac QA to highlight those tests that are most effective at maintaining safety and quality for the patient. Recommendations are made on the acquisition of reference or baseline data, the establishment of machine isocenter on a routine basis, basing performance tests on clinical use of the linac, working with vendors to establish QA tests and performing tests after maintenance. The recommended tests proposed in this guideline were chosen based on the results from the risk analysis and the consensus of the guideline's committee. The tests are grouped together by class of test (e.g., dosimetry, mechanical, etc.) and clinical parameter tested. Implementation notes are included for each test so that the QMP can understand the overall goal of each test. This guideline will assist the QMP in developing a comprehensive QA program for linacs in the external beam radiation therapy setting. The committee sought to prioritize tests by their implication on quality and patient safety. The QMP is ultimately responsible for implementing appropriate tests. In the spirit of the report from American Association of Physicists in Medicine Task Group 100, individual institutions are encouraged to analyze the risks involved in their own clinical practice and determine which performance tests are relevant in their own radiotherapy clinics. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on
Mumtaz, Ubaidullah; Ali, Yousaf; Petrillo, Antonella
2018-05-15
The increase in environmental pollution is one of the most important topics in today's world. In this context, industrial activities can pose a significant threat to the environment. Several methods, techniques and approaches have been developed to manage the problems associated with industrial activities. Green supply chain management (GSCM) is considered one of the most important environmental management approaches. In developing countries such as Pakistan, the implementation of GSCM practices is still in its initial stages; lack of knowledge about their effects on economic performance is the reason industries fear implementing these practices. The aim of this research is to assess the effects of GSCM practices on organizational performance in Pakistan. The GSCM practices considered are internal practices, external practices, investment recovery and eco-design, while the performance parameters considered are environmental pollution, operational cost and organizational flexibility. A set of hypotheses proposes the effect of each GSCM practice on the performance parameters. Factor analysis and linear regression are used to analyze survey data from Pakistani industries in order to test these hypotheses. The findings of this research indicate a decrease in environmental pollution and operational cost with the implementation of GSCM practices, whereas organizational flexibility has not improved for Pakistani industries. These results aim to help managers in their decisions on implementing GSCM practices in the industrial sector of Pakistan. Copyright © 2017 Elsevier B.V. All rights reserved.
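The regression step in such survey studies is ordinary least squares of a performance outcome on the practice scores. The sketch below uses hypothetical Likert-style data generated so that pollution decreases with two of the practices; the real survey data, variables, and coefficients are not public here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical survey scores (1-5 Likert) for four GSCM practices across
# 60 firms: internal, external, investment recovery, eco-design.
n = 60
X = rng.integers(1, 6, size=(n, 4)).astype(float)

# By construction, pollution falls as internal/external practices rise
# (a toy stand-in for the paper's survey responses).
pollution = 5.0 - 0.6 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(0, 0.3, n)

# Ordinary least squares with an intercept.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, pollution, rcond=None)

# Negative coefficients on the practice scores indicate that stronger
# GSCM practices are associated with lower pollution in this toy data.
print(beta[1] < 0 and beta[2] < 0)
```

In the study itself, factor analysis first condenses the questionnaire items into the four practice constructs before regressions like this are fitted.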
Mapping the developmental constraints on working memory span performance.
Bayliss, Donna M; Jarrold, Christopher; Baddeley, Alan D; Gunn, Deborah M; Leigh, Eleanor
2005-07-01
This study investigated the constraints underlying developmental improvements in complex working memory span performance among 120 children of between 6 and 10 years of age. Independent measures of processing efficiency, storage capacity, rehearsal speed, and basic speed of processing were assessed to determine their contribution to age-related variance in complex span. Results showed that developmental improvements in complex span were driven by two age-related but separable factors: one associated with general speed of processing and one associated with storage ability. In addition, there was an age-related contribution shared between working memory, processing speed, and storage ability that was important for higher level cognition. These results pose a challenge for models of complex span performance that emphasize the importance of processing speed alone.
Seo, Jongmin; Schiavazzi, Daniele; Marsden, Alison
2017-11-01
Cardiovascular simulations are increasingly used in clinical decision making, surgical planning, and disease diagnostics. Patient-specific modeling and simulation typically proceeds through a pipeline from anatomic model construction using medical image data to blood flow simulation and analysis. To provide confidence intervals on simulation predictions, we use an uncertainty quantification (UQ) framework to analyze the effects of numerous uncertainties that stem from clinical data acquisition, modeling, material properties, and boundary condition selection. However, UQ poses a computational challenge requiring multiple evaluations of the Navier-Stokes equations in complex 3-D models. To achieve efficiency in UQ problems with many function evaluations, we implement and compare a range of iterative linear solver and preconditioning techniques in our flow solver. We then discuss applications to patient-specific cardiovascular simulation and how the problem/boundary condition formulation in the solver affects the selection of the most efficient linear solver. Finally, we discuss performance improvements in the context of uncertainty propagation. Support from National Institute of Health (R01 EB018302) is greatly appreciated.
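A minimal example of the solver/preconditioner pairing discussed above: conjugate gradients with and without a Jacobi (diagonal) preconditioner on a small SPD system. This is purely illustrative; the paper's systems come from 3-D Navier-Stokes discretizations and a production solver stack, neither of which is reproduced here.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# Tridiagonal SPD test system (a 1-D stand-in for the implicit solves
# arising in the flow solver). The varying diagonal keeps A SPD and
# makes the diagonal preconditioner non-trivial.
n = 200
main = 2.0 + np.arange(n) % 5
A = diags([-np.ones(n - 1), main, -np.ones(n - 1)], [-1, 0, 1], format="csr")
b = np.ones(n)

# Jacobi preconditioner: apply D^{-1} cheaply as a LinearOperator.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)

x_plain, info_plain = cg(A, b)        # unpreconditioned CG
x_prec, info_prec = cg(A, b, M=M)     # Jacobi-preconditioned CG

res = np.linalg.norm(A @ x_prec - b) / np.linalg.norm(b)
print(info_plain == 0, info_prec == 0, res < 1e-4)
```

In a UQ setting the same system (or closely related ones) is solved many times, which is why the cost difference between solver/preconditioner choices compounds across the ensemble of evaluations.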
O'Donnell, Shannon; Tavares, Francisco; McMaster, Daniel; Chambers, Samuel; Driller, Matthew
2018-01-01
The current study aimed to assess the validity and test-retest reliability of a linear position transducer when compared to a force plate through a counter-movement jump in female participants. Twenty-seven female recreational athletes (19 ± 2 years) performed three counter-movement jumps simultaneously using the linear position transducer and…
Benchmarking the mARC performance. Treatment time and dosimetric linearity
Energy Technology Data Exchange (ETDEWEB)
Dzierma, Yvonne; Nuesken, Frank; Licht, Norbert; Ruebe, Christian [Universitaetsklinikum des Saarlandes, Homburg/Saar (Germany). Klinik fuer Strahlentherapie und Radioonkologie
2016-07-01
The mARC technique is a hybrid rotational IMRT modality operating in "burst mode". While it is generally assumed to be slower than VMAT, the real limits of operation have not been defined so far. We here present the first systematic study of the technical limits on mARC treatment. The following scenarios are considered: 18, 30, 36 or 45 arclets per rotation (spacing between 20° and 8°), flat and flattening-filter-free (FFF) energy, arclet width 4° or 2°, and from 1 MU/arclet to 1000 MU/plan. All scenarios are irradiated, treatment times are measured and treatment parameters reported. Dose linearity was assessed by point dose measurements of the 18-arclet plans with 1-30 MU per arclet. Minimum treatment times (no MLC movement, few MUs) depend strongly on the number of arclets per rotation (1 minute for 18 arclets to 1:50 min for 45 arclets), and rise linearly with MU/arclet beyond a cut-off value depending on scenario, arclet width and available maximum dose rate. MLC movement adds up to 2 minutes of treatment time, but generally less (ca. 45 seconds in realistic plans). The rules by which irradiation parameters are selected by the firmware can be partly discovered; the choice of dose rate is most clearly defined. For the flat 6 MV energy, the highest available dose rate (300 MU/min) is always applied. For FFF 7 MV, the dose rate is reduced for arclets with few MUs, so that an arclet is irradiated in no less than 0.3 s. Only for the case of 1 MU/arclet can this constraint not be met (the technical limit on the dose rate is 500 MU/min for FFF 7 MV); in this case, dosimetric linearity is reduced. In all other instances, deviations from linearity at low MU remain below 2%. Treatment times down to 90 seconds are technically achievable for treatment with FFF beams using up to 36 arclets per rotation (arclet spacing every 10°) for up to 900 MU/plan, comparable to VMAT treatment times. The values provided here are meant to serve as a reference.
Comparing performance of standard and iterative linear unmixing methods for hyperspectral signatures
Gault, Travis R.; Jansen, Melissa E.; DeCoster, Mallory E.; Jansing, E. David; Rodriguez, Benjamin M.
2016-05-01
Linear unmixing is a method of decomposing a mixed signature to determine the component materials present in a sensor's field of view, along with the abundances at which they occur. Linear unmixing assumes that energy from the materials in the field of view is mixed in a linear fashion across the spectrum of interest. Traditional unmixing methods can take advantage of adjacent pixels in the decomposition algorithm, but this is not possible for point sensors. This paper explores several iterative and non-iterative methods for linear unmixing and examines their effectiveness at identifying the individual signatures that make up simulated single-pixel mixed signatures, along with their corresponding abundances. The major hurdle addressed by the proposed method is that no neighboring-pixel information is available for the spectral signature of interest. Testing is performed using two collections of spectral signatures from the Johns Hopkins University Applied Physics Laboratory's Signatures Database software (SigDB): a hand-selected small dataset of 25 distinct signatures, and a larger dataset of approximately 1600 pure visible/near-infrared/short-wave-infrared (VIS/NIR/SWIR) spectra. Simulated spectra are created from three- and four-material mixtures randomly drawn from a dataset originating from SigDB, where the abundance of one material is swept in 10% increments from 10% to 90% with the abundances of the other materials equally divided among the remainder. For the smaller dataset of 25 signatures, all combinations of three or four materials are used to create simulated spectra, from which the accuracy of the materials returned, as well as the correctness of the abundances, is compared to the inputs. The experiment is expanded to include the signatures from the larger dataset of almost 1600 signatures, evaluated using a Monte Carlo scheme with 5000 draws of three or four materials to create the simulated mixed signatures. The spectral similarity of the inputs to the
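The linear mixing model and one standard non-iterative solver can be shown concretely. The endmember spectra below are synthetic Gaussians standing in for SigDB signatures (which are not public here), and the abundance pattern mirrors the paper's sweep: one material dominant, the rest splitting the remainder equally. Non-negative least squares recovers the abundances from the single mixed spectrum alone, with no neighboring-pixel information.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Three synthetic "endmember" spectra over 100 bands (smooth bumps
# standing in for real VIS/NIR/SWIR material signatures).
bands = np.linspace(0, 1, 100)
E = np.stack([np.exp(-((bands - c) ** 2) / 0.05) for c in (0.2, 0.5, 0.8)],
             axis=1)                       # shape: (bands, materials)

# Linear mixing model: observed = E @ abundances (+ sensor noise).
true_a = np.array([0.6, 0.2, 0.2])
mixed = E @ true_a + rng.normal(0, 1e-3, len(bands))

# Non-negative least squares: abundances must be physically >= 0.
est_a, resid = nnls(E, mixed)
print(np.round(est_a, 2))
```

The iterative methods compared in the paper differ mainly in how they select candidate endmembers from a large library before a solve like this one; the per-pixel abundance estimation itself remains a constrained least-squares problem.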
Energy Technology Data Exchange (ETDEWEB)
Othman, Ahmed E. [RWTH Aachen University, Department of Diagnostic and Interventional Neuroradiology, Aachen (Germany); Eberhard Karls University Tuebingen, University Hospital Tuebingen, Department for Diagnostic and Interventional Radiology, Tuebingen (Germany); Afat, Saif; Nikoubashman, Omid; Mueller, Marguerite; Wiesmann, Martin; Brockmann, Carolin [RWTH Aachen University, Department of Diagnostic and Interventional Neuroradiology, Aachen (Germany); Schubert, Gerrit Alexander [RWTH Aachen University, Department of Neurosurgery, Aachen (Germany); Bier, Georg [Eberhard Karls University Tuebingen, University Hospital Tuebingen, Department for Diagnostic and Interventional Neuroradiology, Tuebingen (Germany); Brockmann, Marc A. [RWTH Aachen University, Department of Diagnostic and Interventional Neuroradiology, Aachen (Germany); University Hospital Mainz, Department of Neuroradiology, Mainz (Germany)
2016-08-15
In this study, we aimed to evaluate the diagnostic performance of different volume perfusion CT (VPCT) maps regarding the detection of cerebral vasospasm compared to angiographic findings. Forty-one datasets of 26 patients (57.5 ± 10.8 years, 18 F) with subarachnoid hemorrhage and suspected cerebral vasospasm, who underwent VPCT and angiography within 6 h, were included. Two neuroradiologists independently evaluated the presence and severity of vasospasm on perfusion maps on a 3-point Likert scale (0 - no vasospasm, 1 - vasospasm affecting <50 %, 2 - vasospasm affecting >50 % of vascular territory). A third neuroradiologist independently assessed angiography for the presence and severity of vasospasm on a 3-point Likert scale (0 - no vasospasm, 1 - vasospasm affecting < 50 %, 2 - vasospasm affecting > 50 % of vessel diameter). Perfusion maps of cerebral blood volume (CBV), cerebral blood flow (CBF), mean transit time (MTT), and time to drain (TTD) were evaluated regarding diagnostic accuracy for cerebral vasospasm with angiography as reference standard. Correlation analysis of vasospasm severity on perfusion maps and angiographic images was performed. Furthermore, inter-reader agreement was assessed regarding findings on perfusion maps. Diagnostic accuracy for TTD and MTT was significantly higher than for all other perfusion maps (TTD, AUC = 0.832; MTT, AUC = 0.791; p < 0.001). TTD revealed higher sensitivity than MTT (p = 0.007). The severity of vasospasm on TTD maps showed significantly higher correlation levels with angiography than all other perfusion maps (p ≤ 0.048). Inter-reader agreement was (almost) perfect for all perfusion maps (kappa ≥ 0.927). The results of this study indicate that TTD maps have the highest sensitivity for the detection of cerebral vasospasm and highest correlation with angiography regarding the severity of vasospasm. (orig.)
Ying, Guo; Jianping, Xie; Haiyun, Luo; Xia, Li; Jianyu, Yang; Qun, Xuan; Jianyun, Yu
2017-07-01
To determine whether students using mind maps would improve their performance in a final examination at the end of a lecture-based pharmacology course. A quasi-experimental study. Kunming Medical University, from September 2014 to January 2015. One hundred and twenty-two (122) third-year undergraduate medical students, starting a 48-hour lecture-based pharmacology course, volunteered to use mind maps as one of their study strategies (intervention group), while the remaining 100 students in the class continued to use their usual study strategies (control group) over the duration of the course. The performance of both groups in the final course examination was compared. Students in the intervention group also completed a questionnaire on the usefulness of mind maps during the course and in preparation for the final examination. The performance of the intervention group was superior to that of the control group in all parts of a multi-modal final examination. For the multiple choice questions and comprehensive scores, the control group acquired average marks of 45.97 ±7.22 and 68.07 ±12.77, respectively, compared with 51.77 ±4.95 in the intervention group; scores on the remaining components were also significantly higher, at 8.00 (4.00) (p=0.024) and 10.00 (2.00). On the questionnaire, students reported that mind maps helped them to prepare more efficiently for the final exam; 90.91% believed that mind maps helped them to better understand all of pharmacology. Ninety-one percent also thought that mind maps would help them to better understand other disciplines, and 86.36% of students would like the lecturers to utilize mind mapping as an alternative to conventional teaching formats, such as the use of PowerPoint. The addition of mind maps to students' study of pharmacology at Kunming Medical University improved their performance in all aspects of a multi-modal final examination.
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
DEFF Research Database (Denmark)
Fitzek, Frank; Toth, Tamas; Szabados, Áron
2014-01-01
This paper advocates the use of random linear network coding for storage in distributed clouds in order to reduce storage and traffic costs in dynamic settings, i.e., when adding and removing numerous storage devices/clouds on-the-fly and when the number of reachable clouds is limited. We introduce various network coding approaches that trade off reliability, storage and traffic costs, and system complexity, relying on probabilistic recoding for cloud regeneration. We compare these approaches with other approaches based on data replication and Reed-Solomon codes. A simulator has been developed to carry out a thorough performance evaluation of the various approaches under different system settings, e.g., finite fields, and network/storage conditions, e.g., storage space used per cloud, limited network use, and limited recoding capabilities. In contrast to standard coding approaches, our...
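The core coding idea can be illustrated with a minimal sketch over the smallest finite field, GF(2): source packets are combined with random binary coefficients, and any full-rank set of coded packets can be decoded by Gaussian elimination. The packet sizes, the GF(2) choice, and the retry-until-invertible loop below are illustrative assumptions, not the field sizes or recoding policy of the paper's schemes.

```python
import numpy as np

def gf2_rank(m):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    m = m.copy() % 2
    rank = 0
    rows, cols = m.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for r in range(rows):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]
        rank += 1
    return rank

def gf2_solve(a, b):
    """Solve a @ x = b over GF(2) for an invertible a (b may have many columns)."""
    n = a.shape[0]
    aug = np.concatenate([a % 2, b % 2], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r, col])
        aug[[col, pivot]] = aug[[pivot, col]]
        for r in range(n):
            if r != col and aug[r, col]:
                aug[r] ^= aug[col]
    return aug[:, n:]

rng = np.random.default_rng(7)
k, packet_len = 4, 8
data = rng.integers(0, 2, size=(k, packet_len))  # k source packets (bits)

# Draw random coefficient matrices until one has full rank over GF(2).
while True:
    coeffs = rng.integers(0, 2, size=(k, k))
    if gf2_rank(coeffs) == k:
        break

coded = coeffs @ data % 2             # encode: random linear combinations
recovered = gf2_solve(coeffs, coded)  # decode: eliminate over GF(2)
assert (recovered == data).all()
```

Recoding, the operation the paper relies on for cloud regeneration, amounts to taking further random linear combinations of already-coded packets without first decoding them.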
Performance analysis of flow lines with non-linear flow of material
Helber, Stefan
1999-01-01
Flow line design is one of the major tasks in production management. The decision to install a set of machines and buffers is often highly irreversible. It determines both cost and revenue to a large extent. In order to assess the economic impact of any possible flow line design, production rates and inventory levels have to be estimated. These performance measures depend on the allocation of buffers whenever the flow of material is occasionally disrupted, for example due to machine failures or quality problems. The book describes analytical methods that can be used to evaluate flow lines much faster than with simulation techniques. Based on these fast analytical techniques, it is possible to determine a flow line design that maximizes the net present value of the flow line investment. The flow of material through the line may be non-linear, for example due to assembly operations or quality inspections.
International Nuclear Information System (INIS)
Malik, M.; Alam, S.; Irfan, M.; Hassan, Z.
2006-01-01
PVD-based hard coatings have achieved remarkable improvements in the tribological and surface properties of coated tools and dies. These coatings have a wide range of industrial applications, especially in aerospace and automobile parts, where they encounter various chemical attacks; to perform well industrially, they must provide excellent resistance against corrosion, high-temperature oxidation, and chemical reaction. This paper focuses on the behaviour of PVD-based hard coatings in different corrosive environments such as H₂SO₄, HCl, NaCl, KCl, and NaOH. Corrosion rates were calculated by the linear sweep voltammetry method, with Tafel extrapolation curves used to continuously monitor the corrosion rate. The results show that these coatings have excellent resistance against chemical attack. (author)
A Linear Algebra Framework for Static High Performance Fortran Code Distribution
Directory of Open Access Journals (Sweden)
Corinne Ancourt
1997-01-01
Full Text Available High Performance Fortran (HPF) was developed to support data parallel programming for single-instruction multiple-data (SIMD) and multiple-instruction multiple-data (MIMD) machines with distributed memory. The programmer is provided a familiar uniform logical address space and specifies the data distribution by directives. The compiler then exploits these directives to allocate arrays in the local memories, to assign computations to elementary processors, and to migrate data between processors when required. We show here that linear algebra is a powerful framework to encode HPF directives and to synthesize distributed code with space-efficient array allocation, tight loop bounds, and vectorized communications for INDEPENDENT loops. The generated code includes traditional optimizations such as guard elimination, message vectorization and aggregation, and overlap analysis. The systematic use of an affine framework makes it possible to prove the compilation scheme correct.
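The kind of affine index arithmetic such a framework encodes can be sketched for the standard HPF CYCLIC(b) distribution: each global index maps to an owning processor and a local (block, offset) pair by integer division and modulo. These are the textbook mapping formulas, not expressions taken from the paper.

```python
def cyclic_owner(i, b, p):
    """Owning processor of global index i under a CYCLIC(b) distribution over p procs."""
    return (i // b) % p

def cyclic_local(i, b, p):
    """Local (block, offset) coordinates of global index i on its owner."""
    return (i // (b * p), i % b)

# Global indices 0..11 distributed CYCLIC(2) over 3 processors:
layout = [(i, cyclic_owner(i, 2, 3), cyclic_local(i, 2, 3)) for i in range(12)]
# e.g. global index 6 lives on processor 0 as its second local block
assert cyclic_owner(6, 2, 3) == 0 and cyclic_local(6, 2, 3) == (1, 0)
```

Because both maps are affine in `i` (piecewise, via div/mod), loop bounds and communication sets over them can be derived with integer linear programming, which is the sense in which linear algebra "encodes" the directives.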
Directory of Open Access Journals (Sweden)
Chin-Yuan Lai
2015-06-01
Full Text Available Representation is important for problem solving. This study examined the effects of different forms of concept maps on nursing students’ performances of conceptualizing psychiatric patients’ problems. A quasi-experimental research design was adopted to investigate the effects. The participants were two classes of fourth-year students who were enrolled in a psychiatric nursing course in a nursing college. One class with 48 students served as the experimental group, and used web-based concepts map to represent patients’ problem. The other class with 50 students served as the control group, and used the traditional hierarchical concept mapping method. The results indicated that the concept maps drawn by the experimental group showed more nursing problem, supporting evidence, and relationships between nursing problems than those drawn by the control group. The web-based concept maps helped expand students’ thinking and promoted their causality reasoning. Different concept-map representation tools affected the process of students’ problem solving. The experimental learning activities promoted students’ understanding of concepts and ways of psychiatric patients’ care taking. To understand the effects of other types of concept maps, future research may guide students in using different forms of concept maps throughout the stages of nursing process.
Centralized motion control of a linear tooth belt drive: Analysis of the performance and limitations
Energy Technology Data Exchange (ETDEWEB)
Jokinen, M.
2010-07-01
A centralized robust position control for an electrically driven tooth belt drive is designed in this doctoral thesis. Both a cascaded control structure and a PID-based position controller are discussed. The performance and the limitations of the system are analyzed, and design principles for the mechanical structure and the control design are given. These design principles are also suitable for most motion control applications where mechanical resonance frequencies and control loop delays are present. One of the major challenges in the design of a controller for machinery applications is that the values of the parameters in the system model (parameter uncertainty) or the system model itself (non-parametric uncertainty) are seldom known accurately in advance. In this thesis a systematic analysis of the parameter uncertainty of the linear tooth belt drive model is presented, and the effect of the variation of a single parameter on the performance of the total system is shown. The total variation of the model parameters is taken into account in the control design phase using Quantitative Feedback Theory (QFT). The thesis also introduces a new method to analyze reference feedforward controllers applying the QFT. The performance of the designed controllers is verified by experimental measurements. The measurements confirm the control design principles that are given in this thesis. (orig.)
International Nuclear Information System (INIS)
Gertsch, R.; Ozdemir, L.
1992-09-01
The performances of mechanical excavators are predicted for excavations in welded tuff. Emphasis is given to tunnel boring machine evaluations based on linear cutting machine test data obtained on samples of Topopah Spring welded tuff. The tests involve measurement of forces as cutters are applied to the rock surface at certain spacings and penetrations. Two disc and two point-attack cutters representing currently available technology are thus evaluated. The performance predictions based on these direct experimental measurements are believed to be more accurate than any previous values for mechanical excavation of welded tuff. The calculations of performance are predicated on minimizing the amount of energy required to excavate the welded tuff. Specific energy decreases with increasing spacing and penetration, and reaches its lowest value at the widest spacing and deepest penetration used in this test program. Using the force, spacing, and penetration data from this experimental program, the thrust, torque, power, and rate of penetration are calculated for several types of mechanical excavators. The results of this study show that the candidate excavators will require higher torque and power than heretofore estimated.
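The specific-energy criterion used above has a simple form: for a rolling cutter, energy per unit volume of rock removed reduces to rolling force divided by the spacing-penetration product. A minimal sketch follows; the force, spacing, and penetration values are hypothetical, not the Topopah Spring test data.

```python
def specific_energy(rolling_force_kn, spacing_mm, penetration_mm):
    """Specific energy in MJ/m^3: cutting energy per unit volume of rock removed.
    SE = Fr / (s * p); the factor 1000 converts kN/mm^2 to MJ/m^3."""
    return 1000.0 * rolling_force_kn / (spacing_mm * penetration_mm)

# Wider spacing and deeper penetration lower the specific energy,
# matching the trend reported for the welded-tuff tests (numbers hypothetical).
se_tight = specific_energy(5.0, 50.0, 3.0)   # narrow spacing, shallow cut
se_wide = specific_energy(7.0, 100.0, 6.0)   # wide spacing, deep cut
assert se_wide < se_tight
```

The optimum cutterhead layout is then the spacing/penetration combination that minimizes this quantity subject to the machine's thrust and torque limits.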
Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.
Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah
2012-01-01
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
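The distribution at the heart of the model above can be written down directly: the COM-Poisson pmf is a Poisson pmf with the factorial raised to a dispersion power ν, divided by a normalizing series Z(λ, ν). A minimal sketch (not the article's GLM fitting code):

```python
import math

def com_poisson_pmf(y, lam, nu, terms=100):
    """COM-Poisson pmf: P(Y = y) = lam**y / (y!)**nu / Z(lam, nu).
    Z is summed term-recursively to avoid overflowing large factorials."""
    z, term = 0.0, 1.0  # term for j = 0 is 1
    for j in range(terms):
        z += term
        term *= lam / (j + 1) ** nu
    return lam ** y / math.factorial(y) ** nu / z

# nu = 1 recovers the ordinary Poisson pmf; nu > 1 models underdispersion
# and nu < 1 overdispersion -- the flexibility the article evaluates.
p_com = com_poisson_pmf(3, 2.5, 1.0)
p_poisson = math.exp(-2.5) * 2.5 ** 3 / math.factorial(3)
assert abs(p_com - p_poisson) < 1e-9
```

In the GLM, λ is linked to covariates (e.g. log λ = xᵀβ) and (β, ν) are estimated by maximizing the sum of log pmf terms, which is the MLE procedure whose accuracy the article characterizes.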
de Souza Baptista, Roberto; Bo, Antonio P L; Hayashibe, Mitsuhiro
2017-06-01
Performance assessment of human movement is critical in diagnosis and motor-control rehabilitation. Recent developments in portable sensor technology enable clinicians to measure spatiotemporal aspects to aid in the neurological assessment. However, the extraction of quantitative information from such measurements is usually done manually through visual inspection. This paper presents a novel framework for automatic human movement assessment that executes segmentation and motor performance parameter extraction in time-series of measurements from a sequence of human movements. We use the elements of a Switching Linear Dynamic System model as building blocks to translate formal definitions and procedures from human movement analysis. Our approach provides a method for users with no expertise in signal processing to create models for movements using labeled dataset and later use it for automatic assessment. We validated our framework on preliminary tests involving six healthy adult subjects that executed common movements in functional tests and rehabilitation exercise sessions, such as sit-to-stand and lateral elevation of the arms and five elderly subjects, two of which with limited mobility, that executed the sit-to-stand movement. The proposed method worked on random motion sequences for the dual purpose of movement segmentation (accuracy of 72%-100%) and motor performance assessment (mean error of 0%-12%).
Stability and performance analysis of a jump linear control system subject to digital upsets
Wang, Rui; Sun, Hui; Ma, Zhen-Yang
2015-04-01
This paper focuses on the methodology analysis for the stability and the corresponding tracking performance of a closed-loop digital jump linear control system with a stochastic switching signal. The method is applied to a flight control system. A distributed recoverable platform is implemented on the flight control system and subjected to independent digital upsets. The upset processes are used to simulate the effects of electromagnetic environments. Specifically, the paper presents scenarios in which the upset process is directly injected into the distributed flight control system, modeled by independent Markov upset processes and independent and identically distributed (IID) processes. A theoretical performance analysis and simulation modelling are both presented in detail for a more complete independent digital upset injection. Specific examples are proposed to verify the methodology of tracking performance analysis. General analyses for different configurations are also proposed, and comparisons among different configurations are conducted to demonstrate the availability and the characteristics of the design. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61403395), the Natural Science Foundation of Tianjin, China (Grant No. 13JCYBJC39000), the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, China, the Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation of China (Grant No. 104003020106), and the Fund for Scholars of Civil Aviation University of China (Grant No. 2012QD21x).
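For a discrete-time jump linear system x_{k+1} = A_i x_k switched by a Markov chain, mean-square stability can be checked with the standard second-moment test (Costa-Fragoso-Marques): the spectral radius of (Pᵀ ⊗ I) · blockdiag(A_i ⊗ A_i) must be below one. The scalar modes and transition matrix below are illustrative, not the paper's flight-control model.

```python
import numpy as np

# Mode dynamics x_{k+1} = A_i x_k, switched by a Markov chain with
# row-stochastic transition matrix P (illustrative numbers).
A = [np.array([[0.5]]), np.array([[0.9]])]
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

n = A[0].shape[0]
m = len(A)

# Build blockdiag(A_i kron A_i), then the second-moment transition operator.
D = np.zeros((m * n * n, m * n * n))
for i, Ai in enumerate(A):
    D[i * n * n:(i + 1) * n * n, i * n * n:(i + 1) * n * n] = np.kron(Ai, Ai)
big = np.kron(P.T, np.eye(n * n)) @ D

radius = max(abs(np.linalg.eigvals(big)))
assert radius < 1.0  # this configuration is mean-square stable
```

Note that individual mode stability is neither necessary nor sufficient here; the test couples the mode dynamics with the switching probabilities, which is why upset-rate assumptions matter for the closed-loop analysis.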
Linear models to perform treaty verification tasks for enhanced information security
International Nuclear Information System (INIS)
MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.; Hilton, Nathan R.; Marleau, Peter A.
2017-01-01
Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.
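The Hotelling observer described above has a closed form: the optimal linear template is w = S⁻¹(μ₁ − μ₀), applied to each measurement to get a scalar statistic that is thresholded. The sketch below demonstrates it on synthetic Gaussian "binned detector" data; the dimensions, means, and covariance are assumptions standing in for the simulated inspection-object measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n = 16, 500

# Synthetic binned detector data under the two hypotheses (toy stand-in).
mu0, mu1 = np.zeros(dim), np.full(dim, 0.6)
cov = 0.5 * np.eye(dim) + 0.05 * np.ones((dim, dim))  # shared, correlated
x0 = rng.multivariate_normal(mu0, cov, n)
x1 = rng.multivariate_normal(mu1, cov, n)

# Hotelling template: optimal linear weights w = S^{-1} (mu1 - mu0).
s = 0.5 * (np.cov(x0.T) + np.cov(x1.T))
w = np.linalg.solve(s, x1.mean(axis=0) - x0.mean(axis=0))
t0, t1 = x0 @ w, x1 @ w  # scalar test statistics, thresholded for a decision

# Empirical area under the ROC curve from the two score distributions,
# the figure of merit used in the study.
auc = (t1[:, None] > t0[None, :]).mean()
assert auc > 0.85
```

The channelized variant replaces the raw data vector with a low-dimensional projection Tᵀx before forming the template, which is where the paper's information-limiting penalty terms enter.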
Abdelfattah, Ahmad
2015-01-15
High performance computing (HPC) platforms are evolving to more heterogeneous configurations to support the workloads of various applications. The current hardware landscape is composed of traditional multicore CPUs equipped with hardware accelerators that can handle high levels of parallelism. Graphical Processing Units (GPUs) are popular high performance hardware accelerators in modern supercomputers. GPU programming has a different model than that for CPUs, which means that many numerical kernels have to be redesigned and optimized specifically for this architecture. GPUs usually outperform multicore CPUs in some compute intensive and massively parallel applications that have regular processing patterns. However, most scientific applications rely on crucial memory-bound kernels and may witness bottlenecks due to the overhead of the memory bus latency. They can still take advantage of the GPU compute power capabilities, provided that an efficient architecture-aware design is achieved. This dissertation presents a uniform design strategy for optimizing critical memory-bound kernels on GPUs. Based on hierarchical register blocking, double buffering and latency hiding techniques, this strategy leverages the performance of a wide range of standard numerical kernels found in dense and sparse linear algebra libraries. The work presented here focuses on matrix-vector multiplication kernels (MVM) as representative and most important memory-bound operations in this context. Each kernel inherits the benefits of the proposed strategies. By exposing a proper set of tuning parameters, the strategy is flexible enough to suit different types of matrices, ranging from large dense matrices, to sparse matrices with dense block structures, while high performance is maintained. Furthermore, the tuning parameters are used to maintain the relative performance across different GPU architectures. Multi-GPU acceleration is proposed to scale the performance on several devices.
Energy Technology Data Exchange (ETDEWEB)
Kazama, Masahiro; Takeda, Tohoru; Itai, Yuji [Tsukuba Univ., Ibaraki (Japan). Inst. of Clinical Medicine; Akiba, Masahiro; Yuasa, Tetsuya; Hyodo, Kazuyuki; Ando, Masami; Akatsuka, Takao
1997-09-01
Monochromatic x-ray computed tomography (CT) using synchrotron radiation (SR) is being developed for detection of non-radioactive contrast materials at low concentration for application in clinical diagnosis. A new SR-CT system with improved contrast resolution was constructed using a linear array detector, which provides a wide dynamic range, and a double monochromator. The performance of this system was evaluated in a phantom and a rat model of brain ischemia. This system consists of a silicon (111) double crystal monochromator, an x-ray shutter, an ionization chamber, x-ray slits, a scanning table for the target organ, and an x-ray linear array detector. The research was carried out at the BLNE-5A bending magnet beam line of the Tristan Accumulation Ring in KEK, Japan. In this experiment, the reconstructed image of the spatial-resolution phantom clearly showed the 1 mm holes. At 1 mm slice thickness, the above-K-edge image of the phantom showed contrast resolution at a concentration of 200 μg/ml of iodine-based contrast material, whereas the K-edge energy subtraction image showed contrast resolution at a concentration of 500 μg/ml. The cerebral arteries filled with iodine microspheres were clearly revealed, and the ischemic regions in the right temporal lobe and frontal lobe were depicted as non-vascular regions. The measured minimal detectable concentration of iodine on the above-K-edge image is about 6 times higher than the expected value of 35.3 μg/ml because of the high dark current of this detector. Thus, a CCD detector cooled by liquid nitrogen is under development to improve the dynamic range of the detector. (author)
International Nuclear Information System (INIS)
Littlefield, R.J.; Maschhoff, K.J.
1991-04-01
Many linear algebra algorithms utilize an array of processors across which matrices are distributed. Given a particular matrix size and a maximum number of processors, what configuration of processors, i.e., what size and shape array, will execute the fastest? The answer to this question depends on tradeoffs between load balancing, communication startup and transfer costs, and computational overhead. In this paper we analyze in detail one algorithm: the blocked factored Jacobi method for solving dense eigensystems. A performance model is developed to predict execution time as a function of the processor array and matrix sizes, plus the basic computation and communication speeds of the underlying computer system. In experiments on a large hypercube (up to 512 processors), this model has been found to be highly accurate (mean error ∼ 2%) over a wide range of matrix sizes (10 x 10 through 200 x 200) and processor counts (1 to 512). The model reveals, and direct experiment confirms, that the tradeoffs mentioned above can be surprisingly complex and counterintuitive. We propose decision procedures based directly on the performance model to choose configurations for fastest execution. The model-based decision procedures are compared to a heuristic strategy and shown to be significantly better. 7 refs., 8 figs., 1 tab
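The configuration question above (what grid shape executes fastest?) reduces to minimizing a predicted-time model over feasible processor grids. The sketch below uses a generic compute-plus-communication model with made-up coefficients and a made-up communication-step count; it illustrates the model-based decision procedure, not the paper's calibrated hypercube constants.

```python
def predicted_time(pr, pc, n, t_flop=1e-9, t_start=1e-5, t_word=1e-8):
    """Toy execution-time model for an n x n matrix on a pr x pc grid:
    balanced compute term plus message startup and transfer costs.
    Coefficients and the step count are illustrative assumptions."""
    p = pr * pc
    compute = (n ** 3 / p) * t_flop
    steps = pr + pc  # stand-in for the row/column sweep structure
    comm = steps * (t_start + (n * n / p) * t_word)
    return compute + comm

def best_grid(n, max_p):
    """Exhaustively pick the fastest pr x pc grid with pr * pc <= max_p."""
    grids = [(pr, pc) for pr in range(1, max_p + 1)
             for pc in range(1, max_p // pr + 1)]
    return min(grids, key=lambda g: predicted_time(*g, n))

best = best_grid(200, 64)  # search all grids using at most 64 processors
```

Even in this toy form the tradeoff is visible: small matrices favor fewer processors (startup-dominated), large ones favor more (compute-dominated), and the optimum shape need not be square.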
TBM performance prediction in Yucca Mountain welded tuff from linear cutter tests
International Nuclear Information System (INIS)
Gertsch, R.; Ozdemir, L.; Gertsch, L.
1992-01-01
Performance predictions were developed for tunnel boring machines operating in welded tuff for the construction of the experimental study facility and the potential nuclear waste repository at Yucca Mountain. The predictions were based on test data obtained from an extensive series of linear cutting tests performed on samples of Topopah Spring welded tuff from the Yucca Mountain Project site. Using the cutter force, spacing, and penetration data from the experimental program, the thrust, torque, power, and rate of penetration were estimated for a 25 ft diameter tunnel boring machine (TBM) operating in welded tuff. Guidelines were developed for the optimal design of the TBM cutterhead to achieve high production rates at the lowest possible excavation costs. The results show that the Topopah Spring welded tuff (TSw2) can be excavated at relatively high rates of advance with state-of-the-art TBMs. The results also show, however, that the TBM torque and power requirements will be higher than estimated based on rock physical properties and past tunneling experience in rock formations of similar strength.
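Scaling per-cutter forces from a linear cutting machine up to whole-machine thrust, torque, power, and advance rate is straightforward arithmetic. The sketch below shows the usual form of that scale-up; the cutter count, forces, and the 0.3·D mean-radius factor are illustrative assumptions, not the study's measured values.

```python
import math

def tbm_performance(n_cutters, fn_kn, fr_kn, diameter_m, rpm, penetration_mm):
    """Scale mean per-cutter normal (fn) and rolling (fr) forces to machine level.
    Assumes cutters spread so their mean radius is ~0.3 * D (rough approximation)."""
    thrust_kn = n_cutters * fn_kn                      # sum of normal forces
    torque_knm = n_cutters * fr_kn * 0.3 * diameter_m  # rolling forces x mean radius
    power_kw = torque_knm * 2.0 * math.pi * rpm / 60.0  # torque x angular speed
    advance_m_per_hr = penetration_mm * rpm * 60.0 / 1000.0  # per-rev cut x revs/hr
    return thrust_kn, torque_knm, power_kw, advance_m_per_hr

# Hypothetical 25 ft (7.62 m) machine with 50 cutters:
thrust, torque, power, advance = tbm_performance(50, 200.0, 25.0, 7.62, 6.0, 5.0)
```

With these placeholder numbers the machine needs 10 MN of thrust and roughly 1.8 MW of cutterhead power for an advance rate of 1.8 m/hr; the study's point is that the welded-tuff rolling forces push the torque and power terms above conventional estimates.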
The Performance Analysis of AN Indoor Mobile Mapping System with Rgb-D Sensor
Tsai, G. J.; Chiang, K. W.; Chu, C. H.; Chen, Y. L.; El-Sheimy, N.; Habib, A.
2015-08-01
Over the years, Mobile Mapping Systems (MMSs) have been widely applied to urban mapping, path management and monitoring, cyber city, etc. The key concept of mobile mapping is based on positioning technology and photogrammetry, and multi-sensor integrated mapping technology has been clearly established to achieve this integration. In recent years, robotic technology has developed rapidly. Another mapping technology, based on low-cost sensors, is generally used in robotic systems; it is known as Simultaneous Localization and Mapping (SLAM). The objective of this study is to develop a prototype indoor MMS for mobile mapping applications, especially to reduce costs, enhance the efficiency of data collection, and validate direct georeferencing (DG) performance. The proposed indoor MMS is composed of a tactical-grade Inertial Measurement Unit (IMU), the Kinect RGB-D sensor, a light detection and ranging (LIDAR) sensor, and a robot. In summary, this paper designs the payload for an indoor MMS to generate floor plans. The first session concentrates on comparing different positioning algorithms in the indoor environment. Next, the indoor plans are generated by two sensors: the Kinect RGB-D sensor and the LIDAR on the robot. Moreover, the generated floor plan is compared with the known plan for both validation and verification.
Capability Assessment and Performance Metrics for the Titan Multispectral Mapping Lidar
Directory of Open Access Journals (Sweden)
Juan Carlos Fernandez-Diaz
2016-11-01
Full Text Available In this paper we present a description of a new multispectral airborne mapping light detection and ranging (lidar) along with performance results obtained from two years of data collection and test campaigns. The Titan multiwave lidar is manufactured by Teledyne Optech Inc. (Toronto, ON, Canada) and emits laser pulses in the 1550, 1064 and 532 nm wavelengths simultaneously through a single oscillating mirror scanner at pulse repetition frequencies (PRF) that range from 50 to 300 kHz per wavelength (max combined PRF of 900 kHz). The Titan system can perform simultaneous mapping in terrestrial and very shallow water environments and its multispectral capability enables new applications, such as the production of false color active imagery derived from the lidar return intensities and the automated classification of target and land covers. Field tests and mapping projects performed over the past two years demonstrate capabilities to classify five land covers in urban environments with an accuracy of 90%, map bathymetry under more than 15 m of water, and map thick vegetation canopies at sub-meter vertical resolutions. In addition to its multispectral and performance characteristics, the Titan system is designed with several redundancies and diversity schemes that have proven to be beneficial for both operations and the improvement of data quality.
High-Performance Signal Detection for Adverse Drug Events using MapReduce Paradigm.
Fan, Kai; Sun, Xingzhi; Tao, Ying; Xu, Linhao; Wang, Chen; Mao, Xianling; Peng, Bo; Pan, Yue
2010-11-13
Post-marketing pharmacovigilance is important for public health, as many Adverse Drug Events (ADEs) are unknown when those drugs were approved for marketing. However, due to the large number of reported drugs and drug combinations, detecting ADE signals by mining these reports is becoming a challenging task in terms of computational complexity. Recently, a parallel programming model, MapReduce, has been introduced by Google to support large-scale data intensive applications. In this study, we proposed a MapReduce-based algorithm for a common ADE detection approach, the Proportional Reporting Ratio (PRR), and tested it in mining spontaneous ADE reports from the FDA. The purpose is to investigate the possibility of using the MapReduce principle to speed up biomedical data mining tasks, using this pharmacovigilance case as one specific example. The results demonstrated that the MapReduce programming model could improve the performance of a common signal detection algorithm for pharmacovigilance in a distributed computation environment at approximately linear speedup rates.
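The PRR itself is a ratio of reporting proportions: with a = reports of the drug with the event, b = reports of the drug without it, and c, d the same counts for all other drugs, PRR = (a/(a+b)) / (c/(c+d)). The single-process map/reduce sketch below mirrors the MapReduce per-key counting pattern; the report tuples and counts are made-up toy data, not FDA reports.

```python
from collections import Counter
from functools import reduce

def mapper(report):
    """Map phase: emit a ((drug, has_event), 1) key-value pair per report."""
    drug, has_event = report
    return ((drug, has_event), 1)

def reducer(acc, kv):
    """Reduce phase: sum the counts per key."""
    acc[kv[0]] += kv[1]
    return acc

def prr(reports, drug):
    """PRR = (a / (a + b)) / (c / (c + d)) from the reduced contingency counts."""
    counts = reduce(reducer, map(mapper, reports), Counter())
    a = counts[(drug, True)]
    b = counts[(drug, False)]
    c = sum(v for (d, e), v in counts.items() if d != drug and e)
    d_ = sum(v for (d, e), v in counts.items() if d != drug and not e)
    return (a / (a + b)) / (c / (c + d_))

# Toy reports: (drug, event-of-interest observed?)
reports = [("X", True)] * 2 + [("X", False)] * 2 + \
          [("Y", True)] * 1 + [("Y", False)] * 3
assert prr(reports, "X") == 2.0  # X reports the event at twice the background rate
```

In a real deployment the map phase is sharded across nodes over the report files and the reduce phase aggregates per key, which is what yields the near-linear speedup the study reports.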
Directory of Open Access Journals (Sweden)
Xiaocui Wu
2015-02-01
Full Text Available The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at half-hourly and daily scale, while the overall performance of both TL-LUEn and TL-LUE was significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17 became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types while TL-LUE outperformed MOD17 slightly for all these non-forest types at daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by the correction of the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale while TL-LUE could be regionally used at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.
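The big-leaf baseline has a compact form: GPP is a maximum light use efficiency attenuated by temperature and vapor-pressure-deficit scalars, times absorbed photosynthetically active radiation. The sketch below is MOD17-style in shape only; the ramp endpoints and εmax value are illustrative biome parameters, not the MOD17 lookup-table values, and the two-leaf models additionally split the canopy into sunlit and shaded fractions.

```python
def ramp(x, lo, hi):
    """Linear 0-1 scalar between lo and hi (MOD17-style attenuation)."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def gpp_big_leaf(sw_rad_mj, fpar, tmin_c, vpd_pa, lue_max=0.001165):
    """Daily big-leaf GPP (kg C m^-2 d^-1):
    GPP = LUEmax * f(Tmin) * f(VPD) * fPAR * PAR, with PAR ~ 0.45 * shortwave.
    Ramp endpoints and LUEmax are illustrative, not calibrated values."""
    par = 0.45 * sw_rad_mj                       # MJ m^-2 d^-1
    t_scalar = ramp(tmin_c, -8.0, 10.0)          # cold-temperature attenuation
    vpd_scalar = 1.0 - ramp(vpd_pa, 650.0, 3100.0)  # dryness attenuation
    return lue_max * t_scalar * vpd_scalar * fpar * par

# Mild day, dense canopy: no attenuation, so GPP = LUEmax * fPAR * PAR.
gpp = gpp_big_leaf(sw_rad_mj=20.0, fpar=0.8, tmin_c=10.0, vpd_pa=650.0)
```

The study's finding that MOD17 under/overestimates at low/high radiation follows from this structure: a single ε applied to total APAR cannot capture the higher efficiency of shaded (diffuse-lit) leaves.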
Peiró-Velert, Carmen; Valencia-Peris, Alexandra; González, Luis M; García-Massó, Xavier; Serra-Añó, Pilar; Devís-Devís, José
2014-01-01
Screen media usage, sleep time and socio-demographic features are related to adolescents' academic performance, but interrelations are little explored. This paper describes these interrelations and behavioral profiles clustered in low and high academic performance. A nationally representative sample of 3,095 Spanish adolescents, aged 12 to 18, was surveyed on 15 variables linked to the purpose of the study. A Self-Organizing Maps analysis established non-linear interrelationships among these variables and identified behavior patterns in subsequent cluster analyses. Topological interrelationships established from the 15 emerging maps indicated that boys used more passive videogames and computers for playing than girls, who tended to use mobile phones to communicate with others. Adolescents with the highest academic performance were the youngest. They slept more and spent less time using sedentary screen media when compared to those with the lowest performance, and they also showed topological relationships with higher socioeconomic status adolescents. Cluster 1 grouped boys who spent more than 5.5 hours daily using sedentary screen media. Their academic performance was low and they slept an average of 8 hours daily. Cluster 2 gathered girls with an excellent academic performance, who slept nearly 9 hours per day, and devoted less time daily to sedentary screen media. Academic performance was directly related to sleep time and socioeconomic status, but inversely related to overall sedentary screen media usage. Profiles from the two clusters were strongly differentiated by gender, age, sedentary screen media usage, sleep time and academic achievement. Girls with the highest academic results had a medium socioeconomic status in Cluster 2. Findings may contribute to establishing recommendations about the timing and duration of screen media usage in adolescents and appropriate sleep time needed to successfully meet the demands of school academics and to improve academic performance.
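A Self-Organizing Map of the kind used in the study projects high-dimensional survey variables onto a 2-D grid of nodes so that non-linear interrelationships appear as topological neighbourhoods. A minimal, illustrative SOM trainer (a generic sketch, not the authors' implementation or their 15-variable data) might look like:

```python
import math
import random

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 2-D Self-Organizing Map trainer (illustrative sketch).
    data: list of equal-length numeric feature vectors."""
    rng = random.Random(seed)
    rows, cols = grid
    dim = len(data[0])
    weights = {(r, c): [rng.random() for _ in range(dim)]
               for r in range(rows) for c in range(cols)}
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                   # decaying learning rate
        sigma = max(0.5, sigma0 * (1 - t / epochs))   # shrinking neighbourhood
        x = rng.choice(data)
        # best-matching unit (BMU): node whose weights are closest to x
        bmu = min(weights,
                  key=lambda node: sum((w - v) ** 2
                                       for w, v in zip(weights[node], x)))
        for node, w in weights.items():
            d2 = (node[0] - bmu[0]) ** 2 + (node[1] - bmu[1]) ** 2
            h = math.exp(-d2 / (2 * sigma ** 2))      # neighbourhood kernel
            weights[node] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return weights
```

After training, distinct input profiles map to distinct grid regions, which is what the cluster analysis in the study builds on.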
Student performance and attitudes in a collaborative and flipped linear algebra course
Murphy, Julia; Chang, Jen-Mei; Suaray, Kagba
2016-07-01
Flipped learning is gaining traction in K-12 for enhancing students' problem-solving skills at an early age; however, there is relatively little large-scale research showing its effectiveness in promoting better learning outcomes in higher education, especially in mathematics classes. In this study, we examined the data compiled from both quantitative and qualitative measures, such as item scores on a common final and attitude survey results, between a flipped and a traditional Introductory Linear Algebra class taught by two different instructors at a state university in California in Fall 2013. Students in the flipped class were asked to watch short video lectures made by the instructor and complete a short online quiz prior to each class attendance. The class time was completely devoted to problem solving in group settings where students were prompted to communicate their reasoning with proper mathematical terms and structured sentences, verbally and in writing. Examination of the quality and depth of student responses from the common final exam showed that students in the flipped class produced more comprehensive and well-explained responses to the questions that required reasoning, creating examples, and more complex use of mathematical objects. Furthermore, students in the flipped class performed better in overall comprehension of the content, with a 21% increase in the median final exam score. Overall, students felt more confident about their ability to learn mathematics independently, showed better retention of materials over time, and enjoyed the flipped experience.
Directory of Open Access Journals (Sweden)
C. Makendran
2015-01-01
Full Text Available Prediction models for low-volume village roads in India are developed to evaluate the progression of different types of distress such as roughness, cracking, and potholes. Even though the Government of India is investing huge amounts of money in road construction every year, poor control over the quality of road construction and its subsequent maintenance is leading to faster road deterioration. In this regard, it is essential that scientific maintenance procedures be developed on the basis of the performance of low-volume flexible pavements. Considering the above, an attempt has been made in this research to develop prediction models to understand the progression of roughness, cracking, and potholes in flexible pavements exposed to little or no routine maintenance. Distress data were collected from low-volume rural roads covering about 173 stretches spread across Tamil Nadu state in India. Based on the collected data, distress prediction models have been developed using multiple linear regression analysis. Further, the models have been validated using independent field data. It can be concluded that the models developed in this study can serve as useful tools for practicing engineers maintaining flexible pavements on low-volume roads.
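The modelling step the abstract describes, fitting distress progression with multiple linear regression, can be sketched as follows. The data and predictor names are synthetic placeholders, not the Tamil Nadu measurements:

```python
import numpy as np

# Synthetic example: pavement age (years) and cumulative traffic (thousand
# passes) as predictors of roughness (IRI, m/km). Not the paper's data.
age     = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
traffic = np.array([10, 18, 33, 41, 55, 60, 74, 82], dtype=float)
iri     = np.array([2.1, 2.4, 2.9, 3.1, 3.6, 3.8, 4.3, 4.5])

# Multiple linear regression by ordinary least squares:
# iri ~ b0 + b1*age + b2*traffic
X = np.column_stack([np.ones_like(age), age, traffic])
beta, *_ = np.linalg.lstsq(X, iri, rcond=None)

predicted = X @ beta
r2 = 1 - np.sum((iri - predicted) ** 2) / np.sum((iri - iri.mean()) ** 2)
```

Validation against independent field data, as in the study, would simply apply `beta` to held-out predictor values and compare with observed distress.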
Corrosion Performance of Friction Stir Linear Lap Welded AM60B Joints
Kish, J. R.; Birbilis, N.; McNally, E. M.; Glover, C. F.; Zhang, X.; McDermid, J. R.; Williams, G.
2017-11-01
A corrosion investigation of friction stir linear lap welded AM60B joints used to fabricate an Mg alloy-intensive automotive front end sub-assembly was performed. The stir zone exhibited a slightly refined grain size and significant break-up and re-distribution of the divorced Mg17Al12 (β-phase) relative to the base material. Exposures in NaCl (aq) environments revealed that the stir zone was more susceptible to localized corrosion than the base material. Scanning vibrating electrode technique measurements revealed differential galvanic activity across the joint. Anodic activity was confined to the stir zone surface and involved initiation and lateral propagation of localized filaments. Cathodic activity was initially confined to the base material surface, but was rapidly modified to include the cathodically-activated corrosion products in the filament wake. Site-specific surface analyses revealed that the corrosion observed across the welded joint was likely linked to variations in Al distribution across the surface film/metal interface.
Fasmer, Ole Bernt; Mjeldheim, Kristin; Førland, Wenche; Hansen, Anita L; Syrstad, Vigdis Elin Giæver; Oedegaard, Ketil J; Berle, Jan Øystein
2016-08-11
Attention Deficit Hyperactivity Disorder (ADHD) is a heterogeneous disorder. It is therefore important to look for factors that can contribute to better diagnosis and classification of these patients. The aims of the study were to characterize adult psychiatric out-patients with a mixture of mood, anxiety and attentional problems using an objective neuropsychological test of attention combined with an assessment of mood instability. Newly referred patients (n = 99; aged 18-65 years) requiring diagnostic evaluation of ADHD, mood or anxiety disorders were recruited, and were given a comprehensive diagnostic evaluation including the self-report form of the cyclothymic temperament scale and Conners' Continuous Performance Test II (CPT-II). In addition to the traditional measures from this test, we have extracted raw data and analysed the time series using linear and non-linear mathematical methods. Fifty patients fulfilled criteria for ADHD, while 49 did not and were given other psychiatric diagnoses (clinical controls). When compared to the clinical controls, the ADHD patients had more omission and commission errors, and higher reaction time variability. Analyses of response times showed higher values for skewness in the ADHD patients, and lower values for sample entropy and symbolic dynamics. Among the ADHD patients, 59% fulfilled criteria for a cyclothymic temperament, and this group had higher reaction time variability and lower scores on complexity than the group without this temperament. The CPT-II is a useful instrument in the assessment of ADHD in adult patients. Additional information from this test was obtained by analyzing response times using linear and non-linear methods, and this showed that ADHD patients with a cyclothymic temperament were different from those without this temperament.
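Sample entropy, one of the non-linear response-time measures mentioned, quantifies regularity: lower values indicate a more self-similar series. A compact, illustrative implementation follows (conventions for template counting vary slightly across the literature; this is a sketch, not the study's analysis code):

```python
import math
import statistics

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy of a 1-D series (illustrative sketch).
    m: template length; r: tolerance as a fraction of the series SD."""
    tol = r * statistics.pstdev(series)

    def count_matches(length):
        # pairs of templates whose pointwise distance stays within tol
        templates = [series[i:i + length] for i in range(len(series) - length)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol:
                    hits += 1
        return hits

    b = count_matches(m)        # matches of length m
    a = count_matches(m + 1)    # matches of length m + 1
    return math.log(b / a) if a and b else float("inf")
```

On a perfectly periodic series the value is close to zero, while an irregular series scores higher, which is the direction of the group difference reported above.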
Christman, Stephen D; Weaver, Ryan
2008-05-01
The nature of temporal variability during speeded finger tapping was examined using linear (standard deviation) and non-linear (Lyapunov exponent) measures. Experiment 1 found that right hand tapping was characterised by lower amounts of both linear and non-linear measures of variability than left hand tapping, and that linear and non-linear measures of variability were often negatively correlated with one another. Experiment 2 found that increased non-linear variability was associated with relatively enhanced performance on a closed-loop motor task (mirror tracing) and relatively impaired performance on an open-loop motor task (pointing in a dark room), especially for left hand performance. The potential uses and significance of measures of non-linear variability are discussed.
de Bruin, A.B.H.; Smits, N.; Rikers, R.M.J.P.; Schmidt, H.G.
2008-01-01
In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of the Dutch national chess training,
International Nuclear Information System (INIS)
Marrero, Juan Carlos; Padrón, Edith; Rodríguez-Olmos, Miguel
2012-01-01
This paper addresses the problem of developing an extension of the Marsden–Weinstein reduction process to symplectic-like Lie algebroids, and in particular to the case of the canonical cover of a fiberwise linear Poisson structure, whose reduction process is the analog to cotangent bundle reduction in the context of Lie algebroids. Dedicated to the memory of Jerrold E Marsden (paper)
Scaffolding EFL Oral Performance through Story Maps and Podcasts and Students’ Attitudes toward it
Directory of Open Access Journals (Sweden)
Mohammed Pazhouhesh
2014-11-01
Full Text Available The present study sought to explore the impact of story maps and audio podcasts as scaffolds on the oral proficiency of Iranian EFL learners. The quasi-experimental study was launched with 36 EFL undergraduates in three groups by adopting a counterbalanced 3 × 3 Latin square design. All participants were indiscriminately, but in a specified order, exposed to the three treatment conditions of story retelling, story retelling plus story map, and story retelling plus podcast, and post-tested sequentially. The Latin square analysis of the oral assessment scale showed statistically meaningful differences under the treatment conditions for the groups. The post-hoc test also showed that participants performed better under the treatment conditions of story retelling plus story map and story retelling plus podcast. Performance under the podcast condition was significantly better than under the story map and story retelling conditions. The post-experiment opinion survey showed the learners' preferences for and positive attitudes towards podcast and story map as scaffolds in developing EFL oral proficiency. The participants welcomed integration of the scaffolds into EFL speaking courses.
Westerheijden, Donald F.; Rosa, Maria Joao; Amaral, Alberto
2014-01-01
Two new, user-driven and web-based transparency tools for higher education are presented: U-Map, a classification of higher education institutions according to their actual activities, and U-Multirank, a multidimensional ranking of higher education institutions’ and study fields’ performances. The
Viewing or Visualising Which Concept Map Strategy Works Best on Problem-Solving Performance?
Lee, Youngmin; Nelson, David W.
2005-01-01
The purpose of this study was to investigate the effects of two types of maps (generative vs. completed) and the amount of prior knowledge (high vs. low) on well-structured and ill-structured problem-solving performance. Forty-four undergraduates who were registered in an introductory instructional technology course participated in the study.…
Tzeng, Jeng-Yi
2010-01-01
From the perspective of the Fuzzy Trace Theory, this study investigated the impacts of concept maps with two strategic orientations (comprehensive and thematic representations) on readers' performance of cognitive operations (such as perception, verbatim memory, gist reasoning and syntheses) while the readers were reading two history articles that…
Performance prediction of gas turbines by solving a system of non-linear equations
Energy Technology Data Exchange (ETDEWEB)
Kaikko, J
1998-09-01
This study presents a novel method for implementing the performance prediction of gas turbines from the component models. It is based on solving the non-linear set of equations that corresponds to the process equations, and the mass and energy balances for the engine. General models have been presented for determining the steady state operation of single components. Single and multiple shaft arrangements have been examined, with consideration also being given to heat regeneration and intercooling. Emphasis has been placed upon axial gas turbines of an industrial scale. Applying the models requires no information about the structural dimensions of the gas turbines. Compared with the commonly applied component-matching procedures, this method incorporates several advantages. The application of the models for providing results is facilitated, as less attention needs to be paid to calculation sequences and routines. Solving the set of equations is based on zeroing co-ordinate functions that are directly derived from the modelling equations. Therefore, controlling the accuracy of the results is easy. This method gives more freedom for the selection of the modelling parameters since, unlike for the matching procedures, exchanging these criteria does not itself affect the algorithms. Implicit relationships between the variables are of no significance, thus increasing the freedom for the modelling equations as well. The mathematical models developed in this thesis will provide facilities to optimise the operation of any major gas turbine configuration with respect to the desired process parameters. The computational methods used in this study may also be adapted to other modelling problems arising in industry. (orig.) 36 refs.
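The solution strategy described, zeroing co-ordinate functions derived from the modelling equations, amounts to Newton-type root finding on the coupled balances. A toy two-equation sketch follows; the equations are illustrative stand-ins, not the thesis' component models:

```python
def newton2(f, x0, tol=1e-10, max_iter=60):
    """Newton iteration for two coupled equations f(x, y) = (0, 0),
    with a finite-difference Jacobian and an explicit 2x2 solve."""
    x, y = x0
    h = 1e-7
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        if abs(f1) < tol and abs(f2) < tol:
            return x, y
        a = (f(x + h, y)[0] - f1) / h   # d f1 / dx
        b = (f(x, y + h)[0] - f1) / h   # d f1 / dy
        c = (f(x + h, y)[1] - f2) / h   # d f2 / dx
        d = (f(x, y + h)[1] - f2) / h   # d f2 / dy
        det = a * d - b * c
        x -= (d * f1 - b * f2) / det    # Newton step: solve J dx = -F
        y -= (a * f2 - c * f1) / det
    raise RuntimeError("Newton iteration did not converge")

# Toy stand-ins for a mass balance and an energy balance:
# x*y = 6 and x + y = 5
sol = newton2(lambda x, y: (x * y - 6.0, x + y - 5.0), (1.0, 4.0))
```

The accuracy control the abstract mentions corresponds directly to the residual tolerance `tol` on the zeroed functions.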
Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs, Phase I
National Aeronautics and Space Administration — We propose to develop novel FPGA-based algorithmic technology that will enable unprecedented computational power for the solution of large sparse linear equation...
What Happens Inside a Fuel Cell? Developing an Experimental Functional Map of Fuel Cell Performance
Brett, Daniel J. L.
2010-08-20
Fuel cell performance is determined by the complex interplay of mass transport, energy transfer and electrochemical processes. The convolution of these processes leads to spatial heterogeneity in the way that fuel cells perform, particularly due to reactant consumption, water management and the design of fluid-flow plates. It is therefore unlikely that any bulk measurement made on a fuel cell will accurately represent performance at all parts of the cell. The ability to make spatially resolved measurements in a fuel cell provides one of the most useful ways in which to monitor and optimise performance. This Minireview explores a range of in situ techniques being used to study fuel cells and describes the use of novel experimental techniques that the authors have used to develop an 'experimental functional map' of fuel cell performance. These techniques include the mapping of current density, electrochemical impedance, electrolyte conductivity, contact resistance and CO poisoning distribution within working PEFCs, as well as mapping the flow of reactant in gas channels using laser Doppler anemometry (LDA). For the high-temperature solid oxide fuel cell (SOFC), temperature mapping, reference electrode placement and the use of Raman spectroscopy are described along with methods to map the microstructural features of electrodes. The combination of these techniques, applied across a range of fuel cell operating conditions, allows a unique picture of the internal workings of fuel cells to be obtained and has been used to validate both numerical and analytical models. © 2010 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Preliminary Evaluation of MapReduce for High-Performance Climate Data Analysis
Duffy, Daniel Q.; Schnase, John L.; Thompson, John H.; Freeman, Shawn M.; Clune, Thomas L.
2012-01-01
MapReduce is an approach to high-performance analytics that may be useful to data-intensive problems in climate research. It offers an analysis paradigm that uses clusters of computers and combines distributed storage of large data sets with parallel computation. We are particularly interested in the potential of MapReduce to speed up basic operations common to a wide range of analyses. In order to evaluate this potential, we are prototyping a series of canonical MapReduce operations over a test suite of observational and climate simulation datasets. Our initial focus has been on averaging operations over arbitrary spatial and temporal extents within Modern-Era Retrospective Analysis for Research and Applications (MERRA) data. Preliminary results suggest this approach can improve efficiencies within data-intensive analytic workflows.
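The canonical averaging operation can be expressed with (sum, count) partials so that reducers combine cheaply regardless of how records are distributed. An illustrative in-memory sketch, with toy records standing in for MERRA cells:

```python
from collections import defaultdict

# Toy records: (year, month, region, value), stand-ins for gridded MERRA data.
records = [
    (1980, 1, "tropics", 300.1), (1980, 1, "tropics", 301.3),
    (1980, 1, "polar",   250.2), (1980, 2, "tropics", 299.8),
]

def mapper(rec):
    """Key each record by the averaging extent; emit (sum, count) partials."""
    year, month, region, value = rec
    yield (year, region), (value, 1)

def reducer(shuffled):
    """Combine partials per key into a mean."""
    means = {}
    for key, partials in shuffled.items():
        total = sum(v for v, _ in partials)
        count = sum(c for _, c in partials)
        means[key] = total / count
    return means

# Sequential stand-in for the shuffle phase of a real MapReduce runtime.
shuffled = defaultdict(list)
for rec in records:
    for key, partial in mapper(rec):
        shuffled[key].append(partial)
means = reducer(shuffled)
```

Changing the key tuple (e.g. to `(year, month)` or a lat/lon box) changes the averaging extent without touching the reducer.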
Directory of Open Access Journals (Sweden)
Hiroyuki Wakiwaka
2011-11-01
Full Text Available This paper discusses the effect of inductive coil shape on the sensing performance of a linear displacement sensor. The linear displacement sensor consists of a thin inductive coil with a thin pattern guide, making it suitable for tiny-space applications. The position can be detected by measuring the inductance of the inductive coil. The inductance differs at each position because the area of the inductive coil facing the pattern guide changes. Therefore, the objective of this research is to study various inductive coil pattern shapes and to propose the pattern that achieves the best sensing performance. Various shapes (meander, triangular-type meander, square and circle) with different numbers of turns are examined in this study. The inductance is measured, with sensor sensitivity and linearity as the performance evaluation parameters. In conclusion, each inductive coil shape has its own advantages and disadvantages. For instance, the circle-shaped inductive coil produces high sensitivity with a low linearity response, while the square-shaped inductive coil has medium sensitivity with higher linearity.
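Sensitivity and linearity, the two evaluation parameters used here, can be computed from a measured inductance-versus-position curve as the best-fit slope and the worst-case deviation from that line. The numbers below are illustrative, not the paper's measurements:

```python
# Illustrative measurements: position (mm) vs. inductance (uH).
positions  = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
inductance = [10.0, 12.1, 13.9, 16.2, 18.0, 20.1]

# Sensitivity: slope of the least-squares line of inductance vs. position.
n = len(positions)
mx = sum(positions) / n
my = sum(inductance) / n
slope = sum((x - mx) * (y - my) for x, y in zip(positions, inductance)) / \
        sum((x - mx) ** 2 for x in positions)
intercept = my - slope * mx

# Linearity error: worst deviation from the line, as % of full scale.
full_scale = max(inductance) - min(inductance)
linearity_pct = max(abs(y - (slope * x + intercept))
                    for x, y in zip(positions, inductance)) / full_scale * 100
```

A high-sensitivity, low-linearity coil (like the circle shape described) would show a steep slope but a large `linearity_pct`.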
A High Performance Silicon-on-Insulator LDMOST Using Linearly Increasing Thickness Techniques
International Nuclear Information System (INIS)
Yu-Feng, Guo; Zhi-Gong, Wang; Gene, Sheu; Jian-Bing, Cheng
2010-01-01
We present a new technique to achieve uniform lateral electric field and maximum breakdown voltage in lateral double-diffused metal-oxide-semiconductor transistors fabricated on silicon-on-insulator substrates. A linearly increasing drift-region thickness from the source to the drain is employed to improve the electric field distribution in the devices. Compared to the lateral linear doping technique and the reduced surface field technique, two-dimensional numerical simulations show that the new device exhibits reduced specific on-resistance, maximum off- and on-state breakdown voltages, superior quasi-saturation characteristics and improved safe operating area. (condensed matter: electronic structure, electrical, magnetic, and optical properties)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes, multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
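The distinction between MLD and MAP decoding is easy to see on a toy code: MAP computes the posterior of each bit by summing codeword likelihoods, yielding the soft output that hard-decision MLD discards. A minimal sketch for a repetition code over a binary symmetric channel (illustrative code and parameters, not the chapter's general algorithm):

```python
# Toy linear block code: rate-1/3 repetition code over a binary symmetric
# channel (BSC) with crossover probability p.
codebook = [(0, 0, 0), (1, 1, 1)]
p = 0.1

def likelihood(received, codeword):
    """P(received | codeword) over the memoryless BSC."""
    flips = sum(r != c for r, c in zip(received, codeword))
    return (p ** flips) * ((1 - p) ** (len(received) - flips))

def map_decode_bit(received, i):
    """Posterior P(bit_i = 1 | received), assuming equiprobable codewords.
    This is the soft information MAP provides; thresholding it at 1/2
    gives the bit-wise MAP decision."""
    num = sum(likelihood(received, c) for c in codebook if c[i] == 1)
    den = sum(likelihood(received, c) for c in codebook)
    return num / den
```

In concatenated or iterative schemes, these posteriors, rather than hard bits, are what the next decoding stage consumes.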
Svehla, Drazen; Rothacher, Markus; Hugentobler, Urs; Steigenberger, Peter; Ziebart, Marek
2014-05-01
Solar radiation pressure is the main source of errors in the precise orbit determination of GNSS satellites. All deficiencies in the modeling of Solar radiation pressure map into estimated terrestrial reference frame parameters as well as into derived gravity field coefficients and altimetry results when LEO orbits are determined using GPS. Here we introduce a new approach to geometrically map radial orbit perturbations of GNSS satellites using the high-performing clocks on board the first Galileo satellites. Only a linear model (time bias and time drift) needs to be removed from the estimated clock parameters, and the remaining clock residuals map all radial orbit perturbations along the orbit. With the independent SLR measurements, we show that a Galileo clock is stable enough to map radial orbit perturbations continuously along the orbit, with a negative sign in comparison to SLR residuals. Agreement between the SLR residuals and the clock residuals is at the 1 cm RMS level for an orbit arc of 24 h. Looking at the clock parameters determined along one orbit revolution over a period of one year, we show that the so-called SLR bias in Galileo and GPS orbits can be explained by the translation of the determined orbit in the orbital plane towards the Sun. This orbit translation is due to thermal re-radiation and not accounting for the Sun elevation in the parameterization of the estimated Solar radiation pressure parameters. SLR ranging to GNSS satellites takes place typically at night, e.g. between 6 pm and 6 am local time, when the Sun is in opposition to the satellite. Therefore, SLR observes only one part of the GNSS orbit, with a negative radial orbit error that is mapped as an artificial bias in SLR observables. The Galileo clocks clearly show orbit translation for all Sun elevations: the radial orbit error is positive when the Sun is in conjunction (orbit noon) and negative when the Sun is in opposition (orbit midnight). The magnitude of this artificial negative SLR bias
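The clock-based mapping relies on one step that is simple to state: remove a linear model (time bias plus drift) by least squares and read the residuals as radial orbit error. A synthetic illustration follows, with a small sinusoid standing in for the once-per-revolution perturbation; none of the numbers are Galileo data:

```python
import math

# Synthetic clock series: bias + drift + a small once-per-revolution
# sinusoid standing in for the radial orbit perturbation.
t = [0.1 * i for i in range(100)]                       # epochs (hours)
clock = [3.0 + 0.5 * ti + 0.02 * math.sin(2 * math.pi * ti / 10.0)
         for ti in t]

# Least-squares fit of the linear model (time bias + time drift) ...
n = len(t)
mt = sum(t) / n
mc = sum(clock) / n
drift = sum((ti - mt) * (ci - mc) for ti, ci in zip(t, clock)) / \
        sum((ti - mt) ** 2 for ti in t)
bias = mc - drift * mt

# ... whose residuals trace the periodic (orbit-like) signal.
residuals = [ci - (bias + drift * ti) for ti, ci in zip(t, clock)]
```

The residual series retains the periodic component while the bias and drift are absorbed by the fit, which is the geometric mapping principle described above.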
B.A.J. van Tuijl; Piet Sonneveld; J. Campen; Gert-Jan Swinkels; H.J.J. Janssen; G.P.A Bot
2011-01-01
A new type of greenhouse with linear Fresnel lenses in the cover performing as a concentrated photovoltaic (CPV) system is presented. The CPV system retains all direct solar radiation, while diffuse solar radiation passes through and enters into the greenhouse cultivation system. The removal of all
Performance Comparison of Permanent Magnet Linear Actuators of Different Mover Types
DEFF Research Database (Denmark)
Ritchie, Ewen; Hinov, K.; Yatchev, I.
2006-01-01
A comparative study of permanent magnet linear actuators with different location of the permanent magnet is reported. Three mover types are considered - soft magnetic mover, permanent magnet mover and hybrid mover. Force-stroke characteristics are obtained with the help of finite element models...
Performance analysis of linear codes under maximum-likelihood decoding: a tutorial
National Research Council Canada - National Science Library
Sason, Igal; Shamai, Shlomo
2006-01-01
..., upper and lower bounds on the error probability of linear codes under ML decoding are surveyed and applied to codes and ensembles of codes on graphs. For upper bounds, we discuss various bounds where focus is put on Gallager bounding techniques and their relation to a variety of other reported bounds. Within the class of lower bounds, we ad...
High-performance small-scale solvers for linear Model Predictive Control
DEFF Research Database (Denmark)
Frison, Gianluca; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd
2014-01-01
, with the two main research areas of explicit MPC and tailored on-line MPC. State-of-the-art solvers in this second class can outperform optimized linear-algebra libraries (BLAS) only for very small problems, and do not explicitly exploit the hardware capabilities, relying on compilers for that. This approach...
International Nuclear Information System (INIS)
Erdelyi, B.; Bandura, L.; Nolen, J.
2009-01-01
A second order analytical and an arbitrary order numerical procedure is developed for the computation of transfer maps of energy degraders. The incorporation of the wedges into the optics of fragment separators for next-generation exotic beam facilities, their optical effects, and the optimization of their performance is studied in detail. It is shown how to place and shape the degraders in the system such that aberrations are minimized and resolving powers are maximized
CALiPER Report 21.2. Linear (T8) LED Lamp Performance in Five Types of Recessed Troffers
Energy Technology Data Exchange (ETDEWEB)
None
2014-05-01
Although lensed troffers are numerous, there are many other types of optical systems as well. This report looks at the performance of three linear (T8) LED lamps—chosen primarily based on their luminous intensity distributions (narrow, medium, and wide beam angles)—as well as a benchmark fluorescent lamp in five different troffer types. Also included are the results of a subjective evaluation. Results show that linear (T8) LED lamps can improve luminaire efficiency in K12-lensed and parabolic-louvered troffers, effect little change in volumetric and high-performance diffuse-lensed type luminaires, but reduce efficiency in recessed indirect troffers. These changes can be accompanied by visual appearance and visual comfort consequences, especially when LED lamps with clear lenses and narrow distributions are installed. Linear (T8) LED lamps with diffuse apertures exhibited wider beam angles, performed more similarly to fluorescent lamps, and received better ratings from observers. Guidance is provided on which luminaires are the best candidates for retrofitting with linear (T8) LED lamps.
Process mapping as a framework for performance improvement in emergency general surgery.
DeGirolamo, Kristin; D'Souza, Karan; Hall, William; Joos, Emilie; Garraway, Naisan; Sing, Chad Kim; McLaughlin, Patrick; Hameed, Morad
2018-02-01
Emergency general surgery conditions are often thought of as being too acute for the development of standardized approaches to quality improvement. However, process mapping, a concept that has been applied extensively in manufacturing quality improvement, is now being used in health care. The objective of this study was to create process maps for small bowel obstruction in an effort to identify potential areas for quality improvement. We used the American College of Surgeons Emergency General Surgery Quality Improvement Program pilot database to identify patients who received nonoperative or operative management of small bowel obstruction between March 2015 and March 2016. This database, patient charts and electronic health records were used to create process maps from the time of presentation to discharge. Eighty-eight patients with small bowel obstruction (33 operative; 55 nonoperative) were identified. Patients who received surgery had a complication rate of 32%. The processes of care from the time of presentation to the time of follow-up were highly elaborate and variable in terms of duration; however, the sequences of care were found to be consistent. We used data visualization strategies to identify bottlenecks in care, and they showed substantial variability in terms of operating room access. Variability in the operative care of small bowel obstruction is high and represents an important improvement opportunity in general surgery. Process mapping can identify common themes, even in acute care, and suggest specific performance improvement measures.
Performance of engineering undergraduate students in mathematics: A case study in UniMAP
Saad, Syafawati Ab.; Azziz, Nor Hizamiyani Abdul; Zakaria, Siti Aisyah; Yazid, Nornadia Mohd
2015-12-01
The purpose of this paper is to study the performance trend of first-year engineering students at a public university in the mathematics course Engineering Mathematics I. We analyze how the ethnicity factor influenced students' performance in the mathematics course over a three-year period. The performance of the undergraduate students in this study is measured by their cumulative grade point average (CGPA) in the first semester. Analysis of Variance (ANOVA) is used to test for significant differences among the three groups (Malay, Chinese and Indian). The method of simple linear regression (SLR) is used to test the relationship between the performances and to predict future performance in this course. The findings of the study show that Chinese students perform better than Malay and Indian students.
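The one-way ANOVA across the three groups reduces to a few sums of squares. An illustrative computation on synthetic CGPA values (not the UniMAP data); the SLR step would follow the same least-squares pattern on paired scores:

```python
# Synthetic CGPA samples per group (illustrative only).
groups = {
    "Malay":   [2.8, 3.0, 2.9, 3.1, 2.7],
    "Chinese": [3.3, 3.5, 3.2, 3.6, 3.4],
    "Indian":  [3.0, 2.9, 3.1, 3.2, 2.8],
}

all_vals = [v for g in groups.values() for v in g]
grand = sum(all_vals) / len(all_vals)

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2
                 for g in groups.values())
ss_within = sum((v - sum(g) / len(g)) ** 2
                for g in groups.values() for v in g)

df_b = len(groups) - 1
df_w = len(all_vals) - len(groups)
f_stat = (ss_between / df_b) / (ss_within / df_w)   # compare to F(df_b, df_w)
```

A large F statistic relative to the F(df_b, df_w) critical value indicates a significant group difference, as reported for the Chinese group here.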
A novel multispectral glacier mapping method and its performance in Greenland
Citterio, M.; Fausto, R. S.; Ahlstrom, A. P.; Andersen, S. B.
2014-12-01
Multispectral land surface classification methods are widely used for mapping glacier outlines. Significant post-classification manual editing is typically required, and mapping glacier outlines over larger regions remains a rather labour-intensive task. In this contribution we introduce a novel method for mapping glacier outlines from multispectral satellite imagery, requiring only minor manual editing. Over the last decade GLIMS (Global Land Ice Measurements from Space) improved the availability of glacier outlines, and in 2012 the Randolph Glacier Inventory (RGI) attained global coverage by compiling existing and new data sources in the wake of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5). With the launch of Landsat 8 in 2013 and the upcoming ESA (European Space Agency) Sentinel 2 missions, the availability of multispectral imagery may grow faster than our ability to process it into timely and reliable glacier outline products. Improved automatic classification methods would enable a full exploitation of these new data sources. We outline the theoretical basis of the proposed classification algorithm, provide a step-by-step walk-through from raw imagery to finished ice cover grids and vector glacier outlines, and evaluate the performance of the new method in mapping the outlines of glaciers, ice caps and the Greenland Ice Sheet from Landsat 8 OLI imagery. The classification output is compared against manually digitized ice margin positions, the RGI vectors, and the PROMICE (Programme for Monitoring of the Greenland Ice Sheet) aerophotogrammetric map of Greenland ice masses over a sector of the Disko Island surge cluster in West Greenland, the Qassimiut ice sheet lobe in South Greenland, and the A.P. Olsen ice cap in NE Greenland.
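As a point of reference for what automatic classifiers must improve on, a common baseline for multispectral ice mapping thresholds a normalized-difference index of green versus shortwave-infrared reflectance (NDSI-style). The sketch below shows that standard baseline only; it is not the authors' novel method:

```python
def ndsi(green, swir):
    """NDSI-style band ratio; snow/ice reflect green strongly and SWIR weakly."""
    return (green - swir) / (green + swir) if (green + swir) else 0.0

def classify_ice(green_band, swir_band, threshold=0.4):
    """Binary ice/snow mask from per-pixel reflectance grids (lists of rows).
    The 0.4 threshold is a commonly used starting value, not a tuned one."""
    return [[1 if ndsi(g, s) > threshold else 0
             for g, s in zip(grow, srow)]
            for grow, srow in zip(green_band, swir_band)]
```

The manual editing the abstract mentions is typically spent on shadowed ice, debris cover and water, which a fixed threshold like this misclassifies.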
Performance improvement of shunt active power filter based on non-linear least-square approach
DEFF Research Database (Denmark)
Terriche, Yacine
2018-01-01
Nowadays, shunt active power filters (SAPFs) have become a popular solution for power quality issues. A crucial issue in controlling SAPFs is generating the reference compensating current (RCC). The synchronous reference frame (SRF) approach is widely used for generating the RCC due to its simplicity and computation efficiency. However, the SRF approach needs precise information of the voltage phase, which becomes a challenge under adverse grid conditions. A typical solution to answer this need ... This paper proposes an improved open-loop strategy which is unconditionally stable and flexible. The proposed method, based on a non-linear least-squares (NLS) approach, can extract the fundamental voltage and estimate its phase within only half a cycle, even in the presence of odd harmonics and dc offset.
Design and performance of a 3.3-MeV linear induction accelerator (LIA)
International Nuclear Information System (INIS)
Cheng Nianan; Zhang Shouyun; Tao Zucong
1992-01-01
A 3.3-MeV linear induction accelerator (LIA) has been designed and constructed at the China Academy of Engineering Physics. The parameters of 3.4 MeV, 2 kA, 80 ns and 1 × 10⁴ A/(rad·cm)² have been achieved. It has been used for SG-1 FEL experiments. The accelerator is mounted on a movable frame so that, after moving 3 m transversely, it can be assembled with more modules into a 10-MeV LIA. The authors summarize the physics and engineering aspects of the LIA facility and describe the means used to measure beam characteristics.
International Nuclear Information System (INIS)
Rozite, L; Joffe, R; Varna, J; Nyström, B
2012-01-01
The behaviour of highly non-linear cellulosic fibers and their composites is characterized. Micro-mechanisms occurring in these materials are identified. Mechanical properties of regenerated cellulose fibers and composites are obtained using a simple tensile test. The visco-plastic and visco-elastic properties of the materials are analyzed using creep tests. Two bio-based resins are used in this study: Tribest and EpoBioX. Glass and flax fiber composites are used as reference materials for comparison with Cordenka fiber laminates.
Rozite, L.; Joffe, R.; Varna, J.; Nyström, B.
2012-02-01
The behaviour of highly non-linear cellulosic fibers and their composites is characterized. Micro-mechanisms occurring in these materials are identified. Mechanical properties of regenerated cellulose fibers and composites are obtained using a simple tensile test. The visco-plastic and visco-elastic properties of the materials are analyzed using creep tests. Two bio-based resins are used in this study: Tribest and EpoBioX. Glass and flax fiber composites are used as reference materials for comparison with Cordenka fiber laminates.
Directory of Open Access Journals (Sweden)
Kamarulzaman Kamarudin
2014-12-01
Full Text Available This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open-source SLAM algorithms. The experiments were conducted in two different environments: a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real-time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of a Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of the Kinect's depth sensor often cause the map to be inaccurate, especially in featureless areas; the Kinect is therefore not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks.
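As a loose illustration of the map building these SLAM packages perform, the sketch below updates a 2D occupancy grid from a single simulated range-sensor beam using Bresenham ray tracing. The hit/miss counting scheme is a simplification: Gmapping and Hector SLAM use log-odds updates combined with particle filtering or scan matching.

```python
import numpy as np

def bresenham(r0, c0, r1, c1):
    """Integer grid cells on the line from (r0, c0) to (r1, c1)."""
    cells = []
    dr, dc = abs(r1 - r0), abs(c1 - c0)
    sr, sc = (1 if r1 >= r0 else -1), (1 if c1 >= c0 else -1)
    err = dr - dc
    r, c = r0, c0
    while True:
        cells.append((r, c))
        if (r, c) == (r1, c1):
            break
        e2 = 2 * err
        if e2 > -dc:
            err -= dc
            r += sr
        if e2 < dr:
            err += dr
            c += sc
    return cells

def update_grid(grid, sensor_rc, hit_rc):
    """Mark cells along the beam as free evidence and the endpoint as occupied."""
    ray = bresenham(*sensor_rc, *hit_rc)
    for cell in ray[:-1]:
        grid[cell] -= 1          # beam passed through: free evidence
    grid[ray[-1]] += 2           # beam terminated here: occupied evidence
    return grid

grid = np.zeros((20, 20))
grid = update_grid(grid, (10, 0), (10, 15))   # one horizontal beam
```

Repeating this over many scans from estimated poses is the mapping half of SLAM; the localization half estimates those poses from the same scans.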
Macnamara, Brooke N; Frank, David J
2018-05-01
For well over a century, scientists have investigated individual differences in performance. The majority of studies have focused on either differences in practice or differences in cognitive resources. However, the predictive ability of either practice or cognitive resources varies considerably across tasks. We are the first to examine the impact of task characteristics on learning and performance in a complex task while controlling for other task characteristics. In 2 experiments we test key theoretical task characteristics thought to moderate the relationship between practice, cognitive resources, and performance. We devised a task where each of several key task characteristics can be manipulated independently. Participants played 5 rounds of a game similar to the popular tower defense videogame Plants vs. Zombies, where both cognitive load and game characteristics were manipulated. In Experiment 1, participants played either a consistently mapped version, in which the stimuli and the associated meaning of their properties were constant across the 5 rounds, or a variably mapped version, in which the stimuli and the associated meaning of their properties changed every few minutes. In Experiment 2, participants played either a static version (turn taking with no time pressure) or a dynamic version (the stimuli moved regardless of participants' response rates). In Experiment 1, participants' accuracy and efficiency were substantially hindered in the variably mapped conditions. In Experiment 2, learning and performance accuracy were hindered in the dynamic conditions, especially under cognitive load. Our results suggest that task characteristics affect the relative importance of cognitive resources and practice in predicting learning and performance. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Performance of T2 Maps in the Detection of Prostate Cancer.
Chatterjee, Aritrick; Devaraj, Ajit; Mathew, Melvy; Szasz, Teodora; Antic, Tatjana; Karczmar, Gregory S; Oto, Aytekin
2018-05-03
This study compares the performance of T2 maps with T2-weighted (T2W) magnetic resonance images in the detection of prostate cancer (PCa). The prospective study was institutional review board approved. Consenting patients (n = 45) with histologically confirmed PCa underwent preoperative 3-T magnetic resonance imaging with or without an endorectal coil. Two radiologists, working independently, marked regions of interest (ROIs) on PCa lesions separately on T2W images and T2 maps. Each ROI was assigned a score of 1-5 based on the confidence in accurately detecting cancer, with 5 being the highest confidence. Subsequently, the histologically confirmed PCa lesions (n = 112) on whole-mount sections were matched with ROIs to calculate sensitivity, positive predictive value (PPV), and radiologist confidence score. Quantitative T2 values of PCa and benign tissue ROIs were measured. Sensitivity and confidence score for PCa detection were similar for T2W images (51%, 4.5 ± 0.8) and T2 maps (52%, 4.5 ± 0.6). However, PPV was significantly higher (P = .001) for T2 maps (88%) compared to T2W images (72%). The use of endorectal coils nominally improved sensitivity (T2W: 55% vs 47%, T2 map: 54% vs 48%) compared to no endorectal coil, but not the PPV or the confidence score. Quantitative T2 values for PCa (105 ± 28 milliseconds) were significantly (P = 9.3 × 10⁻¹⁴) lower than for benign peripheral zone tissue (211 ± 71 milliseconds), with a moderate significant correlation with Gleason score (ρ = -0.284). Our study shows that review of T2 maps by radiologists has similar sensitivity but higher PPV compared to T2W images. Additional quantitative information obtained from T2 maps is helpful in differentiating cancer from normal prostate tissue and determining its aggressiveness. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
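The sensitivity and PPV figures reported above follow from simple counts over the matched ROIs. The TP/FN/FP splits below are invented to approximately reproduce the reported percentages for the 112 lesions, purely to show the arithmetic:

```python
def detection_metrics(true_positives, false_negatives, false_positives):
    """Sensitivity = TP / (TP + FN); PPV = TP / (TP + FP)."""
    sensitivity = true_positives / (true_positives + false_negatives)
    ppv = true_positives / (true_positives + false_positives)
    return sensitivity, ppv

# Hypothetical splits of the 112 lesions (not the study's raw counts).
sens_t2map, ppv_t2map = detection_metrics(58, 54, 8)    # ~52% sensitivity, ~88% PPV
sens_t2w,   ppv_t2w   = detection_metrics(57, 55, 22)   # ~51% sensitivity, ~72% PPV
```

The comparison in the abstract is exactly this: nearly equal sensitivities, but fewer false-positive ROIs on T2 maps, hence the higher PPV.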
Electromagnetic Performance Calculation of HTS Linear Induction Motor for Rail Systems
International Nuclear Information System (INIS)
Liu, Bin; Fang, Jin; Cao, Junci; Chen, Jie; Shu, Hang; Sheng, Long
2017-01-01
According to a high temperature superconducting (HTS) linear induction motor (LIM) designed for rail systems, the influence of electromagnetic parameters and mechanical structure parameters on the electromagnetic horizontal thrust, vertical force of HTS LIM and the maximum vertical magnetic field of HTS windings are analyzed. Through the research on the vertical field of HTS windings, the development regularity of the HTS LIM maximum input current with different stator frequency and different thickness value of the secondary conductive plate is obtained. The theoretical results are of great significance to analyze the stability of HTS LIM. Finally, based on theory analysis, HTS LIM test platform was built and the experiment was carried out with load. The experimental results show that the theoretical analysis is correct and reasonable. (paper)
Electromagnetic Performance Calculation of HTS Linear Induction Motor for Rail Systems
Liu, Bin; Fang, Jin; Cao, Junci; Chen, Jie; Shu, Hang; Sheng, Long
2017-07-01
According to a high temperature superconducting (HTS) linear induction motor (LIM) designed for rail systems, the influence of electromagnetic parameters and mechanical structure parameters on the electromagnetic horizontal thrust, vertical force of HTS LIM and the maximum vertical magnetic field of HTS windings are analyzed. Through the research on the vertical field of HTS windings, the development regularity of the HTS LIM maximum input current with different stator frequency and different thickness value of the secondary conductive plate is obtained. The theoretical results are of great significance to analyze the stability of HTS LIM. Finally, based on theory analysis, HTS LIM test platform was built and the experiment was carried out with load. The experimental results show that the theoretical analysis is correct and reasonable.
Impact of detector solenoid on the Compact Linear Collider luminosity performance
Inntjore Levinsen, Y.; Tomás, Rogelio; Schulte, Daniel
2014-05-27
In order to obtain the necessary luminosity with a reasonable amount of beam power, the Compact Linear Collider (CLIC) design includes an unprecedented collision beam size of σ = 1 nm vertically and σ = 45 nm horizontally. Given the small and very flat beams, the luminosity can be significantly degraded by the impact of the experimental solenoid field in combination with a large crossing angle. Main effects include y-x' coupling and an increase of vertical dispersion. Additionally, Incoherent Synchrotron Radiation (ISR) from the orbit deflection created by the solenoid field increases the beam emittance. A detailed study of the impact of a realistic solenoid field and the associated correction techniques for the CLIC Final Focus is presented. In particular, the impact of techniques to compensate the beam optics distortions due to the detector solenoid main field and its overlap with the final focus magnets is shown. The unrecoverable luminosity loss due to ISR has been evaluated, and found to...
Improving total-building seismic performance using linear fluid viscous dampers
Del Gobbo, GM; Blakeborough, A; Williams, MS
2018-01-01
Previous research has revealed that Eurocode-compliant structures can experience structural and nonstructural damage during earthquakes. Retrofitting buildings with fluid viscous dampers (FVDs) can improve interstorey drifts and floor accelerations, two structural parameters that characterize seismic demand. However, previous research on FVD applications for improving seismic performance has focused on structural performance. Structural parameters such as interstorey drifts and floor accelera...
On the significance of the noise model for the performance of a linear MPC in closed-loop operation
DEFF Research Database (Denmark)
Hagdrup, Morten; Boiroux, Dimitri; Mahmoudi, Zeinab
2016-01-01
This paper discusses the significance of the noise model for the performance of a Model Predictive Controller operating in closed-loop. The process model is parametrized as a continuous-time (CT) model and the relevant sampled-data filtering and control algorithms are developed. Using CT models typically means fewer parameters to identify. Systematic tuning of such controllers is discussed. Simulation studies are conducted for linear time-invariant systems showing that choosing a noise model of low order is beneficial for closed-loop performance. (C) 2016, IFAC (International Federation...
Misté, Gianluigi Alberto; Benini, Ernesto
2012-06-01
Compressor map interpolation is usually performed through the introduction of auxiliary coordinates (β). In this paper, a new analytical bivariate β function definition to be used in compressor map interpolation is studied. The function has user-defined parameters that must be adjusted to properly fit to a single map. The analytical nature of β allows for rapid calculations of the interpolation error estimation, which can be used as a quantitative measure of interpolation accuracy and also as a valid tool to compare traditional β function interpolation with new approaches (artificial neural networks, genetic algorithms, etc.). The quality of the method is analyzed by comparing the error output to the one of a well-known state-of-the-art methodology. This comparison is carried out for two different types of compressor and, in both cases, the error output using the method presented in this paper is found to be consistently lower. Moreover, an optimization routine able to locally minimize the interpolation error by shape variation of the β function is implemented. Further optimization introducing other important criteria is discussed.
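A toy version of map interpolation with an auxiliary coordinate can be sketched as follows: mass flow is tabulated on a (speed, β) grid and interpolated, and the interpolation error is estimated against a known analytic surface standing in for rig data. The surface, grid, and test points are assumptions for illustration, not the paper's map or its analytical β definition.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy compressor map: mass flow tabulated on a (speed, beta) grid.
speed = np.linspace(0.6, 1.0, 5)          # corrected speed, normalized
beta = np.linspace(0.0, 1.0, 6)           # auxiliary map coordinate
S, B = np.meshgrid(speed, beta, indexing="ij")
mass_flow = 10.0 * S**1.5 * (0.7 + 0.3 * B)   # made-up smooth surface

interp = RegularGridInterpolator((speed, beta), mass_flow)  # linear by default

# Error estimate: compare interpolated vs. "true" values at off-grid points,
# a crude stand-in for the paper's analytical error metric.
test_pts = np.array([[0.75, 0.35], [0.9, 0.8]])
true_vals = 10.0 * test_pts[:, 0]**1.5 * (0.7 + 0.3 * test_pts[:, 1])
max_err = float(np.abs(interp(test_pts) - true_vals).max())
```

An optimizer in the spirit of the paper would then reshape the β coordinate (or refine the grid) to drive this error measure down.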
Directory of Open Access Journals (Sweden)
Goutam Sahana
Full Text Available INTRODUCTION: The state-of-the-art for dealing with multiple levels of relationship among the samples in genome-wide association studies (GWAS) is unified mixed model analysis (MMA). This approach is very flexible, can be applied to both family-based and population-based samples, and can be extended to incorporate other effects in a straightforward and rigorous fashion. Here, we present a complementary approach, called 'GENMIX' (genealogy-based mixed model), which combines advantages from two powerful GWAS methods: genealogy-based haplotype grouping and MMA. SUBJECTS AND METHODS: We validated GENMIX using genotyping data of Danish Jersey cattle and simulated phenotypes, and compared it to the MMA. We simulated scenarios for three levels of heritability (0.21, 0.34, and 0.64), seven levels of MAF (0.05, 0.10, 0.15, 0.20, 0.25, 0.35, and 0.45) and five levels of QTL effect (0.1, 0.2, 0.5, 0.7 and 1.0 in phenotypic standard deviation units). Each of these 105 possible combinations (3 h² × 7 MAF × 5 effects) of scenarios was replicated 25 times. RESULTS: GENMIX provides a better ranking of markers close to the causative locus's location. GENMIX outperformed MMA when the QTL effect was small and the MAF at the QTL was low. In scenarios where the MAF was high or the QTL affecting the trait had a large effect, both GENMIX and MMA performed similarly. CONCLUSION: In discovery studies, where high-ranking markers are identified and later examined in validation studies, we therefore expect GENMIX to enrich candidates brought to follow-up studies with true positives over false positives more than the MMA would.
a Performance Comparison of Feature Detectors for Planetary Rover Mapping and Localization
Wan, W.; Peng, M.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Teng, B.; Mao, X.; Zhao, Q.; Xin, X.; Jia, M.
2017-07-01
Feature detection and matching are key techniques in computer vision and robotics, and have been successfully implemented in many fields. So far there is no performance comparison of feature detectors and matching methods for planetary mapping and rover localization using rover stereo images. In this research, we present a comprehensive evaluation and comparison of six feature detectors, including Moravec, Förstner, Harris, FAST, SIFT and SURF, aiming for optimal implementation of feature-based matching in the planetary surface environment. To facilitate quantitative analysis, a series of evaluation criteria, including distribution evenness of matched points, coverage of detected points, and feature matching accuracy, are developed in the research. In order to perform exhaustive evaluation, stereo images, simulated under different baselines, pitch angles, and intervals of adjacent rover locations, are taken as the experimental data source. The comparison results show that SIFT offers the best overall performance; in particular, it is less sensitive to changes between images taken at adjacent locations.
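Of the detectors compared, Harris is compact enough to sketch. Below is a minimal numpy implementation of the Harris response on a synthetic image; the window size and the constant k are conventional defaults, not values taken from this evaluation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def harris_response(img, k=0.04, window=5):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    locally averaged structure tensor of the image gradients."""
    iy, ix = np.gradient(img.astype(float))   # gradients along rows, cols
    ixx = uniform_filter(ix * ix, window)
    iyy = uniform_filter(iy * iy, window)
    ixy = uniform_filter(ix * iy, window)
    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    return det - k * trace ** 2

# Synthetic test image: bright square on a dark background.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
R = harris_response(img)
corner_r = R[10, 10]   # a corner of the square: gradients in both directions
edge_r = R[10, 20]     # midpoint of an edge: gradient in one direction only
```

The defining property is visible directly: the response is positive at corners and negative along straight edges, which is why corner detectors of this family reject edge points.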
Varjacic, Andreja; Mantini, Dante; Demeyere, Nele; Gillebert, Celine R
2018-03-27
The Trail Making Test (TMT) is an extensively used neuropsychological instrument for the assessment of set-switching ability across a wide range of neurological conditions. However, the exact nature of the cognitive processes and associated brain regions contributing to the performance on the TMT remains unclear. In this review, we first introduce the TMT by discussing its administration and scoring approaches. We then examine converging evidence and divergent findings concerning the brain regions related to TMT performance, as identified by lesion-symptom mapping studies conducted in brain-injured patients and functional magnetic resonance imaging studies conducted in healthy participants. After addressing factors that may account for the heterogeneity in the brain regions reported by these studies, we identify future research endeavours that may permit disentangling the different processes contributing to TMT performance and relating them to specific brain circuits. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.
A PERFORMANCE COMPARISON OF FEATURE DETECTORS FOR PLANETARY ROVER MAPPING AND LOCALIZATION
Directory of Open Access Journals (Sweden)
W. Wan
2017-07-01
Full Text Available Feature detection and matching are key techniques in computer vision and robotics, and have been successfully implemented in many fields. So far there is no performance comparison of feature detectors and matching methods for planetary mapping and rover localization using rover stereo images. In this research, we present a comprehensive evaluation and comparison of six feature detectors, including Moravec, Förstner, Harris, FAST, SIFT and SURF, aiming for optimal implementation of feature-based matching in the planetary surface environment. To facilitate quantitative analysis, a series of evaluation criteria, including distribution evenness of matched points, coverage of detected points, and feature matching accuracy, are developed in the research. In order to perform exhaustive evaluation, stereo images, simulated under different baselines, pitch angles, and intervals of adjacent rover locations, are taken as the experimental data source. The comparison results show that SIFT offers the best overall performance; in particular, it is less sensitive to changes between images taken at adjacent locations.
Smith, Timothy D.; Steffen, Christopher J., Jr.; Yungster, Shaye; Keller, Dennis J.
1998-01-01
The all-rocket mode of operation is shown to be a critical factor in the overall performance of a rocket based combined cycle (RBCC) vehicle. An axisymmetric RBCC engine was used to determine specific impulse efficiency values based upon both full flow and gas generator configurations. Design of experiments methodology was used to construct a test matrix and multiple linear regression analysis was used to build parametric models. The main parameters investigated in this study were: rocket chamber pressure, rocket exit area ratio, injected secondary flow, mixer-ejector inlet area, mixer-ejector area ratio, and mixer-ejector length-to-inlet diameter ratio. A perfect gas computational fluid dynamics analysis, using both the Spalart-Allmaras and k-omega turbulence models, was performed with the NPARC code to obtain values of vacuum specific impulse. Results from the multiple linear regression analysis showed that for both the full flow and gas generator configurations increasing mixer-ejector area ratio and rocket area ratio increase performance, while increasing mixer-ejector inlet area ratio and mixer-ejector length-to-diameter ratio decrease performance. Increasing injected secondary flow increased performance for the gas generator analysis, but was not statistically significant for the full flow analysis. Chamber pressure was found to be not statistically significant.
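The design-of-experiments plus multiple-linear-regression workflow can be illustrated on a hypothetical 2³ factorial design. The coded factors stand in for three of the study's parameters, and the specific-impulse efficiencies are made up, chosen only so the fitted signs echo the reported trends (area ratios increase performance):

```python
import numpy as np

# Full-factorial 2^3 design in coded (-1, +1) units: secondary flow,
# rocket area ratio, mixer-ejector area ratio (illustrative labels).
levels = np.array([[s, r, m] for s in (-1, 1) for r in (-1, 1) for m in (-1, 1)])

# Made-up specific impulse efficiency responses, one per design point.
isp = np.array([0.80, 0.86, 0.84, 0.90, 0.79, 0.85, 0.83, 0.89])

# Multiple linear regression via ordinary least squares:
# columns are intercept + the three main effects.
X = np.column_stack([np.ones(len(isp)), levels])
coef, *_ = np.linalg.lstsq(X, isp, rcond=None)
# coef[0] is the grand mean; coef[1:] are half the classical factorial effects.
```

Reading the signs of `coef[1:]` is exactly the kind of statement the abstract makes: positive coefficients increase performance, negative ones decrease it, and coefficients indistinguishable from noise are "not statistically significant".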
PERFORMANCE OF FOLIAR FERTILIZERS AND LINEAR CORRELATIONS AMONG SOYBEAN YIELD COMPONENTS
Directory of Open Access Journals (Sweden)
Vinícius Jardel Szareski
2017-01-01
Full Text Available The objective was to evaluate the response of different foliar fertilizers applied to the soybean crop and the linear associations among grain yield components, under the soil and climate conditions of the Alto Uruguai region, RS. The experiment was conducted in a randomized block design with three replications. The treatments tested were: T1: no biostimulant application; T2: application of NITAMIN®; T3: application of BIOZIME®; T4: application of Bioamino Extra®; T5: application of NIPHOKAN®; the soybean grain yield components were evaluated. Foliar application of micronutrients and biostimulants did not increase soybean grain yield under the soil and climate conditions of the Alto Uruguai region. Grain yield showed a positive correlation with the number of branches, the number of pods on branches, the total number of pods, the number of grains per plant, and the thousand-grain weight.
Assessing Performance of Multipurpose Reservoir System Using Two-Point Linear Hedging Rule
Sasireka, K.; Neelakantan, T. R.
2017-07-01
Reservoir operation is one of the important fields of water resource management. Innovative techniques in water resource management focus on optimizing the available water and on decreasing the environmental impact of water utilization on the natural environment. In the operation of a multi-reservoir system, efficient regulation of releases to satisfy demands for various purposes such as domestic supply, irrigation and hydropower can increase the benefit from the reservoir as well as significantly reduce the damage due to floods. Hedging rules are an emerging technique in reservoir operation that reduces the severity of droughts by accepting a number of smaller shortages. The key objective of this paper is to maximize the minimum power production and improve the reliability of water supply for municipal and irrigation purposes by using a hedging rule. In this paper, a Type II two-point linear hedging rule is applied to improve the operation of the Bargi reservoir in the Narmada basin in India. The results obtained from simulation of the hedging rule are compared with results from the Standard Operating Policy; they show that the application of hedging significantly improved the reliability of water supply, the reliability of irrigation releases, and firm power production.
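A minimal sketch of a two-point linear hedging release rule follows, with an assumed form: a fixed hedging factor below the lower trigger and a linear ramp between the two triggers. The trigger volumes and hedging factor are illustrative, not Bargi reservoir values or the paper's exact Type II formulation.

```python
def two_point_hedging_release(available_water, demand, v1, v2, min_factor=0.6):
    """Two-point linear hedging (sketch, assumed form).

    Below v1: release a fixed fraction of demand (maximum hedging).
    Between v1 and v2: release ramps linearly up to full demand.
    At or above v2: release full demand (no hedging).
    """
    if available_water <= v1:
        return min_factor * demand
    if available_water >= v2:
        return demand
    frac = (available_water - v1) / (v2 - v1)
    return demand * (min_factor + (1.0 - min_factor) * frac)
```

The point of the rule is visible in the numbers: when storage is low the release is deliberately cut to 60% of demand, spreading a single severe shortage into several smaller ones.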
Performance improvement of shunt active power filter based on non-linear least-square approach
DEFF Research Database (Denmark)
Terriche, Yacine
2018-01-01
Nowadays, shunt active power filters (SAPFs) have become a popular solution for power quality issues. A crucial issue in controlling SAPFs, which is highly correlated with their accuracy, flexibility and dynamic behavior, is generating the reference compensating current (RCC). The synchronous reference frame (SRF) approach is widely used for generating the RCC due to its simplicity and computation efficiency. However, the SRF approach needs precise information of the voltage phase, which becomes a challenge under adverse grid conditions. A typical solution to answer this need ... This paper proposes an improved open-loop strategy which is unconditionally stable and flexible. The proposed method, based on a non-linear least-squares (NLS) approach, can extract the fundamental voltage and estimate its phase within only half a cycle, even in the presence of odd harmonics and dc offset.
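The core idea of fitting only the fundamental component to a distorted waveform can be sketched with a least-squares fit. For simplicity the fit below uses one full cycle (over which the harmonics and dc offset are orthogonal to the fundamental), whereas the paper's NLS method needs only half a cycle; the frequency, harmonic content, and all signal values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

F0 = 50.0  # assumed grid fundamental frequency (Hz)

def fundamental(t, amp, phase, dc):
    """Fundamental-plus-offset model whose parameters the fit estimates."""
    return amp * np.sin(2 * np.pi * F0 * t + phase) + dc

# Synthetic distorted grid voltage: fundamental at phase 0.5 rad, a dc offset,
# and 3rd/5th harmonics (per-unit amplitudes, made up).
t = np.arange(128) / 128 * (1 / F0)          # one full cycle, endpoint excluded
v = (1.0 * np.sin(2 * np.pi * F0 * t + 0.5) + 0.2
     + 0.15 * np.sin(3 * 2 * np.pi * F0 * t)
     + 0.08 * np.sin(5 * 2 * np.pi * F0 * t))

(amp_est, phase_est, dc_est), _ = curve_fit(fundamental, t, v, p0=[1.0, 0.0, 0.0])
```

Because harmonics and offset project to zero against the fundamental over a full cycle, the fit recovers amplitude, phase, and dc essentially exactly; the recovered phase is what an SRF-style controller would otherwise need from a separate synchronization stage.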
Jakubowski, J.; Stypulkowski, J. B.; Bernardeau, F. G.
2017-12-01
The first phase of the Abu Hamour drainage and storm tunnel was completed in early 2017. The 9.5 km long, 3.7 m diameter tunnel was excavated with two Earth Pressure Balance (EPB) Tunnel Boring Machines from Herrenknecht. TBM operation processes were monitored and recorded by Data Acquisition and Evaluation System. The authors coupled collected TBM drive data with available information on rock mass properties, cleansed, completed with secondary variables and aggregated by weeks and shifts. Correlations and descriptive statistics charts were examined. Multivariate Linear Regression and CART regression tree models linking TBM penetration rate (PR), penetration per revolution (PPR) and field penetration index (FPI) with TBM operational and geotechnical characteristics were performed for the conditions of the weak/soft rock of Doha. Both regression methods are interpretable and the data were screened with different computational approaches allowing enriched insight. The primary goal of the analysis was to investigate empirical relations between multiple explanatory and responding variables, to search for best subsets of explanatory variables and to evaluate the strength of linear and non-linear relations. For each of the penetration indices, a predictive model coupling both regression methods was built and validated. The resultant models appeared to be stronger than constituent ones and indicated an opportunity for more accurate and robust TBM performance predictions.
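The multivariate-linear-regression half of the modelling above can be sketched on synthetic stand-ins for TBM drive data; the variable names (thrust, cutterhead RPM, rock strength) and the linear relation are illustrative assumptions, not the Abu Hamour dataset or its fitted coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic explanatory variables standing in for TBM drive data.
n = 200
thrust = rng.uniform(5, 15, n)       # MN (made up)
rpm = rng.uniform(1, 3, n)           # cutterhead revolutions per minute
ucs = rng.uniform(20, 80, n)         # rock strength index, MPa (made up)

# "True" linear relation for penetration rate plus measurement noise.
pr = 1.0 + 0.30 * thrust + 0.80 * rpm - 0.02 * ucs + rng.normal(0, 0.1, n)

# Multivariate linear regression via ordinary least squares.
X = np.column_stack([np.ones(n), thrust, rpm, ucs])
coef, *_ = np.linalg.lstsq(X, pr, rcond=None)

resid = pr - X @ coef
r2 = 1 - resid.var() / pr.var()      # goodness of fit
```

A CART regression tree fitted to the same matrix would supply the non-linear, rule-based counterpart; the paper's final predictor couples the two.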
Directory of Open Access Journals (Sweden)
Lim Meng-Hui
2011-01-01
Full Text Available Biometric discretization extracts a binary string from a set of real-valued features per user. This representative string can be used as a cryptographic key in many security applications upon error correction. Discretization performance should not degrade significantly from the actual continuous features-based classification performance. However, numerous discretization approaches based on ineffective encoding schemes have been put forward. Therefore, the correlation between such discretization and classification has never been made clear. In this article, we aim to bridge the gap between the continuous and Hamming domains, and clarify how discretization based on equal-width quantization and linearly separable subcode encoding affects the classification performance in the Hamming domain. We further illustrate how such discretization can be applied in order to obtain a highly resembling classification performance under the general Lp distance and the inner product metrics. Finally, empirical studies conducted on two benchmark face datasets vindicate our analysis results.
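Equal-width quantization followed by linearly separable subcode (LSSC, i.e., thermometer) encoding can be sketched directly. The key property, and the reason distances survive the move to the Hamming domain, is that the Hamming distance between two LSSC codewords equals the difference of their bin indices. Bin counts and feature ranges below are arbitrary.

```python
def equal_width_quantize(x, lo, hi, n_intervals):
    """Map a real-valued feature into one of n_intervals equal-width bins."""
    idx = int((x - lo) / (hi - lo) * n_intervals)
    return min(max(idx, 0), n_intervals - 1)   # clamp boundary values

def lssc_encode(index, n_intervals):
    """Linearly separable subcode (thermometer code): index i maps to i ones
    followed by zeros, total length n_intervals - 1.  Then the Hamming
    distance between codewords for indices i and j is exactly |i - j|."""
    return [1] * index + [0] * (n_intervals - 1 - index)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

bin_a = equal_width_quantize(0.26, 0.0, 1.0, 8)   # one user's feature value
bin_b = equal_width_quantize(0.70, 0.0, 1.0, 8)   # another user's value
d = hamming(lssc_encode(bin_a, 8), lssc_encode(bin_b, 8))
```

Per-feature codewords are concatenated across the feature vector to form the final binary string used as key material.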
A Family of High-Performance Solvers for Linear Model Predictive Control
DEFF Research Database (Denmark)
Frison, Gianluca; Sokoler, Leo Emil; Jørgensen, John Bagterp
2014-01-01
In Model Predictive Control (MPC), an optimization problem has to be solved at each sampling time, and this has traditionally limited the use of MPC to systems with slow dynamics. In this paper, we propose an efficient solution strategy for the unconstrained sub-problems that give the search-direction in Interior-Point (IP) methods for MPC, and that usually are the computational bottle-neck. This strategy combines a Riccati-like solver with the use of high-performance computing techniques: in particular, in this paper we explore the performance boost given by the use of single precision computation...
Eames, P. C.; Norton, B.
A numerical simulation model was employed to investigate the effects of ambient temperature and insolation on the efficiency of compound parabolic concentrating solar energy collectors. The limitations of presently-used collector performance characterization curves were investigated and a new approach proposed.
Ende, D.A. van den; Bos, B.; Groen, W.A.
2009-01-01
Piezoelectric bimorph bender actuators find application in a number of areas, ranging from automotive to health care. High-voltage operation in harsh environments poses ever more stringent demands on functionality and lifetime. In these high-performance benders, the trade-off between functionality and
The impact of BeamCal performance at different international linear ...
Indian Academy of Sciences (India)
Abstract. The ILC accelerator parameters and detector concepts are still under discussion in the world-wide community. As will be shown, the performance of the BeamCal, the calorimeter in the very forward area of the ILC detector, is very sensitive to the beam parameter and crossing angle choices. We propose here ...
The impact of BeamCal performance at different international linear ...
Indian Academy of Sciences (India)
The ILC accelerator parameters and detector concepts are still under discussion in the world-wide community. As will be shown, the performance of the BeamCal, the calorimeter in the very forward area of the ILC detector, is very sensitive to the beam parameter and crossing angle choices. We propose here BeamCal ...
Performance Comparison of OpenMP, MPI, and MapReduce in Practical Problems
Directory of Open Access Journals (Sweden)
Sol Ji Kang
2015-01-01
Full Text Available With problem size and complexity increasing, several parallel and distributed programming models and frameworks have been developed to efficiently handle such problems. This paper briefly reviews the parallel computing models and describes three widely recognized parallel programming frameworks: OpenMP, MPI, and MapReduce. OpenMP is the de facto standard for parallel programming on shared-memory systems; MPI is the de facto industry standard for distributed-memory systems; and the MapReduce framework has become the de facto standard for large-scale data-intensive applications. Qualitative pros and cons of each framework are known, but quantitative performance indexes help give a good picture of which framework to use for a given application. As benchmark problems to compare the frameworks, two problems are chosen: the all-pairs-shortest-path problem and a data join problem. This paper presents the parallel programs for these problems implemented on the three frameworks, respectively. It shows experimental results on a cluster of computers, and discusses which is the right tool for the job by analyzing the characteristics and performance of each paradigm.
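The data-join benchmark follows the classic reduce-side join pattern, which can be sketched in plain Python with map, shuffle, and reduce as separate phases (a single-process stand-in for the real distributed frameworks; the tables and tags are made up):

```python
from collections import defaultdict

# Two "tables" to join on user id.
users = [(1, "alice"), (2, "bob")]
orders = [(1, "book"), (1, "pen"), (2, "lamp")]

def map_phase(users, orders):
    """Emit (join-key, tagged-record) pairs from both inputs."""
    for uid, name in users:
        yield uid, ("U", name)
    for uid, item in orders:
        yield uid, ("O", item)

def shuffle(pairs):
    """Group all values by key, as the framework's shuffle stage would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Cross-combine user records with order records sharing a key."""
    joined = []
    for uid, values in groups.items():
        names = [v for tag, v in values if tag == "U"]
        items = [v for tag, v in values if tag == "O"]
        for name in names:
            for item in items:
                joined.append((uid, name, item))
    return sorted(joined)

result = reduce_phase(shuffle(map_phase(users, orders)))
```

In MapReduce proper each phase runs across many machines with the shuffle done by the framework; the OpenMP and MPI versions of the same join distribute the grouping and combining explicitly.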
THE PERFORMANCE ANALYSIS OF A UAV BASED MOBILE MAPPING SYSTEM PLATFORM
Directory of Open Access Journals (Sweden)
M. L. Tsai
2013-08-01
Full Text Available To facilitate applications such as environment detection or disaster monitoring, the development of rapid low cost systems for collecting near real-time spatial information is very critical. Rapid spatial information collection has become an emerging trend for remote sensing and mapping applications. This study develops a Direct Georeferencing (DG) based fixed-wing Unmanned Aerial Vehicle (UAV) photogrammetric platform where an Inertial Navigation System (INS)/Global Positioning System (GPS) integrated Positioning and Orientation System (POS) is implemented to provide the DG capability of the platform. The performance verification indicates that the proposed platform can capture aerial images successfully. A flight test is performed to verify the positioning accuracy in DG mode without using Ground Control Points (GCP). The preliminary results illustrate that horizontal DG positioning accuracies in the x and y axes are around 5 m with 300 m flight height. The positioning accuracy in the z axis is less than 10 m. Such accuracy is good for near real-time disaster relief. The DG ready function of the proposed platform guarantees mapping and positioning capability even in GCP-free environments, which is very important for rapid urgent response for disaster relief. Generally speaking, the data processing time for the DG module, including POS solution generalization, interpolation, Exterior Orientation Parameters (EOP) generation, and feature point measurements, is less than one hour.
Umbarkar, A. J.; Balande, U. T.; Seth, P. D.
2017-06-01
The field of nature-inspired computing and optimization has evolved to solve difficult optimization problems in diverse areas of engineering, science and technology. The Firefly Algorithm (FA) mimics the attraction process of fireflies to solve optimization problems. In FA, fireflies are ranked using a sorting algorithm; the original FA was proposed with bubble sort for ranking the fireflies. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The data set used is the unconstrained benchmark functions from CEC 2005 [22]. FA with bubble sort and FA with quick sort are compared with respect to best, worst and mean fitness, standard deviation, number of comparisons and execution time. The experimental results show that FA with quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, while across varying problem dimensions the algorithm performed better at lower dimensions than at higher ones.
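The sorting change at the heart of the paper can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the comparison-counting quick sort shown is only one simple variant:

```python
def bubble_sort_rank(intensities):
    # Rank fireflies by decreasing brightness with bubble sort,
    # counting comparisons (the O(n^2) step the paper replaces).
    idx = list(range(len(intensities)))
    comparisons = 0
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            comparisons += 1
            if intensities[idx[j]] < intensities[idx[j + 1]]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
    return idx, comparisons

def quick_sort_rank(intensities):
    # Same descending ranking via a comparison-counting quick sort.
    comparisons = 0
    def qsort(idx):
        nonlocal comparisons
        if len(idx) <= 1:
            return idx
        pivot, rest = idx[0], idx[1:]
        comparisons += len(rest)
        brighter = [i for i in rest if intensities[i] >= intensities[pivot]]
        dimmer = [i for i in rest if intensities[i] < intensities[pivot]]
        return qsort(brighter) + [pivot] + qsort(dimmer)
    return qsort(list(range(len(intensities)))), comparisons
```

On a shuffled brightness list both produce the same ranking, with quick sort typically using fewer comparisons; as the abstract notes, fewer comparisons need not translate into lower wall-clock time.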
Li, Zhe; Yang, Guang-Hong
2017-09-01
In this paper, an integrated data-driven fault-tolerant control (FTC) design scheme is proposed under the configuration of the Youla parameterization for multiple-input multiple-output (MIMO) systems. With unknown system model parameters, the canonical form identification technique is first applied to design the residual observer in fault-free case. In faulty case, with online tuning of the Youla parameters based on the system data via the gradient-based algorithm, the fault influence is attenuated with system performance optimization. In addition, to improve the robustness of the residual generator to a class of system deviations, a novel adaptive scheme is proposed for the residual generator to prevent its over-activation. Simulation results of a two-tank flow system demonstrate the optimized performance and effect of the proposed FTC scheme. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Vincenzo Capizzi
2013-05-01
This paper is aimed at identifying and analyzing the contribution of the major drivers of the performance of informal venture capitalists' investments. This study analyzes data on Italian transactions and personal features of Italian Business Angels gathered during 2007–2011 with the support of IBAN (Italian Business Angels Network). The econometric analysis investigates the returns of business angels' investments and their major determinants (industry, exit strategy, experience, holding period, rejection rate, and year of divestiture). The major results are the following: (1) differently from previous literature, the relationship between experience and IRR is quadratic and significant; (2) for the first time, quantitative data confirm that short holding periods (below 3 years) earn a lower IRR; (3) the rejection rate effect is logarithmic and its impact on IRR is positive and significant. Finally, the outcomes of the empirical analysis performed in this study allow identifying new and concrete insights on possible policy interventions.
Firm Size as Moderator to Non-Linear Leverage-Performance Relation: An Emerging Market Review
Directory of Open Access Journals (Sweden)
Umar Farooq
2017-08-01
such losses are more prominent for small size firms. Results also show that the leverage-performance relation is nonlinear for medium and large size firms. However, these firms are not targeting the optimal level and are overleveraged, which ultimately decreases their profits. So, financial managers of small size firms should avoid debt financing, while for large and medium size firms, managers need to adjust their debt ratio to its optimal level.
Linear optics and quantum maps
International Nuclear Information System (INIS)
Aiello, A.; Puentes, G.; Woerdman, J. P.
2007-01-01
We present a theoretical analysis of the connection between classical polarization optics and the quantum mechanics of two-level systems. First, we review the matrix formalism of classical polarization optics from a quantum information perspective. In this manner the passage from the Stokes-Jones-Mueller description of classical optical processes to the representation of one- and two-qubit quantum operations becomes straightforward. Second, as a practical application of our classical-vs-quantum formalism, we show how two-qubit maximally entangled mixed states can be generated by using polarization and spatial modes of photons generated via spontaneous parametric down conversion.
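The Jones-matrix/qubit correspondence reviewed above can be illustrated with a minimal numpy sketch (a standard textbook example, not taken from the paper): an ideal horizontal polarizer acts on a diagonally polarized photon as a non-trace-preserving quantum operation on the polarization qubit, transmitting half the intensity.

```python
import numpy as np

# Polarization qubit basis: |H> = [1, 0], |V> = [0, 1].
D = np.array([1, 1], dtype=complex) / np.sqrt(2)   # diagonal state (|H>+|V>)/sqrt(2)

# Jones matrix of an ideal horizontal linear polarizer: a projector, i.e. a
# non-trace-preserving quantum operation on the polarization qubit.
polarizer_H = np.array([[1, 0], [0, 0]], dtype=complex)

out = polarizer_H @ D                  # un-normalized post-selection state
transmission = np.vdot(out, out).real  # fraction of intensity transmitted
```

Renormalizing `out` gives the post-selected state |H>, exactly as a projective measurement outcome on a qubit.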
Stirling Convertor Performance Mapping Test Results for Future Radioisotope Power Systems
Qiu, Songgang; Peterson, Allen A.; Faultersack, Franklyn D.; Redinger, Darin L.; Augenblick, John E.
2004-02-01
Long-life radioisotope-fueled generators based on free-piston Stirling convertors are an energy-conversion solution for future space applications. The high efficiency of Stirling machines makes them more attractive than the thermoelectric generators currently used in space. Stirling Technology Company (STC) has been performance-testing its Stirling generators to provide data for potential system integration contractors. This paper describes the most recent test results from the STC RemoteGen™ 55 W-class Stirling generators (RG-55). Comparisons are made between the new data and previous Stirling thermodynamic simulation models. Performance-mapping tests are presented including variations in: internal charge pressure, cold end temperature, hot end temperature, alternator temperature, input power, and variation of control voltage.
Institute of Scientific and Technical Information of China (English)
YUAN Dongfeng; WANG Chengxiang; YAO Qi; CAO Zhigang
2001-01-01
Based on the "capacity rule", the performance of multilevel coding (MLC) schemes with different set partitioning strategies and decoding methods in AWGN and Rayleigh fading channels is investigated, in which BCH codes are chosen as component codes and 8ASK modulation is used. Numerical results indicate that the MLC scheme with the UP strategy can obtain optimal performance in AWGN channels and BP is the best mapping strategy for Rayleigh fading channels. The BP strategy is robust in both kinds of channels for realizing an optimum MLC system. Multistage decoding (MSD) is a sub-optimal decoding method of MLC for both channels. For the Ungerboeck partitioning (UP) and mixed partitioning (MP) strategies, MSD is strongly recommended for the MLC system, while for the BP strategy, PDL is suggested as a simple decoding method compared with MSD.
Abramowicz, H.; Afanaciev, K.; Aguilar, J.; Alvarez, E.; Avila, D.; Benhammou, Y.; Bortko, L.; Borysov, O.; Bergholz, M.; Bozovic-Jelisavcic, I.; Castro, E.; Chelkov, G.; Coca, C.; Daniluk, W.; Dumitru, L.; Elsener, K.; Fadeyev, V.; Firlej, M.; Firu, E.; Fiutowski, T.; Ghenescu, V.; Gostkin, M.; Henschel, H.; Idzik, M.; Ishikawa, A.; Kananov, S.; Kollowa, S.; Kotov, S.; Kotula, J.; Kozhevnikov, D.; Kruchonok, V.; Krupa, B.; Kulis, Sz.; Lange, W.; Lesiak, T.; Levy, A.; Levy, I.; Lohmann, W.; Lukic, S.; Milke, C.; Moron, J.; Moszczynski, A.; Neagu, A.T.; Novgorodova, O.; Oliwa, K.; Orlandea, M.; Pandurovic, M.; Pawlik, B.; Preda, T.; Przyborowski, D.; Rosenblat, O.; Sailer, A.; Sato, Y.; Schumm, B.; Schuwalow, S.; Smiljanic, I.; Smolyanskiy, P.; Swientek, K.; Teodorescu, E.; Terlecki, P.; Wierba, W.; Wojton, T.; Yamaguchi, S.; Yamamoto, H.; Zawiejski, L.; Zgura, I.S.; Zhemchugov, A.
2015-01-01
Detector-plane prototypes of the very forward calorimetry of a future detector at an $e^+e^-$ collider have been built and their performance was measured in an electron beam. The detector plane comprises silicon or GaAs pad sensors, dedicated front-end and ADC ASICs, and an FPGA for data concentration. Measurements of the signal-to-noise ratio for different feedback schemes and the response as a function of the position of the sensor are presented. A deconvolution method is successfully applied, and a comparison of the measured shower shape as a function of the absorber depth with a Monte-Carlo simulation is given.
Pan, Minqiang; Zhong, Yujian
2018-01-01
Porous structures can effectively enhance heat transfer efficiency. A micro vaporizer using oriented linear cutting copper fiber sintered felt is proposed in this work. Multiple long cutting copper fibers are first fabricated with a multi-tooth tool and then sintered together in parallel to form uniform-thickness metal fiber sintered felts that provide a characteristic of oriented microchannels. The temperature rise response and thermal conversion efficiency are experimentally investigated to evaluate the influences of porosity, surface structure, feed flow rate and input power on the evaporation characteristics. It is indicated that the temperature rise response of water is mainly affected by input power and feed flow rate. High input power and low feed flow rate present a better temperature rise response of water. Porosity, rather than surface structure, plays an important role in the temperature rise response of water at relatively high input power. The thermal conversion efficiency is dominated by the input power and surface structure. The oriented linear cutting copper fiber sintered felts of three porosities show better thermal conversion efficiency than the oriented linear copper wire sintered felt when the input power is less than 115 W. All the sintered felts have almost the same thermal conversion performance at high input power.
Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O
2016-06-01
The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h-1 on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p < 0.05) were found for the remaining predictors, with r > 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model composed of a single PTV accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
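The allometric adjustment can be sketched on synthetic data as follows. The exponents 0.72 and 0.60 come from the abstract; all data values and the ground-truth coefficients are invented purely to exercise the model form:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 18                                   # eighteen runners, as in the study
ptv = rng.uniform(16, 21, n)             # peak treadmill velocity, km/h (synthetic)
re = rng.uniform(180, 220, n)            # running economy, ml/kg/km (synthetic)
# Invented ground-truth relation plus noise: faster PTV shortens the 10 km time.
time_10k = 60 - 6.0 * ptv ** 0.72 + 1.0 * re ** 0.60 + rng.normal(0, 0.5, n)

# Allometrically adjusted predictors, as in the abstract (PTV^0.72, RE^0.60).
X = np.column_stack([np.ones(n), ptv ** 0.72, re ** 0.60])
coef, *_ = np.linalg.lstsq(X, time_10k, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((time_10k - pred) ** 2) / np.sum((time_10k - time_10k.mean()) ** 2)
```

The fitted coefficients recover the simulated signs (negative for PTV^0.72), and `r2` plays the role of the explained-variance figures quoted in the abstract.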
International Nuclear Information System (INIS)
Jornet, N.; Ribas, M.; Eudaldo, T.; Carrasco, P.
2001-01-01
In vivo dosimetry by means of diode detectors has been used routinely in our hospital since 1996 to guarantee the dose administered to patients undergoing radiotherapy treatment. The aim of this work is to present how in vivo dosimetry was implemented in our centre and which kinds of errors have been discovered and corrected. Before implementation it has to be clear which kinds of errors are to be traced, what the tolerance and action levels are, who will perform the measurements and who will evaluate them. Once all these things are clear, the first step is to choose the most appropriate type of diode and to calibrate it. The lower the tolerance level, the more accurate the calibration has to be. At this point, the training and motivation of the people involved is very important for succeeding in implementing routine in vivo dosimetry. Choosing one treatment unit and one simple, frequent treatment technique is a good way of starting implementation; we started with prostate treatments. In vivo entrance and exit doses were measured and the dose to the ICRU point was calculated. Nowadays in vivo dosimetry is performed in the second session of all treatments (X-rays and electrons). (author)
de Bruin, Anique B H; Smits, Niels; Rikers, Remy M J P; Schmidt, Henk G
2008-11-01
In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of the Dutch national chess training, were analysed since they had started playing chess seriously. The results revealed that deliberate practice (i.e. serious chess study alone and serious chess play) strongly contributed to chess performance. The influence of deliberate practice was not only observable in current performance, but also over chess players' careers. Moreover, although the drop-outs' chess ratings developed more slowly over time, both the persistent and drop-out chess players benefited to the same extent from investments in deliberate practice. Finally, the effect of gender on chess performance proved to be much smaller than the effect of deliberate practice. This study provides longitudinal support for the monotonic benefits assumption of deliberate practice, by showing that over chess players' careers, deliberate practice has a significant effect on performance, and to the same extent for chess players of different ultimate performance levels. The results of this study are not in line with critique raised against the deliberate practice theory that the factors deliberate practice and talent could be confounded.
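The longitudinal analysis above can be sketched with a fixed-intercept-per-player approximation of the random-intercept mixed model, on synthetic data (all numbers invented; the study itself uses a proper linear mixed models analysis):

```python
import numpy as np

rng = np.random.default_rng(1)
players, years = 6, 8
practice = rng.uniform(0, 20, (players, years)).cumsum(axis=1)  # cumulative practice (synthetic)
ability = rng.normal(0, 50, players)                            # player-specific baseline
rating = 1200 + ability[:, None] + 8.0 * practice + rng.normal(0, 20, (players, years))

# One dummy intercept per player plus a common deliberate-practice slope:
# a fixed-effects stand-in for the random-intercept mixed model.
n = players * years
X = np.zeros((n, players + 1))
for p in range(players):
    X[p * years:(p + 1) * years, p] = 1.0
X[:, -1] = practice.ravel()
beta, *_ = np.linalg.lstsq(X, rating.ravel(), rcond=None)
practice_slope = beta[-1]   # recovers the simulated effect of practice on rating
```

Because each player gets their own intercept, differences in baseline ability (or talent) are absorbed, and the common slope isolates the within-player effect of accumulated practice, which is the monotonic-benefits question the study tests.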
He, Xin; Frey, Eric C.
2007-03-01
Binary ROC analysis has solid decision-theoretic foundations and a close relationship to linear discriminant analysis (LDA). In particular, for the case of Gaussian equal covariance input data, the area under the ROC curve (AUC) value has a direct relationship to the Hotelling trace. Many attempts have been made to extend binary classification methods to multi-class. For example, Fukunaga extended binary LDA to obtain multi-class LDA, which uses the multi-class Hotelling trace as a figure-of-merit, and we have previously developed a three-class ROC analysis method. This work explores the relationship between conventional multi-class LDA and three-class ROC analysis. First, we developed a linear observer, the three-class Hotelling observer (3-HO). For Gaussian equal covariance data, the 3-HO provides equivalent performance to the three-class ideal observer and, under less strict conditions, maximizes the signal-to-noise ratio for classification of all pairs of the three classes simultaneously. The 3-HO templates are not the eigenvectors obtained from multi-class LDA. Second, we show that the three-class Hotelling trace, which is the figure-of-merit in the conventional three-class extension of LDA, has significant limitations. Third, we demonstrate that, under certain conditions, there is a linear relationship between the eigenvectors obtained from multi-class LDA and 3-HO templates. We conclude that the 3-HO based on decision theory has advantages both in its decision-theoretic background and in the usefulness of its figure-of-merit. Additionally, there exists the possibility of interpreting the two linear features extracted by the conventional extension of LDA from a decision-theoretic point of view.
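The binary Hotelling observer that the three-class 3-HO generalizes can be sketched as follows. This is the standard construction on simulated Gaussian equal-covariance data, not the authors' code; the class means are made up:

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n = 4, 2000
mean0 = np.zeros(dim)
mean1 = np.array([1.0, 0.5, 0.0, -0.5])      # class separation (invented values)
cov = np.eye(dim)                            # equal covariance, the Gaussian case above
x0 = rng.multivariate_normal(mean0, cov, n)
x1 = rng.multivariate_normal(mean1, cov, n)

# Binary Hotelling template w = S^(-1) (m1 - m0); the observer computes t = w.x.
s_pooled = 0.5 * (np.cov(x0.T) + np.cov(x1.T))
w = np.linalg.solve(s_pooled, x1.mean(axis=0) - x0.mean(axis=0))
t0, t1 = x0 @ w, x1 @ w

# Separability of the scalar test statistic; true value here is sqrt(1.5) ~ 1.22.
snr = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t0.var() + t1.var()))
```

For Gaussian equal-covariance data this scalar SNR determines the AUC, which is the binary link between the Hotelling trace and ROC analysis that the paper extends to three classes.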
Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.
Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel
2013-08-01
Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and high scalability to run on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive.
Directory of Open Access Journals (Sweden)
John Stephen Yap
2007-06-01
Whether and how the thermal reaction norm is under genetic control is fundamental to understanding the mechanistic basis of adaptation to novel thermal environments. However, the genetic study of the thermal reaction norm is difficult because it is often expressed as a continuous function or curve. Here we derive a statistical model for dissecting thermal performance curves into individual quantitative trait loci (QTL) with the aid of a genetic linkage map. The model is constructed within the maximum likelihood context and implemented with the EM algorithm. It integrates the biological principle of responses to temperature into a framework for genetic mapping through rigorous mathematical functions established to describe the pattern and shape of thermal reaction norms. The biological advantages of the model lie in the decomposition of the genetic causes for the thermal reaction norm into its biologically interpretable modes, such as hotter-colder, faster-slower and generalist-specialist, as well as the formulation of a series of hypotheses at the interface between genetic actions/interactions and temperature-dependent sensitivity. The model is also meritorious in statistics because the precision of parameter estimation and the power of QTL detection can be increased by modeling the mean-covariance structure with a small set of parameters. The results from simulation studies suggest that the model displays favorable statistical properties and can be robust in practical genetic applications. The model provides a conceptual platform for testing many ecologically relevant hypotheses regarding organismic adaptation within the Eco-Devo paradigm.
Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.
Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik
2014-06-16
Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.
Performance analysis of a compact and low-cost mapping-grade mobile laser scanning system
Julge, Kalev; Vajakas, Toivo; Ellmann, Artu
2017-10-01
The performance of a low-cost, self-contained, compact, and easy to deploy mapping-grade mobile laser scanning (MLS) system, which is composed of a light detection and ranging sensor Velodyne VLP-16 and a dual antenna global navigation satellite system/inertial navigation system SBG Systems Ellipse-D, is analyzed. The field tests were carried out in car-mounted and backpack modes for surveying road engineering structures (such as roads, parking lots, underpasses, and tunnels) and coastal erosion zones, respectively. The impact of applied calculation principles on trajectory postprocessing, direct georeferencing, and the theoretical accuracy of the system is analyzed. A calibration method, based on Bound Optimization BY Quadratic Approximation, for finding the boresight angles of an MLS system is proposed. The resulting MLS point clouds are compared with high-accuracy static terrestrial laser scanning data and survey-grade MLS data from a commercially manufactured MLS system. The vertical, horizontal, and relative accuracy are assessed: the root-mean-square error (RMSE) values were determined to be 8, 15, and 3 cm, respectively. Thus, the achieved mapping-grade accuracy demonstrates that this relatively compact and inexpensive self-assembled MLS can be successfully used for surveying the geometry and deformations of terrain, buildings, roads, and other engineering structures.
Khan, Iftekhar; Morris, Stephen
2014-11-12
The performance of the Beta Binomial (BB) model is compared with several existing models for mapping the EORTC QLQ-C30 (QLQ-C30) onto the EQ-5D-3L using data from lung cancer trials. Data from two separate non-small cell lung cancer clinical trials (TOPICAL and SOCCAR) are used to develop and validate the BB model. Comparisons with Linear, TOBIT, Quantile, Quadratic and CLAD models are carried out. The mean prediction error, R², proportion predicted outside the valid range, clinical interpretation of coefficients, model fit and estimation of Quality Adjusted Life Years (QALYs) are reported and compared. Monte-Carlo simulation is also used. The Beta-Binomial regression model performed 'best' among all models. For the TOPICAL and SOCCAR trials, respectively, the residual mean square error (RMSE) was 0.09 and 0.11; R² was 0.75 and 0.71; observed vs. predicted means were 0.612 vs. 0.608 and 0.750 vs. 0.749. Mean differences in QALYs (observed vs. predicted) were 0.051 vs. 0.053 and 0.164 vs. 0.162 for TOPICAL and SOCCAR, respectively. Models tested on independent data show that the simulated 95% confidence interval from the BB model contains the observed mean more often (77% and 59% for TOPICAL and SOCCAR, respectively) compared to the other models. All algorithms over-predict at poorer health states, but the BB model was relatively better, particularly for the SOCCAR data. The BB model may offer superior predictive properties amongst the mapping algorithms considered and may be more useful when predicting EQ-5D-3L at poorer health states. We recommend the algorithm derived from the TOPICAL data due to its better predictive properties and less uncertainty.
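The beta-binomial likelihood underlying the BB model can be sketched as follows with toy data and a coarse grid search (purely illustrative; the paper's actual regression links QLQ-C30 covariates to the BB parameters rather than fitting a single unconditional distribution):

```python
from math import lgamma, exp

def betabinom_logpmf(k, m, a, b):
    # log P(K=k) for a beta-binomial with m trials and shape parameters a, b.
    return (lgamma(m + 1) - lgamma(k + 1) - lgamma(m - k + 1)
            + lgamma(k + a) + lgamma(m - k + b) - lgamma(m + a + b)
            + lgamma(a + b) - lgamma(a) - lgamma(b))

# Toy "mapping": rescale EQ-5D-like utilities to integer scores out of m,
# then fit (a, b) by a coarse grid search on the joint log-likelihood.
m = 10
utilities = [0.8, 0.7, 0.9, 0.6, 0.8, 1.0, 0.5, 0.7]   # invented values
scores = [round(u * m) for u in utilities]

best = max(((a / 5, b / 5) for a in range(1, 51) for b in range(1, 51)),
           key=lambda ab: sum(betabinom_logpmf(k, m, *ab) for k in scores))
mean_fit = best[0] / (best[0] + best[1])   # fitted mean utility a/(a+b)
```

Because the beta-binomial is bounded on 0..m, predictions can never leave the valid utility range, which is one of the properties the comparison above credits to the BB model.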
PERFORMANCE IMPROVEMENT OF IDMA SCHEME USING CHAOTIC MAP INTERLEAVERS FOR FUTURE RADIO COMMUNICATION
Directory of Open Access Journals (Sweden)
Aasheesh Shukla
2017-06-01
In this paper, chaos-based interleavers are proposed for the performance improvement of Interleave Division Multiple Access (IDMA) for future radio communication (FRC) requirements. IDMA can be viewed as a modified case of direct-sequence code division multiple access (DS-CDMA) with the same spreading sequences and user-specific interleavers for distinguishing the users in a multi-user environment. In IDMA systems, the role of the interleaver is pre-eminent, and an efficient interleaver contributes to optimizing system performance. The random interleaver is the popular baseline in IDMA, and the performance of chaos-based interleavers is compared against it. Simulation results authenticate the performance of chaos-based IDMA. Further, the proposed chaotic map interleavers have lower computational complexity and are more bandwidth-efficient than the existing prevailing interleaver algorithms in the domain. The IDMA system model uses BPSK modulation and a repetition coder with a code rate of ½. The system is simulated in MATLAB, and results show the BER superiority of chaotic interleaver based IDMA without the need for extra storage resources and with less computational complexity.
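One common way to build a chaotic-map interleaver is to iterate the map and use the ranking of the orbit as the permutation. The abstract does not specify the authors' maps or parameters, so the logistic map and the seed/parameter values below are assumptions; each user would get a distinct permutation from a distinct seed:

```python
def logistic_map_interleaver(n, x0=0.37, r=3.99):
    # Iterate the logistic map x <- r*x*(1-x), then rank the chaotic values:
    # the argsort of the orbit is a length-n permutation, reproducible from
    # (x0, r) alone, so no permutation table needs to be stored.
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return sorted(range(n), key=xs.__getitem__)

def interleave(bits, perm):
    return [bits[p] for p in perm]

def deinterleave(bits, perm):
    out = [0] * len(perm)
    for i, p in enumerate(perm):
        out[p] = bits[i]
    return out
```

The storage advantage claimed above follows from this construction: transmitter and receiver only share the seed, not the whole permutation.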
Directory of Open Access Journals (Sweden)
Hartini Sri
2018-01-01
Lean manufacturing tools do not consider environmental and societal benefits. The conventional value stream mapping (VSM) methodology examines the economics of a manufacturing line, mostly in regard to time (cycle time, lead time, change-out time, etc.). Incorporating the capability to capture environmental and societal performance visually through VSMs will increase its usefulness as a tool for assessing manufacturing operations from a sustainability perspective. A number of studies have addressed the extension of VSM to incorporate additional criteria. A vast majority of these efforts have focused on adding energy-related metrics to VSMs, while several other studies refer to 'sustainable' VSM by including environmental performance in conventional VSMs. This research has developed a VSM method integrated with environmental and social metrics for ensuring sustainable manufacture. The proposed technique is capable of visualizing and evaluating manufacturing process performance from a sustainability viewpoint. The capability of the proposed technique has been tested in an application study at a furniture company. The study provides insights for practitioners to visualize process performance in economic, environmental and social metrics.
Maisonny, R.; Ribière, M.; Toury, M.; Plewa, J. M.; Caron, M.; Auriel, G.; d'Almeida, T.
2016-12-01
The performance of a 1 MV pulsed high-power linear transformer driver accelerator was extensively investigated based on a numerical approach which utilizes both electromagnetic and Monte Carlo simulations. Particle-in-cell calculations were employed to examine the beam dynamics throughout the magnetically insulated transmission line which governs the coupling between the generator and the electron diode. Based on the information provided by the study of the beam dynamics, and using Monte Carlo methods, the main properties of the resulting x radiation were predicted. Good agreement was found between these simulations and experimental results. This work provides a detailed understanding of the mechanisms affecting the performance of this type of high-current, high-voltage pulsed accelerator, which is very promising for a growing number of applications.
International Nuclear Information System (INIS)
Baig, Hasan; Sarmah, Nabin; Chemisana, Daniel; Rosell, Joan; Mallick, Tapas K.
2014-01-01
In the present study, we model and analyse the performance of a dielectric based linear concentrating photovoltaic system using ray tracing and finite element methods. The results obtained are compared with the experiments. The system under study is a linear asymmetric CPC (Compound Parabolic Concentrator) designed to operate under extreme incident angles of 0° and 55° and have a geometrical concentration ratio of 2.8×. Initial experiments showed a maximum PR (power ratio) of 2.2 compared to a non concentrating counterpart. An improvement to this has been proposed and verified by adding a reflective film along the edges of the concentrator to capture the escaping rays and minimise optical losses. The addition of the reflective film changes the incoming distribution on the solar cell. Results show an increase of 16% in the average power output while using this reflective film. On including the thermal effects it was found that the overall benefit changes to about 6% while using a reflective film. Additionally, the effects of the non-uniformity of the incoming radiation are also analysed and reported for both the cases. It is found that adding the reflective film drops the maximum power at the output by only 0.5% due to the effect of non-uniformity. - Highlights: • Optical, thermal and electrical analysis of a concentrating photovoltaic system. • Improvement in performance by use of reflective film along the edge. • Experimental validation of results. • Effects of non-uniform illumination on the performance of the CPV system. • Impact of temperature profile on the overall performance
DOES GENDER EQUALITY LEAD TO BETTER-PERFORMING ECONOMIES? A BAYESIAN CAUSAL MAP APPROACH
Directory of Open Access Journals (Sweden)
Yelda YÜCEL
2017-01-01
This study explores the existence of relationships between gender inequalities – represented by the components of the World Economic Forum (WEF) Global Gender Gap Index – and major macroeconomic indicators. The relationships within gender inequalities in education, the labour market, health and the political arena, and between gender inequalities and gross macroeconomic aggregates, were modelled with the Bayesian Causal Map, an effective tool used to analyze cause-effect relations and conditional dependencies between variables. A data set of 128 countries over the period 2007–2011 is used. Findings reveal that some inequalities have high levels of interaction with each other. In addition, eradicating gender inequalities is found to be associated with better economic performance, mainly in the form of higher gross domestic product growth, investment, and competitiveness.
Realmuto, Vincent J.; Berk, Alexander
2016-11-01
We describe the development of Plume Tracker, an interactive toolkit for the analysis of multispectral thermal infrared observations of volcanic plumes and clouds. Plume Tracker is the successor to MAP_SO2, and together these flexible and comprehensive tools have enabled investigators to map sulfur dioxide (SO2) emissions from a number of volcanoes with TIR data from a variety of airborne and satellite instruments. Our objective for the development of Plume Tracker was to improve the computational performance of the retrieval procedures while retaining the accuracy of the retrievals. We have achieved a 300 × improvement in the benchmark performance of the retrieval procedures through the introduction of innovative data binning and signal reconstruction strategies, and improved the accuracy of the retrievals with a new method for evaluating the misfit between model and observed radiance spectra. We evaluated the accuracy of Plume Tracker retrievals with case studies based on MODIS and AIRS data acquired over Sarychev Peak Volcano, and ASTER data acquired over Kilauea and Turrialba Volcanoes. In the Sarychev Peak study, the AIRS-based estimate of total SO2 mass was 40% lower than the MODIS-based estimate. This result was consistent with a 45% reduction in the AIRS-based estimate of plume area relative to the corresponding MODIS-based estimate. In addition, we found that our AIRS-based estimate agreed with an independent estimate, based on a competing retrieval technique, within a margin of ± 20%. In the Kilauea study, the ASTER-based concentration estimates from 21 May 2012 were within ± 50% of concurrent ground-level concentration measurements. In the Turrialba study, the ASTER-based concentration estimates on 21 January 2012 were in exact agreement with SO2 concentrations measured at plume altitude on 1 February 2012.
Performance of 3DOSEM and MAP algorithms for reconstructing low count SPECT acquisitions
Energy Technology Data Exchange (ETDEWEB)
Grootjans, Willem [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Leiden Univ. Medical Center (Netherlands). Dept. of Radiology; Meeuwis, Antoi P.W.; Gotthardt, Martin; Visser, Eric P. [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Slump, Cornelis H. [Univ. Twente, Enschede (Netherlands). MIRA Inst. for Biomedical Technology and Technical Medicine; Geus-Oei, Lioe-Fee de [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Univ. Twente, Enschede (Netherlands). MIRA Inst. for Biomedical Technology and Technical Medicine; Leiden Univ. Medical Center (Netherlands). Dept. of Radiology
2016-07-01
Low count single photon emission computed tomography (SPECT) is becoming more important in view of whole body SPECT and reduction of radiation dose. In this study, we investigated the performance of several 3D ordered subset expectation maximization (3DOSEM) and maximum a posteriori (MAP) algorithms for reconstructing low count SPECT images. Phantom experiments were conducted using the National Electrical Manufacturers Association (NEMA) NU2 image quality (IQ) phantom. The background compartment of the phantom was filled with varying concentrations of pertechnetate and indium chloride, simulating various clinical imaging conditions. Images were acquired using a hybrid SPECT/CT scanner and reconstructed with 3DOSEM and MAP reconstruction algorithms implemented in Siemens Syngo MI.SPECT (Flash3D) and Hermes Hybrid Recon Oncology (Hybrid Recon 3DOSEM and MAP). Image analysis was performed by calculating the contrast recovery coefficient (CRC), percentage background variability (N%), and contrast-to-noise ratio (CNR), defined as the ratio between CRC and N%. Furthermore, image distortion was characterized by calculating the aspect ratio (AR) of ellipses fitted to the hot spheres. Additionally, the performance of these algorithms in reconstructing clinical images was investigated. Images reconstructed with 3DOSEM algorithms demonstrated superior image quality in terms of contrast and resolution recovery when compared to images reconstructed with filtered-back-projection (FBP), OSEM and 2DOSEM. However, the occurrence of correlated noise patterns and image distortions significantly deteriorated the quality of 3DOSEM reconstructed images. The mean AR for the 37, 28, 22, and 17 mm spheres was 1.3, 1.3, 1.6, and 1.7, respectively. The mean N% increased in high and low count Flash3D and Hybrid Recon 3DOSEM from 5.9% and 4.0% to 11.1% and 9.0%, respectively. Similarly, the mean CNR decreased in high and low count Flash3D and Hybrid Recon 3DOSEM from 8.7 and 8.8 to 3.6 and 4
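The contrast-to-noise figure used in this study is simple to reproduce. A minimal sketch (function name and values are illustrative, not from the paper):

```python
def contrast_to_noise_ratio(crc_percent, n_percent):
    """CNR as defined in the study: contrast recovery coefficient (CRC)
    divided by percentage background variability (N%)."""
    if n_percent <= 0:
        raise ValueError("background variability must be positive")
    return crc_percent / n_percent

# A high CRC is worth little if the background noise grows with it:
# doubling N% at constant CRC halves the CNR.
```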
Use of personal computers in performing a linear modal analysis of a large finite-element model
International Nuclear Information System (INIS)
Wagenblast, G.R.
1991-01-01
This paper presents the use of personal computers in performing a dynamic frequency analysis of a large (2,801 degrees of freedom) finite-element model. Large model linear time history dynamic evaluations of safety related structures were previously restricted to mainframe computers using direct integration analysis methods. This restriction was a result of the limited memory and speed of personal computers. With the advances in memory capacity and speed of personal computers, large finite-element problems can now be solved in the office in a timely and cost effective manner. Presented in three sections, this paper describes the procedure used to perform the dynamic frequency analysis of the large (2,801 degrees of freedom) finite-element model on a personal computer. Section 2.0 describes the structure and the finite-element model that was developed to represent the structure for use in the dynamic evaluation. Section 3.0 addresses the hardware and software used to perform the evaluation and the optimization of the hardware and software operating configuration to minimize the time required to perform the analysis. Section 4.0 explains the analysis techniques used to reduce the problem to a size compatible with the hardware and software memory capacity and configuration.
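At its core, the modal analysis described above solves the generalized eigenproblem det(K − ω²M) = 0 for the natural frequencies. A toy sketch for a two-degree-of-freedom spring-mass chain (nothing like the 2,801-DOF model, but the same mathematics) solves the characteristic polynomial directly:

```python
import math

def two_dof_frequencies(m1, m2, k1, k2):
    """Natural frequencies (rad/s) of a 2-DOF chain ground--k1--m1--k2--m2.
    Expands det(K - w^2 M) = 0 into a quadratic in lambda = w^2:
    (m1*m2)*lam^2 - ((k1+k2)*m2 + k2*m1)*lam + k1*k2 = 0."""
    a = m1 * m2
    b = -((k1 + k2) * m2 + k2 * m1)
    c = k1 * k2
    disc = math.sqrt(b * b - 4.0 * a * c)
    lam1 = (-b - disc) / (2.0 * a)
    lam2 = (-b + disc) / (2.0 * a)
    return math.sqrt(lam1), math.sqrt(lam2)
```

For unit masses and stiffnesses the two frequencies come out as the golden ratio and its reciprocal, a classic textbook result that makes the sketch easy to check by hand.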
Optics Design and Performance of an Ultra-Low Emittance Damping Ring for the Compact Linear Collider
Korostelev, M S
2006-01-01
A high-energy (0.5-3.0 TeV centre of mass) electron-positron Compact Linear Collider (CLIC) is being studied at CERN as a new physics facility. The design study has been optimized for 3 TeV centre-of-mass energy. Intense bunches injected into the main linac must have unprecedentedly small emittances to achieve the design luminosity of 10^35 cm^-2 s^-1 required for the physics experiments. The positron and electron bunch trains will be provided by the CLIC injection complex. This thesis describes the optics design and performance of a positron damping ring developed for producing such an ultra-low emittance beam. The linear optics of the CLIC damping ring is optimized by taking into account the combined action of radiation damping, quantum excitation and intrabeam scattering. The required beam emittance is obtained by using a TME (Theoretical Minimum Emittance) lattice with compact arcs and short period wiggler magnets located in dispersion-free regions. The damping ring beam energy is chosen as 2.42 GeV. The lattice featu...
Poggiolini, P; Bosco, G; Carena, A; Curri, V; Forghieri, F
2010-05-24
Coherent detection (CoD) makes it possible to fully exploit the four-dimensional (4D) signal space consisting of the in-phase and quadrature components of the two fiber polarizations. A well-known and successful format exploiting this 4D space is polarization-multiplexed QPSK (PM-QPSK). Recently, new signal constellations specifically designed and optimized in 4D space have been proposed, among which is polarization-switched QPSK (PS-QPSK), consisting of an 8-point constellation at the vertices of a 4D polychoron called the hexadecachoron. We call it HEXA because of its geometrical features and to avoid acronym mix-up with PM-QPSK, as well as with other similar acronyms. In this paper we investigate the performance of HEXA in direct comparison with PM-QPSK, addressing non-linear propagation over realistic links made up of 20 spans of either standard single mode fiber (SSMF) or non-zero dispersion-shifted fiber (NZDSF). We show that HEXA not only confirms its theoretical sensitivity advantage over PM-QPSK in back-to-back, but also shows a greater resilience to non-linear effects, allowing for substantially increased span loss margins. As a consequence, HEXA appears as an interesting option for dual-format transceivers capable of switching on-the-fly between PM-QPSK and HEXA when channel propagation degrades. It also appears as a possible direct competitor of PM-QPSK, especially over NZDSF fiber and uncompensated links.
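The geometric reason for PS-QPSK's back-to-back sensitivity advantage can be checked numerically: at equal symbol energy, the 8 vertices of the hexadecachoron (16-cell) are farther apart than the 16 PM-QPSK points. A minimal sketch (the scaling below is chosen for unit symbol energy; it is illustrative, not the paper's simulation setup):

```python
import itertools
import math

def min_distance(points):
    """Smallest pairwise Euclidean distance in a constellation."""
    return min(math.dist(p, q)
               for p, q in itertools.combinations(points, 2))

# PS-QPSK / "HEXA": one non-zero coordinate per symbol, i.e. the
# 8 vertices (+-1, 0, 0, 0), ..., (0, 0, 0, +-1) of the 16-cell.
ps_qpsk = [tuple(s if i == k else 0.0 for i in range(4))
           for k in range(4) for s in (1.0, -1.0)]

# PM-QPSK: 16 points (+-1/2, +-1/2, +-1/2, +-1/2), also unit energy.
pm_qpsk = list(itertools.product((0.5, -0.5), repeat=4))
```

With both constellations normalized to unit symbol energy, the minimum distance is sqrt(2) for PS-QPSK versus 1 for PM-QPSK, which is the geometric root of the asymptotic sensitivity gap (PM-QPSK compensates by carrying one more bit per symbol).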
McMonagle, Gerard
2006-01-01
The CERN CTF3 facility is being used to test and demonstrate key technical issues for the CLIC (Compact Linear Collider) study. Pulsed RF power sources are essential elements in this test facility. Klystrons at S-band (2998.55 MHz), in conjunction with pulse compression systems, are used to power the Drive Beam Accelerator (DBA) to achieve an electron beam energy of 150 MeV. The L-band RF system includes broadband Travelling Wave Tubes (TWTs) for beam bunching with 'phase coded' sub-pulses in the injector and a narrow band high power L-band klystron powering the transverse 1.5 GHz RF deflector in the Delay Loop immediately after the DBA. This paper describes these different systems and discusses their operational performance.
International Nuclear Information System (INIS)
Nashine, B.K.; Rao, B.P.C.
2014-01-01
Highlights: • Derivation of applicable design equations. • Design of an annular induction pump based on these equations. • Testing of the designed pump in a sodium test facility. • Performance evaluation of the designed pump. - Abstract: Annular linear induction pumps (ALIPs) are used for pumping electrically conducting liquid metals. These pumps find wide application in fast reactors, since the coolant in fast reactors is liquid sodium, which is a good conductor of electricity. The design of these pumps is usually done using an equivalent circuit approach in combination with numerical simulation models. The equivalent circuit of an ALIP is similar to that of an induction motor. This paper presents the derivation of the equivalent circuit parameters from first principles. Sodium testing of the ALIP designed using the equivalent circuit approach is also described, and experimental results of the testing are presented. A comparison between experimental and analytical calculations has also been carried out, and some of the reasons for the variation are listed in this paper
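The analogy with an induction motor starts from the travelling magnetic field: the fluid is dragged along at less than the field's synchronous speed, and the difference is the slip. A hedged sketch of these standard linear-induction-machine relations (not the paper's derivation; parameter values are illustrative):

```python
def synchronous_speed(pole_pitch_m, supply_freq_hz):
    """Travelling-field speed of a linear induction machine: v_s = 2*tau*f."""
    return 2.0 * pole_pitch_m * supply_freq_hz

def slip(pole_pitch_m, supply_freq_hz, sodium_velocity_ms):
    """Slip of the pumped sodium relative to the travelling field,
    the quantity that plays the role of rotor slip in the motor analogy."""
    v_s = synchronous_speed(pole_pitch_m, supply_freq_hz)
    return (v_s - sodium_velocity_ms) / v_s
```

For a 0.1 m pole pitch at 50 Hz the field travels at 10 m/s, so sodium moving at 8 m/s corresponds to a slip of 0.2, exactly as for an induction motor running below synchronous speed.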
Directory of Open Access Journals (Sweden)
Murray L. Ireland
2015-06-01
Full Text Available Multirotor is the umbrella term for the family of unmanned aircraft that includes the quadrotor, hexarotor and other vertical take-off and landing (VTOL) aircraft employing multiple main rotors for lift and control. Development and testing of novel multirotor designs has been aided by the proliferation of 3D printing and inexpensive flight controllers and components. Different multirotor configurations exhibit specific strengths, while presenting unique challenges with regard to design and control. This article highlights the primary differences between three multirotor platforms: a quadrotor; a fully-actuated hexarotor; and an octorotor. Each platform is modelled and then controlled using non-linear dynamic inversion. The differences in dynamics, control and performance are then discussed.
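Non-linear dynamic inversion, the control law used for all three platforms, cancels the known dynamics and substitutes a commanded rate. For a scalar system xdot = f(x) + g(x)*u the idea reduces to one line; the plant below is purely illustrative, not the multirotor model from the article:

```python
def ndi_control(x, xdot_desired, f, g):
    """Dynamic inversion: choose u so that f(x) + g(x)*u = xdot_desired.
    Assumes g(x) != 0 (the control effectiveness is invertible)."""
    return (xdot_desired - f(x)) / g(x)

# Illustrative plant: xdot = -x**2 + 2*u
f = lambda x: -x ** 2
g = lambda x: 2.0

u = ndi_control(3.0, 1.0, f, g)       # u = (1 - (-9)) / 2 = 5
achieved_xdot = f(3.0) + g(3.0) * u   # plant now follows the command
```

The appeal for multirotors is that one outer-loop design serves different airframes: only f and g change between the quadrotor, hexarotor and octorotor models.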
Numerical investigation and performance characteristic mapping of an Archimedean screw hydroturbine
Schleicher, W. Chris
Computational Fluid Dynamics (CFD) is a crucial tool in the design and analysis of hydraulic machinery, especially in the design of a micro hydro turbine. The micro hydro turbine in question is for a low head (less than 60 meters), low volumetric flow rate (0.005 m3/s to 0.5 m3/s) application with rotation rates varying from 200 RPM to 1500 RPM. The design of the runner geometry is discussed, specifically a non-uniform Archimedean spiral with an outer diameter of 6 inches and a length of 19.5 inches. The transient simulation method, making use of a frame-of-reference change and a rotating mesh between time-steps, is explained, as are the corresponding boundary conditions. Both simulation methods are compared and are determined to produce similar results. The rotating frame of reference method was determined to be the most suitable for mapping performance characteristics such as required head, torque, power, and efficiency. Results of simulations for a non-uniform Archimedean spiral are then presented. First, a spatial and temporal convergence study is conducted to ensure that the results are independent of time-step and mesh selection. Performance characteristics of a non-uniform pitched blade turbine are determined for a wide range of volumetric flow rates and rotation rates. The maximum efficiency of the turbine is calculated at around 72% for the design of the turbine blade considered in the present study.
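The efficiency figure quoted above is shaft power over hydraulic power. A minimal sketch of the bookkeeping behind such a performance map (the function name and the example operating point are illustrative, not taken from the thesis):

```python
import math

def hydraulic_efficiency(torque_nm, rpm, flow_m3s, head_m,
                         rho=1000.0, g=9.81):
    """Turbine efficiency: shaft power T*omega divided by the
    hydraulic power rho*g*Q*H available in the flow."""
    omega = rpm * 2.0 * math.pi / 60.0     # rad/s
    shaft_power = torque_nm * omega        # W
    hydraulic_power = rho * g * flow_m3s * head_m
    return shaft_power / hydraulic_power
```

Sweeping flow rate and RPM through such a function is exactly how a performance map of head, torque, power and efficiency is assembled from individual CFD operating points.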
Concept mapping improves academic performance in problem solving questions in biochemistry subject.
Baig, Mukhtiar; Tariq, Saba; Rehman, Rehana; Ali, Sobia; Gazzaz, Zohair J
2016-01-01
To assess the effectiveness of concept mapping (CM) on the academic performance of medical students in problem-solving as well as in declarative knowledge questions, and their perception regarding CM. The present analytical and questionnaire-based study was carried out at Bahria University Medical and Dental College (BUMDC), Karachi, Pakistan. In this analytical study, students were assessed with problem-solving questions (A-type MCQs) and declarative knowledge questions (short essay questions), and 50% of the questions were from the topics learned by CM. Students also filled in a 10-item, 3-point Likert scale questionnaire about their perception regarding the effectiveness of the CM approach, and two open-ended questions were also asked. There was a significant difference in the marks obtained in those problem-solving questions which were learned by CM as compared to those topics which were taught by traditional lectures (p < 0.05). CM improved academic performance in problem solving but not in declarative knowledge questions. Students' perception about the effectiveness of CM was overwhelmingly positive.
Mapping performance of the fishery industries innovation: A survey in the North Coast of Java
Yusuf, M.; Legowo, A. M.; Albaarri, A. N.; Darmanto, Y. S.; Agustini, T. W.; Setyastuti, A. I.
2018-01-01
This study aimed to map the performance indicators of fishery industry innovation, to be used as inputs for creating innovation strategies to win market competition, especially in the USA. A survey and in-depth interviews were conducted on 10 industries with shrimp, tuna and crab commodities, representing the Indonesian fishery industry exporting to the USA. The mapping of innovation performance indicators of the Indonesian fishery industry yielded 10 alternative strategies to win the market. The survey results indicate that 'regulation of catch and/or harvest of cultivation' is considered the weakest factor in developing innovation, with a score of 3.3, while international trade is considered the strongest factor, with a score of 5.0. An aggressive strategy that builds on the industry's internal strengths, while always watching for opportunities, allows the industry to seize the chance to win the market competition at the right time.
Directory of Open Access Journals (Sweden)
Afsaneh Mozaffari
2013-04-01
Full Text Available During the past few years, there have been extensive developments at Islamic Azad University, which have led to a reduction in managerial flexibility. Such organizations therefore concentrate on their strategic management through the use of balanced models such as the Balanced Scorecard (BSC), which considers different organizational perspectives, making a good description of organizational strategies and goals important. The strategy map is a primary tool for assessing performance across different organizational activities. In this paper, the performance evaluation system of the Islamic Azad University of Semnan is designed using the strategy map as a prominent part of the BSC.
International Nuclear Information System (INIS)
FOGWELL, T.W.; LAST, G.V.
2003-01-01
The estimation of flux of contaminants through the vadose zone to the groundwater under varying geologic, hydrologic, and chemical conditions is key to making technically credible and sound decisions regarding soil site characterization and remediation, single-shell tank retrieval, and waste site closures (DOE 2000). One of the principal needs identified in the science and technology roadmap (DOE 2000) is the need to improve the conceptual and numerical models that describe the location of contaminants today, and to provide the basis for forecasting future movement of contaminants on both site-specific and site-wide scales. The State of Knowledge (DOE 1999) and Preliminary Concepts documents describe the importance of geochemical processes on the transport of contaminants through the vadose zone. These processes have been identified in the international list of Features, Events, and Processes (FEPs) (NEA 2000) and included in the list of FEPs currently being developed for Hanford Site assessments (Soler et al. 2001). The current vision for Hanford site-wide cumulative risk assessments as performed using the System Assessment Capability (SAC) is to represent contaminant adsorption using the linear isotherm (empirical distribution coefficient, Kd) sorption model. Integration Project Expert Panel (PEP) comments indicate that work is required to adequately justify the applicability of the linear sorption model, and to identify and defend the range of Kd values that are adopted for assessments. The work plans developed for the Science and Technology (S and T) efforts, SAC, and the Core Projects must answer directly the question: "Is there a scientific basis for the application of the linear sorption isotherm model to the complex wastes of the Hanford Site?" This paper is intended to address these issues. The reason that well documented justification is required for using the linear sorption (Kd) model is that this approach is strictly empirical and is often
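In transport calculations, the practical consequence of the linear Kd model is a constant retardation factor that slows the contaminant relative to the pore water. A sketch of the standard relation R = 1 + (rho_b/theta)*Kd (the parameter values below are illustrative, not Hanford Site data):

```python
def retardation_factor(kd_ml_per_g, bulk_density_g_per_cm3, water_content):
    """Linear-isotherm retardation: R = 1 + (rho_b / theta) * Kd.
    A sorbing contaminant plume moves at the pore-water velocity
    divided by R; Kd = 0 (non-sorbing) gives R = 1."""
    return 1.0 + (bulk_density_g_per_cm3 / water_content) * kd_ml_per_g
```

The empirical nature of the approach is visible here: all site-specific geochemistry is collapsed into the single coefficient Kd, which is precisely why the range of adopted Kd values needs the justification discussed above.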
An Examination of the Effects of Argument Mapping on Students' Memory and Comprehension Performance
Dwyer, Christopher P.; Hogan, Michael J.; Stewart, Ian
2013-01-01
Argument mapping (AM) is a method of visually diagramming arguments to allow for easy comprehension of core statements and relations. A series of three experiments compared argument map reading and construction with hierarchical outlining, text summarisation, and text reading as learning methods by examining subsequent memory and comprehension…
Machine vision-based high-resolution weed mapping and patch-sprayer performance simulation
Tang, L.; Tian, L.F.; Steward, B.L.
1999-01-01
An experimental machine vision-based patch-sprayer was developed. This sprayer was primarily designed to do real-time weed density estimation and variable herbicide application rate control. However, the sprayer also had the capability to do high-resolution weed mapping if proper mapping techniques
Supporting Problem-Solving Performance Through the Construction of Knowledge Maps
Lee, Youngmin; Baylor, Amy L.; Nelson, David W.
2005-01-01
The purpose of this article is to provide five empirically-derived guidelines for knowledge map construction tools that facilitate problem solving. First, the combinational representation principle proposes that conceptual and corresponding procedural knowledge should be represented together (rather than separately) within the knowledge map.…
Directory of Open Access Journals (Sweden)
Sabrina Sicari
2017-01-01
Full Text Available Many solutions based on machine learning techniques have been proposed in the literature aimed at detecting and promptly counteracting various kinds of malicious attacks (data violation, clone, sybil, neglect, greed, and DoS attacks) which frequently affect Wireless Sensor Networks (WSNs). Besides recognizing the corrupted or violated information, the attackers themselves should also be identified, in order to activate the proper countermeasures for preserving the network's resources and to mitigate their malicious effects. To this end, techniques adopting Self-Organizing Maps (SOMs) for intrusion detection in WSNs have been shown to represent a valuable and effective solution to the problem. In this paper, the mechanism, namely Good Network (GoNe), which is based on SOMs and is able to assess the reliability of the sensor nodes, is compared with another relevant and similar work existing in the literature. Extensive performance simulations, in terms of node classification, attack identification, data accuracy, energy consumption, and signalling overhead, have been carried out in order to demonstrate the feasibility and efficiency of the proposed solution in the WSN field.
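A SOM like the one underlying GoNe is trained by repeatedly finding the best-matching unit for each input and pulling it, together with its grid neighbours, towards that input. One on-line step on a 1-D grid can be sketched as follows (a generic SOM update, not GoNe's actual configuration):

```python
import math

def som_step(weights, x, alpha, sigma):
    """One on-line SOM update on a 1-D grid of units.
    weights: list of weight vectors (lists), modified in place.
    alpha: learning rate; sigma: Gaussian neighbourhood width.
    Returns the index of the best-matching unit (BMU)."""
    bmu = min(range(len(weights)), key=lambda i: math.dist(weights[i], x))
    for i, w in enumerate(weights):
        # Neighbourhood factor decays with grid distance from the BMU.
        h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
        weights[i] = [wj + alpha * h * (xj - wj) for wj, xj in zip(w, x)]
    return bmu
```

After training, node behaviour vectors that map to the same region of the grid can be treated as one class, which is how a SOM turns raw traffic features into the node classifications evaluated in the simulations.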
Use of geological mapping tools to improve the hydraulic performance of SuDS.
Bockhorn, Britta; Klint, Knud Erik Strøyberg; Jensen, Marina Bergen; Møller, Ingelise
2015-01-01
Most cities in Denmark are situated on low permeable clay rich deposits. These sediments are of glacial origin and range among the most heterogeneous, with hydraulic conductivities spanning several orders of magnitude. This heterogeneity has obvious consequences for the sizing of sustainable urban drainage systems (SuDS). We have tested methods to reveal geological heterogeneity at field scale to identify the most suitable sites for the placement of infiltration elements and to minimize their required size. We assessed the geological heterogeneity of a clay till plain in Eastern Jutland, Denmark measuring the shallow subsurface resistivity with a geoelectrical multi-electrode system. To confirm the resistivity data we conducted a spear auger mapping. The exposed sediments ranged from clay tills over sandy clay tills to sandy tills and correspond well to the geoelectrical data. To verify the value of geological information for placement of infiltration elements we carried out a number of infiltration tests on geologically different areas across the field, and we observed infiltration rates two times higher in the sandy till area than in the clay till area, thus demonstrating that the hydraulic performance of SuDS can be increased considerably and oversizing avoided if field geological heterogeneity is revealed before placing SuDS.
elPrep: High-Performance Preparation of Sequence Alignment/Map Files for Variant Calling.
Directory of Open Access Journals (Sweden)
Charlotte Herzeel
Full Text Available elPrep is a high-performance tool for preparing sequence alignment/map files for variant calling in sequencing pipelines. It can be used as a replacement for SAMtools and Picard for preparation steps such as filtering, sorting, marking duplicates, reordering contigs, and so on, while producing identical results. What sets elPrep apart is its software architecture, which allows executing preparation pipelines by making only a single pass through the data, no matter how many preparation steps are used in the pipeline. elPrep is designed as a multithreaded application that runs entirely in memory, avoids repeated file I/O, and merges the computation of several preparation steps to significantly speed up the execution time. For example, for a preparation pipeline of five steps on a whole-exome BAM file (NA12878), we reduce the execution time from about 1:40 hours, when using a combination of SAMtools and Picard, to about 15 minutes when using elPrep, while utilising the same server resources, here 48 threads and 23 GB of RAM. For the same pipeline on whole-genome data (NA12878), elPrep reduces the runtime from 24 hours to less than 5 hours. As a typical clinical study may contain sequencing data for hundreds of patients, elPrep can remove several hundreds of hours of computing time, and thus substantially reduce analysis time and cost.
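The key architectural idea, merging all preparation steps into a single pass over the reads, can be imitated with chained generators: each step consumes and yields records lazily, so the data is streamed through every step at once. A toy sketch with hypothetical record fields (this is not elPrep's implementation, which is multithreaded and far more elaborate):

```python
def filter_unmapped(reads):
    """Step 1: drop reads that did not map."""
    for r in reads:
        if r.get("mapped", False):
            yield r

def mark_duplicates(reads):
    """Step 2: flag reads whose (chrom, pos) was already seen."""
    seen = set()
    for r in reads:
        key = (r["chrom"], r["pos"])
        yield dict(r, duplicate=key in seen)
        seen.add(key)

def run_pipeline(reads, *steps):
    """Compose steps so every record flows through all of them in
    one pass, in the spirit of elPrep's merged execution."""
    for step in steps:
        reads = step(reads)
    return list(reads)
```

Because the steps are fused, adding another step costs one more function call per record rather than another full traversal of the file, which is where the single-pass design gets its speedup.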
Pressure mapping and performance of the compression bandage/garment for venous leg ulcer treatment.
Ghosh, S; Mukhopadhyay, A; Sikka, M; Nagla, K S
2008-08-01
A study has been conducted on commercially available compression bandages with regard to their performance over time. Pressure mapping of these bandages has been done using a fabricated pressure-measuring device on a mannequin leg to see the effect on pressure of creep, fabric friction and angle of bandaging. The results show that the creep behavior, frictional behavior and the angle of bandaging have a significant effect on the pressure profile generated by the bandages during application. The regression analysis shows that surface friction restricts slippage in a multilayer system. The diameter of the limb and the amount of stretch given to the bandage during application also have a definite impact on the bandage pressure. In the case of compression garments, washing improves the pressure generated, but not to the level of a virgin garment. Comparing the two compression materials, i.e. bandage and garment, it is found that the higher percentage of elastomeric material and the much tighter construction of the garment provide better holding power and a more homogeneous pressure distribution.
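The reported dependence of pressure on limb diameter and applied stretch is what Laplace's law predicts for a tensioned wrap around a cylinder. A hedged sketch in SI units (the study measures pressure directly; this is only the textbook first-order model, and the function names are illustrative):

```python
def interface_pressure_pa(tension_n, n_layers, limb_radius_m, bandage_width_m):
    """Laplace's law estimate for a bandage on a cylindrical limb:
    P = n * T / (r * w). Higher stretch raises T, and a smaller
    limb radius raises P for the same tension."""
    return n_layers * tension_n / (limb_radius_m * bandage_width_m)

def pa_to_mmhg(p_pa):
    """Compression therapy pressures are usually quoted in mmHg."""
    return p_pa / 133.322
```

For example, two layers at 2 N tension on a 4 cm radius limb with a 10 cm wide bandage give 1000 Pa, about 7.5 mmHg, and halving the radius doubles that, which is why the same bandage reads higher at the ankle than at the calf.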
Regular Routes: Deep Mapping a Performative Counterpractice for the Daily Commute 1
Directory of Open Access Journals (Sweden)
Laura Bissell
2015-09-01
Full Text Available This article offers a textual “deep map” of a series of experimental commutes undertaken in the west of Scotland in 2014. Recent developments in the field of transport studies have reconceived travel time as a far richer cultural experience than in previously utilitarian and economic approaches to the “problem” of commuting. Understanding their own commutes in these terms—as spaces of creativity, productivity and transformation—the authors trace the development of a performative “counterpractice” for their daily journeys between home and work. Deep mapping—as a form of “theory-informed story-telling”—is employed as a productive strategy to document this reimagination of ostensibly quotidian and functional travel. Importantly, this particular stage of the project is not presented as an end-point. Striving to develop an ongoing creative engagement with landscape, the authors continue this exploratory mobile research by connecting to other commuters’ journeys, and proposing a series of “strategies” for reimagining the daily commute; a list of prompts for future action within the routines and spaces of commuting. A range of alternative approaches to commuting are offered here to anyone who regularly travels to and from work to employ or develop as they wish, extending the mapping process to other routes and contexts.
Performance of Partial and Cavity Type Squealer Tip of a HP Turbine Blade in a Linear Cascade
Directory of Open Access Journals (Sweden)
Levent Kavurmacioglu
2018-01-01
Full Text Available The three-dimensional, highly complex flow structure in the tip gap between the blade tip and casing leads to inefficient turbine performance due to aerothermal loss. The interaction between the leakage vortex and secondary flow structures is the substantial source of that loss. Different types of squealer tip geometries have been tried in the past in order to improve turbine efficiency. The current research deals with the comparison of partial and cavity type squealer tip concepts for higher aerothermal performance. The effects of the squealer tip have been examined comprehensively for an unshrouded HP turbine blade tip geometry in a linear cascade. In the present paper, the flow structure through the tip gap was comprehensively investigated by computational fluid dynamics (CFD) methods. Numerical calculations were obtained by solving the three-dimensional, incompressible, steady, and turbulent form of the Reynolds-averaged Navier-Stokes (RANS) equations using a general purpose, three-dimensional viscous flow solver. The two-equation turbulence model, shear stress transport (SST), has been used. The tip profile belonging to the Pennsylvania State University Axial Flow Turbine Research Facility (AFTRF) was used to create an extruded solid model of the axial turbine blade. For identifying the optimal dimensions of the squealer rim in terms of squealer height and squealer width, our previous studies on the aerothermal investigation of the cavity type squealer tip were utilized. The mesh was generated using effective parametric generation with a multizone structured mesh. Numerical calculations indicate that partial and cavity squealer designs can be effective in reducing the aerodynamic loss and heat transfer to the blade tip. Future efforts will include novel squealer shapes for higher aerothermal performance.
Performance of non-parametric algorithms for spatial mapping of tropical forest structure
Directory of Open Access Journals (Sweden)
Liang Xu
2016-08-01
Full Text Available Abstract Background Mapping tropical forest structure is a critical requirement for accurate estimation of emissions and removals from land use activities. With the availability of a wide range of remote sensing imagery of vegetation characteristics from space, development of finer resolution and more accurate maps has advanced in recent years. However, the mapping accuracy relies heavily on the quality of input layers, the algorithm chosen, and the size and quality of inventory samples for calibration and validation. Results By using airborne lidar data as the “truth” and focusing on the mean canopy height (MCH) as a key structural parameter, we test two commonly used non-parametric techniques, maximum entropy (ME) and random forest (RF), for developing maps over a study site in Central Gabon. The mapping results show that both approaches achieve improved accuracy with more input layers when mapping canopy height at 100-m (1-ha) pixels. The bias-corrected spatial models further improve estimates for small and large trees across the tails of the height distributions, with a trade-off of increasing the overall mean squared error that can be readily compensated by increasing the sample size. Conclusions A significant improvement in tropical forest mapping can be achieved by weighting the number of inventory samples against the choice of image layers and the non-parametric algorithms. Without future satellite observations with better sensitivity to forest biomass, maps based on existing data will remain slightly biased towards the mean of the distribution, underestimating and overestimating the upper and lower tails of the distribution, respectively.
Ryan, Jason C; Banerjee, Ashis Gopal; Cummings, Mary L; Roy, Nicholas
2014-06-01
Planning operations across a number of domains can be considered as resource allocation problems with timing constraints. An unexplored instance of such a problem domain is the aircraft carrier flight deck, where, in current operations, replanning is done without the aid of any computerized decision support. Rather, veteran operators employ a set of experience-based heuristics to quickly generate new operating schedules. These expert user heuristics are neither codified nor evaluated by the United States Navy; they have grown solely from the convergent experiences of supervisory staff. As unmanned aerial vehicles (UAVs) are introduced in the aircraft carrier domain, these heuristics may require alterations due to differing capabilities. The inclusion of UAVs also allows for new opportunities for on-line planning and control, providing an alternative to the current heuristic-based replanning methodology. To investigate these issues formally, we have developed a decision support system for flight deck operations that utilizes a conventional integer linear program-based planning algorithm. In this system, a human operator sets both the goals and constraints for the algorithm, which then returns a proposed schedule for operator approval. As a part of validating this system, the performance of this collaborative human-automation planner was compared with that of the expert user heuristics over a set of test scenarios. The resulting analysis shows that human heuristics often outperform the plans produced by an optimization algorithm, but are also often more conservative.
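The study's core comparison, heuristic schedules against algorithmically optimized ones, can be illustrated on a toy resource-allocation problem: assigning jobs to machines to minimize makespan. This is not the authors' integer linear program (and on a pure makespan objective the exhaustive plan is by construction never worse); it simply shows the kind of gap such an evaluation measures:

```python
import itertools

def makespan(assignment, durations, n_machines=2):
    """Finish time of the busiest machine under a job->machine map."""
    loads = [0.0] * n_machines
    for job, machine in enumerate(assignment):
        loads[machine] += durations[job]
    return max(loads)

def greedy_lpt(durations, n_machines=2):
    """Longest-processing-time rule, standing in for an expert heuristic:
    take jobs longest-first, always onto the least-loaded machine."""
    loads = [0.0] * n_machines
    assignment = [0] * len(durations)
    for j in sorted(range(len(durations)), key=lambda j: -durations[j]):
        m = loads.index(min(loads))
        assignment[j] = m
        loads[m] += durations[j]
    return makespan(assignment, durations, n_machines)

def brute_force(durations, n_machines=2):
    """Exhaustive optimum, playing the role of the exact planner
    (only feasible for tiny instances)."""
    return min(makespan(a, durations, n_machines)
               for a in itertools.product(range(n_machines),
                                          repeat=len(durations)))
```

On the instance [3, 3, 2, 2, 2] the heuristic ends at makespan 7 while the optimum is 6, a concrete example of the plan-quality differences such a study quantifies (the interesting finding above is that, on richer real-world objectives, the human heuristics can come out ahead).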
Du, Junwei; Bai, Xiaowei; Gola, Alberto; Acerbi, Fabio; Ferri, Alessandro; Piemonte, Claudio; Yang, Yongfeng; Cherry, Simon R.
2018-02-01
The goal of this study was to exploit the excellent spatial resolution characteristics of a position-sensitive silicon photomultiplier (SiPM) and develop a high-resolution depth-of-interaction (DOI) encoding positron emission tomography (PET) detector module. The detector consists of a 30 × 30 array of 0.445 × 0.445 × 20 mm³ polished LYSO crystals coupled to two 15.5 × 15.5 mm² linearly-graded SiPM (LG-SiPM) arrays at both ends. The flood histograms show that all the crystals in the LYSO array can be resolved. The energy resolution, the coincidence timing resolution and the DOI resolution were 21.8 ± 5.8%, 1.23 ± 0.10 ns and 3.8 ± 1.2 mm, respectively, at a temperature of −10 °C and a bias voltage of 35.0 V. The performance did not degrade significantly for event rates of up to 130 000 counts s⁻¹. This detector represents an attractive option for small-bore PET scanner designs that simultaneously emphasize high spatial resolution and high detection efficiency, important, for example, in preclinical imaging of the rodent brain with neuroreceptor ligands.
International Nuclear Information System (INIS)
Amelio, Mario; Ferraro, Vittorio; Marinelli, Valerio; Summaria, Antonio
2014-01-01
An evaluation of the performance of an innovative solar system integrated in a combined cycle plant is presented, in which the heat transfer fluid flowing in the linear parabolic collectors is the same oxidant air that is introduced into the combustion chamber of the plant. This peculiarity allows a great simplification of the plant. A 22% saving of fossil fuel results in design conditions, and 15.5% on an annual basis, when the plant works at nominal volumetric flow rate in the daily hours. The net average annual efficiency is 60.9%, against the value of 51.4% for a reference combined cycle plant without solar integration. Moreover, an economic evaluation of the plant is carried out, which shows that the extra cost of the solar part is recovered in about 5 years. - Highlights: • A model to calculate an innovative ISCCS (Integrated Solar Combined Cycle System) solar plant is presented. • The plant uses air as heat transfer fluid as well as oxidant in the combustor. • The plant presents a very high thermodynamic efficiency. • The plant is very simple in comparison with existing ISCCS.
Marques, R.; Amaral, P.; Zêzere, J. L.; Queiroz, G.; Goulart, C.
2009-04-01
Slope instability research and susceptibility mapping is a fundamental component of hazard assessment and is of extreme importance for risk mitigation, land-use management and emergency planning. Landslide susceptibility zonation has been actively pursued during the last two decades and several methodologies are still being improved. Among all the methods presented in the literature, indirect quantitative probabilistic methods have been extensively used. In this work different linear probabilistic methods, both bi-variate and multi-variate (Informative Value, Fuzzy Logic, Weights of Evidence and Logistic Regression), were used to compute the spatial probability of landslide occurrence, using the pixel as the mapping unit. The methods used are based on linear relationships between landslides and 9 conditioning factors (altimetry, slope angle, aspect, curvature, distance to streams, wetness index, contributing area, lithology and land-use). It was assumed that future landslides will be conditioned by the same factors as past landslides in the study area. The work was developed for the Ribeira Quente Valley (S. Miguel Island, Azores), a study area of 9.5 km², mainly composed of volcanic deposits (ash and pumice lapilli) produced by explosive eruptions of Furnas Volcano. These materials, together with the steepness of the slopes (38.9% of the area has slope angles higher than 35°, reaching a maximum of 87.5°), make the area very prone to landslide activity. A total of 1,495 shallow landslides were mapped (at 1:5,000 scale) and included in a GIS database. The total affected area is 401,744 m² (4.5% of the study area). Most slope movements are translational slides, frequently evolving into debris-flows. The landslides are elongated, with maximum length generally equivalent to the slope extent, and their width normally does not exceed 25 m. The failure depth rarely exceeds 1.5 m and the volume is usually smaller than 700 m³. For modelling
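The pixel-based logistic-regression step can be sketched as follows, using synthetic values for two of the nine conditioning factors (slope angle and distance to streams); the generating rule, coefficients, and sample pixels are invented for illustration, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical per-pixel conditioning factors: slope angle (deg) and
# distance to streams (m); a real model would use all nine factors.
n = 400
slope = rng.uniform(0, 60, n)
dist = rng.uniform(0, 300, n)
X = np.column_stack([slope, dist])

# Synthetic landslide inventory: failures more likely on steep slopes
# close to streams (illustrative generating rule, not the paper's data).
logit = 0.15 * slope - 0.02 * dist - 2.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
p_steep = model.predict_proba([[50.0, 20.0]])[0, 1]  # steep pixel near a stream
p_flat = model.predict_proba([[5.0, 250.0]])[0, 1]   # gentle pixel far away
```

Mapped over every pixel, the fitted probabilities give the susceptibility zonation; the bi-variate methods named in the abstract (Informative Value, Weights of Evidence) score each factor class separately instead of fitting one joint model.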
Matthews, Lynn D.; Cappallo, R. J.; Doeleman, S. S.; Fish, V. L.; Lonsdale, C. J.; Oberoi, D.; Wayth, R. B.
2009-05-01
The Square Kilometer Array (SKA) is a proposed next-generation radio telescope that will operate at frequencies of 0.1-30 GHz and be 50-100 times more sensitive than existing radio arrays. Meeting the performance goals of this instrument will require innovative new hardware and software developments, a variety of which are now under consideration. Key to evaluating the performance characteristics of proposed SKA designs and testing the feasibility of new data calibration and processing algorithms is the ability to carry out realistic simulations of radio wavelength arrays under a variety of observing conditions. The MIT Array Performance Simulator (MAPS) (http://www.haystack.mit.edu/ast/arrays/maps/index.html) is an observation simulation package designed to achieve this goal. MAPS accepts an input source list or sky model and generates a model visibility set for a user-defined "virtual observatory", incorporating such factors as array geometry, primary beam shape, field-of-view, and time and frequency resolution. Optionally, effects such as thermal noise, out-of-beam sources, variable station beams, and time/location-dependent ionospheric effects can be included. We will showcase current capabilities of MAPS for SKA applications by presenting results from an analysis of the effects of realistic sky backgrounds on the achievable image fidelity and dynamic range of SKA-like arrays comprising large numbers of small-diameter antennas.
Directory of Open Access Journals (Sweden)
Maurizio Bevilacqua
2014-09-01
Full Text Available Purpose: This work investigates the influence of project managers’ personality on the success of projects in a multinational corporation. The methodology proposed for analyzing the project managers’ personality is based on the Myers-Briggs Type Indicator. Design/methodology/approach: Forty projects concerning new product development (NPD), carried out in 2012 by a multinational corporation, have been analyzed, comparing the profiles of project managers with the results obtained in terms of traditional performance indexes (time delay and over-budget of projects) and performance indexes usually used in the “Lean Production” sector (waste time and type of “wastes”). A detailed analysis of the most important “wastes” during project development is carried out using the Value Stream Mapping (VSM) technique. Findings and Originality/value: Relying on the Myers-Briggs personality instrument, results show that extroverted managers (as opposed to introverted managers) carry out projects with lower delay and lower waste time. Introverted managers often produce “Over-processing” and “Defect” types of waste. Moreover, lower delay and over-budget have been shown by perceiving managers. Research limitations: Regarding the limitations of this work, it is necessary to highlight that we collected data from project managers retrospectively. While we believe that several aspects of our data collection effort helped enhance the accuracy of the results, future research could conduct real-time case study research to get more detailed insights into the proposed relationships and avoid retrospective bias. Moreover, we focused on a single respondent, the project manager. This helped us ensure that their interpretations played an important role in product development, but we could not examine the opinions of team members, which could differ from the project managers’ opinions on some questions. Originality/value: This research provides insight useful
Achieving high signal-to-noise performance for a velocity-map imaging experiment
International Nuclear Information System (INIS)
Roberts, E.H.; Cavanagh, S.J.; Gibson, S.T.; Lewis, B.R.; Dedman, C.J.; Picker, G.J.
2005-01-01
Since the publication of the pioneering paper on velocity-map imaging in 1997 by Eppink and Parker [A.T.J.B. Eppink, D.H. Parker, Rev. Sci. Instrum. 68 (1997) 3477], numerous groups have applied this method in a variety of ways and to various targets. However, despite this interest, little attention has been given to the inherent difficulties and problems associated with the method. In implementing a velocity-map imaging system for photoelectron spectroscopy of the photo-detachment of anion radicals, we have developed a coaxial velocity-map imaging spectrometer. The advantages and disadvantages of such a system are examined, in particular the sources of noise and the methods used to reduce them.
Directory of Open Access Journals (Sweden)
Dániel Balla
2015-01-01
Full Text Available Almost every component of the information society is influenced by elements built on communication technology. Learning also tends to be related to the dynamic usage of computers. Nowadays, a number of applications (online or offline) are available that engage large groups of potential users and simultaneously provide a virtual environment to facilitate learning. This study introduces the self-developed interactive blind-map teaching and examining e-learning system of the University of Debrecen. Results of testing the system with a control group are also presented. Both experimental and control groups of students were required to sit a test of topographic knowledge following a semester of study. The pass mark for the test was 80%. The experimental group used the new digital environment to study, while the control group prepared for their exam using paper maps in the traditional way. The key research questions addressed by the study were to determine whether the exam results obtained by the group using the ‘digital’ method were better than those of the control group; and, if there were a difference between the exam performances of the two groups, whether this was statistically significant and, therefore, likely to occur in other similar scenarios.
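The study's significance question — whether the 'digital' group's exam results differ from the control's beyond chance — is typically answered with a two-sample test. A sketch with hypothetical scores (the abstract reports no raw data):

```python
from statistics import mean
from scipy.stats import ttest_ind

# Hypothetical exam scores (percent) for the two groups; the study's
# actual measurements are not given in the abstract.
digital = [88, 92, 85, 90, 95, 87, 91, 93, 89, 94]
paper = [78, 82, 75, 80, 85, 77, 81, 72, 79, 84]

# Independent two-sample t-test of the group means.
t_stat, p_value = ttest_ind(digital, paper)
significant = p_value < 0.05  # conventional threshold
```

A significant result on such a test is what would justify the study's claim that the difference is "likely to occur in other similar scenarios"; with small groups or non-normal scores a Mann-Whitney U test would be the usual alternative.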
Use of geological mapping tools to improve the hydraulic performance of SuDS
DEFF Research Database (Denmark)
Bockhorn, Britta; Klint, Knud Erik; Jensen, Marina Bergen
2015-01-01
measuring the shallow subsurface resistivity with a geoelectrical multi-electrode system. To confirm the resistivity data we conducted a spear auger mapping. The exposed sediments ranged from clay tills over sandy clay tills to sandy tills and correspond well to the geoelectrical data. To verify the value...
Jiao, Jiao; Li, Yi; Yao, Lei; Chen, Yajun; Guo, Yueping; Wong, Stephen H S; Ng, Frency S F; Hu, Junyan
2017-10-01
To investigate clothing-induced differences in human thermal response and running performance, eight male athletes participated in a repeated-measure study by wearing three sets of clothing (CloA, CloB, and CloC). CloA and CloB were body-mapping-designed with 11% and 7% increased capacity of heat dissipation, respectively, compared to CloC, the commonly used running clothing. The experiments were conducted using steady-state running followed by an all-out performance run in a controlled hot environment. Participants' thermal responses such as core temperature (Tc), mean skin temperature (T̄sk), heat storage (S), and the performance running time were measured. CloA resulted in shorter performance time than CloC (323.1 ± 10.4 s vs. 353.6 ± 13.2 s, p = 0.01), and induced the lowest T̄sk, smallest ΔTc, and smallest S in the resting and running phases. This study indicated that clothing made with different heat dissipation capacities affects athlete thermal responses and running performance in a hot environment. Practitioner Summary: A protocol that simulated the real situation in running competitions was used to investigate the effects of body-mapping-designed clothing on athletes' thermal responses and running performance. The findings confirmed the effects of optimised clothing with body-mapping design and advanced fabrics, and ensured the practical advantage of the developed clothing on exercise performance.
International Nuclear Information System (INIS)
Wang Wei; Li Na; Ren Yuzhou; Li Hao; Zheng Lifen; Li Jin; Jiang Junjie; Chen Xiaoping; Wang Kai; Xia Chunping
2013-01-01
The effects of linear doping profile near the source and drain contacts on the switching and high-frequency characteristics for conventional single-material-gate CNTFET (C-CNTFET) and hetero-material-gate CNTFET (HMG-CNTFET) have been theoretically investigated by using a quantum kinetic model. This model is based on two-dimensional non-equilibrium Green's functions (NEGF) solved self-consistently with Poisson's equations. The simulation results show that at a CNT channel length of 20 nm with chirality (7, 0), the intrinsic cutoff frequency of C-CNTFETs reaches up to a few THz. In addition, a comparison study has been performed between C- and HMG-CNTFETs. For the C-CNTFET, results reveal that a longer linear doping length can improve the cutoff frequency and switching speed. However, it has the reverse effect on on/off current ratios. To improve the on/off current ratio performance of CNTFETs and overcome short-channel effects (SCEs) in high-performance device applications, a novel CNTFET structure with a combination of an HMG and linear doping profile has been proposed. It is demonstrated that the HMG structure design with an optimized linear doping length has improved high-frequency and switching performances as compared to C-CNTFETs. The simulation study may be useful for understanding and optimizing high-performance CNTFETs and assessing the reliability of CNTFETs for prospective applications. (semiconductor devices)
Kim, Hana; Youk, Ji Hyun; Gweon, Hye Mi; Kim, Jeong-Ah; Son, Eun Ju
2013-08-01
To compare the diagnostic performance of qualitative shear-wave elastography (SWE) according to three different color map opacities for breast masses, 101 patients aged 21-77 years with 113 breast masses underwent B-mode US and SWE under three different color map opacities (50%, 19% and 100%) before biopsy or surgery. The following SWE features were reviewed: visual pattern classification (patterns 1-4), color homogeneity (Ehomo) and the six-point color score of maximum elasticity (Ecol). Combined with B-mode US and SWE, the likelihood of malignancy (LOM) was also scored. The area under the curve (AUC) was obtained by ROC curve analysis to assess the diagnostic performance under each color opacity. Visual color pattern, Ehomo, Ecol and LOM scoring were significantly different between benign and malignant lesions under all color opacities (P < 0.05) for breast lesions under all color opacities. The difference in color map opacity did not significantly influence the diagnostic performance of SWE. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
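The AUC-from-ROC comparison described above can be reproduced on toy data with scikit-learn; the labels and six-point color scores below are hypothetical, not the study's measurements.

```python
from sklearn.metrics import roc_auc_score

# Toy reader data: 0 = benign, 1 = malignant; scores stand in for the
# six-point color score (Ecol) assigned under two different map opacities.
labels = [0, 0, 0, 0, 1, 1, 1, 1]
ecol_opacity_a = [1, 2, 2, 3, 4, 5, 5, 6]   # hypothetical readings, opacity A
ecol_opacity_b = [1, 3, 2, 4, 3, 5, 4, 6]   # hypothetical readings, opacity B

# AUC under each opacity; in the study these would be compared for a
# statistically significant difference.
auc_a = roc_auc_score(labels, ecol_opacity_a)
auc_b = roc_auc_score(labels, ecol_opacity_b)
```

Here every malignant score exceeds every benign score under opacity A, so its AUC is 1.0, while the overlapping scores under opacity B give a lower AUC; the study's conclusion is that such opacity-driven differences were not significant.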
Pandey, Gaurav; Goel, Aditya
2017-12-01
In this paper, orthogonal frequency division multiplexing (OFDM)-passive optical network (PON) downstream transmission is demonstrated over different lengths of fiber at the remote node (RN) for different m-QAM (quadrature amplitude modulation)-mapped OFDM signals (m = 4, 16, 32 and 64) transmitted from the central office (CO) at different data rates (10, 20, 30 and 40 Gbps), using coherent detection at the user end or optical network unit (ONU). Investigation is performed with different numbers of subcarriers (32, 64, 128, 512 and 1,024), back-to-back optical signal-to-noise ratio (OSNR), along with transmitted and received constellation diagrams for m-QAM-mapped coherent OFDM downstream transmission at different speeds over different transmission distances. Received optical power is calculated for different bit error rates (BERs) at different speeds using m-QAM-mapped coherent detection OFDM downstream transmission. No dispersion compensation is utilized in between the fiber span. Simulation results suggest the different lengths and data rates that can be used for different m-QAM-mapped coherent detection OFDM downstream transmission, and the proposed system may be implemented in next-generation high-speed PONs (NG-PONs).
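The m-QAM mapping plus OFDM modulation/demodulation chain can be sketched in a few lines of NumPy — here a noiseless 4-QAM (QPSK) round trip over 64 subcarriers, purely illustrative (no fiber, coherent front-end, or OSNR modeling):

```python
import numpy as np

rng = np.random.default_rng(7)
n_sub = 64  # number of OFDM subcarriers

# 4-QAM (QPSK) mapping: 2 bits -> one unit-power complex symbol.
bits = rng.integers(0, 2, size=2 * n_sub)
symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

# OFDM modulation: one IFFT across the subcarriers; demodulation: FFT.
tx = np.fft.ifft(symbols)
rx = np.fft.fft(tx)          # ideal, noiseless channel for illustration

# Hard-decision demapping back to bits.
out = np.empty_like(bits)
out[0::2] = (rx.real > 0).astype(int)
out[1::2] = (rx.imag > 0).astype(int)
```

Higher-order constellations (16-, 32-, 64-QAM) pack more bits per symbol but shrink the decision distances, which is why the paper trades data rate against reach and required OSNR.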
Blyth, T S
2002-01-01
Basic Linear Algebra is a text for first year students leading from concrete examples to abstract theorems, via tutorial-type exercises. More exercises (of the kind a student may expect in examination papers) are grouped at the end of each section. The book covers the most important basics of any first course on linear algebra, explaining the algebra of matrices with applications to analytic geometry, systems of linear equations, difference equations and complex numbers. Linear equations are treated via Hermite normal forms which provides a successful and concrete explanation of the notion of linear independence. Another important highlight is the connection between linear mappings and matrices leading to the change of basis theorem which opens the door to the notion of similarity. This new and revised edition features additional exercises and coverage of Cramer's rule (omitted from the first edition). However, it is the new, extra chapter on computer assistance that will be of particular interest to readers:...
Velazquez-Marti, B.; Annevelink, E.
2008-01-01
Many linear programming models have been developed to model the logistics of bio-energy chains. These models help to determine the best set-up of bio-energy chains. Most of them use network structures built up from nodes with one or more depots, and arcs connecting these depots. Each depot is source
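A network-flow LP of the kind these bio-energy chain models use can be sketched with SciPy's `linprog`; the two-source, one-depot instance and all supplies and costs below are invented for illustration.

```python
from scipy.optimize import linprog

# Toy bio-energy chain: two biomass sources ship through one depot to a
# plant demanding 15 units; per-unit costs include the depot-to-plant leg.
# x = [flow from source 1, flow from source 2]
c = [3.0, 4.0]               # total cost per unit along each route
A_eq = [[1.0, 1.0]]          # flow arriving at the depot meets plant demand
b_eq = [15.0]
bounds = [(0, 10), (0, 10)]  # each source can supply at most 10 units

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
# Optimal plan: exhaust the cheaper source (10 units), top up with 5 units
# from the dearer one, for a total cost of 50.
```

Real chain models add one flow variable per arc of the node/depot network plus conservation constraints at every depot, but the structure is the same.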
Failure detection in high-performance clusters and computers using chaotic map computations
Rao, Nageswara S.
2015-09-01
A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10¹⁸ floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
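The failure-detection idea — identical chaotic-map threads whose trajectories diverge when a component misbehaves — can be sketched with the logistic map. The fault model here (a tiny additive error per iteration) is an assumption for illustration, not the patent's mechanism.

```python
# Every node iterates the same chaotic map from the same seed; because the
# map amplifies any perturbation exponentially, a single faulty arithmetic
# unit quickly produces a trajectory that diverges from the healthy consensus.

def logistic_trajectory(x0, steps, fault=0.0):
    """Iterate x -> 4x(1-x); `fault` models a tiny per-step hardware error."""
    xs = []
    x = x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x) + fault
        xs.append(x)
    return xs

healthy_a = logistic_trajectory(0.3, 60)
healthy_b = logistic_trajectory(0.3, 60)
faulty = logistic_trajectory(0.3, 60, fault=1e-12)

# Healthy nodes agree bit-for-bit; the faulty node drifts far apart.
max_dev = max(abs(a - f) for a, f in zip(healthy_a, faulty))
```

The appeal for exascale health monitoring is that an error far below any fixed tolerance (here 1e-12) is stretched to macroscopic size by the map's positive Lyapunov exponent, so a simple trajectory comparison exposes it.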
A Three-Dimensional Foil Bearing Performance Map Applied to Oil-Free Turbomachinery
2009-04-01
stress brought on by excessive viscous power loss; therefore a map that graphically relates component and system-level parameters (bearing size, applied... Introduction: Foil bearings are self-acting, hydrodynamic gas bearings that use air as their working fluid. Their use in rotating systems eliminates the... weight, maintenance requirements, speed, and temperature limitations associated with conventional oil-lubricated rotor supports (i.e., bearings, dampers).
International Nuclear Information System (INIS)
Liaparinos, P.; Kalyvas, N.; Kandarakis, I.; Cavouras, D.
2013-01-01
Purpose: The purpose of this study was to provide an analysis of imaging performance in digital mammography, using indirect detector instrumentation, by combining the Linear Cascaded Systems (LCS) theory and the Signal Detection Theory (SDT). Observer performance was assessed by examining frequently employed detectors, consisting of phosphor-based X-ray converters (granular Gd₂O₂S:Tb and structural CsI:Tl), coupled with the recently introduced complementary metal-oxide-semiconductor (CMOS) sensor. By applying combinations of various irradiation conditions (filter-target and exposure levels at 28 kV) to imaging detectors, our study aimed to find the optimum system set-up for digital mammography. For this purpose, the signal to noise transfer properties of the medical imaging detectors were examined for breast carcinoma detectability. Methods: An analytical model was applied to calculate X-ray interactions within software breast phantoms and detective media. Modeling involved: (a) three X-ray spectra used in digital mammography: 28 kV Mo/Mo (Mo: 0.030 mm), 28 kV Rh/Rh (Rh: 0.025 mm) and 28 kV W/Rh (Rh: 0.060 mm) at different entrance surface air kerma (ESAK) of 3 mGy and 5 mGy, (b) a 5 cm thick Perspex software phantom incorporating a small Ca lesion of varying size (0.1–1 cm), and (c) two 200 μm thick phosphor-based X-ray converters (Gd₂O₂S:Tb, CsI:Tl), coupled to a CMOS based detector of 22.5 μm pixel size. Results: Best (lowest) contrast threshold (CT) values were obtained with the combination: (i) W/Rh target-filter, (ii) 5 mGy (ESAK), and (iii) CsI:Tl-CMOS detector. For lesion diameter 0.5 cm, the CT was improved, in comparison to the other anode/filter combinations, by approximately 42% relative to Rh/Rh and 55% relative to Mo/Mo for small carcinomas (0.1 cm), and by approximately 50% relative to Rh/Rh and 125% relative to Mo/Mo for large carcinomas (1 cm), considering the 5 mGy X-ray beam. By decreasing lesion diameter and thickness, a limiting CT (100%) occurred for size
DEFF Research Database (Denmark)
Gersborg-Hansen, Allan; Dammann, Bernd; Aage, Niels
concerned with developing a proper (COMSOL) model rather than developing efficient linear algebra solvers which motivates this investigation of the efficiency of the coupling COMSOL + SPL. The technicalities of making such a coupling is described in detail along with a measure of the speedup...
Kissi, Philip Siaw; Opoku, Gyabaah; Boateng, Sampson Kwadwo
2016-01-01
The aim of the study was to investigate the effect of Microsoft Math Tool (graphical calculator) on students' achievement in the linear function. The study employed Quasi-experimental research design (Pre-test Post-test two group designs). A total of ninety-eight (98) students were selected for the study from two different Senior High Schools…
Shilov, Georgi E
1977-01-01
Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.
An evaluation of the performance of tag SNPs derived from HapMap in a Caucasian population.
Directory of Open Access Journals (Sweden)
Alexandre Montpetit
2006-03-01
Full Text Available The Haplotype Map (HapMap) project recently generated genotype data for more than 1 million single-nucleotide polymorphisms (SNPs) in four population samples. The main application of the data is in the selection of tag single-nucleotide polymorphisms (tSNPs) to use in association studies. The usefulness of this selection process needs to be verified in populations outside those used for the HapMap project. In addition, it is not known how well the data represent the general population, as only 90-120 chromosomes were used for each population and since the genotyped SNPs were selected so as to have high frequencies. In this study, we analyzed more than 1,000 individuals from Estonia. The population of this northern European country has been influenced by many different waves of migrations from Europe and Russia. We genotyped 1,536 randomly selected SNPs from two 500-kbp ENCODE regions on Chromosome 2. We observed that the tSNPs selected from the CEPH (Centre d'Etude du Polymorphisme Humain) from Utah (CEU) HapMap samples (derived from US residents with northern and western European ancestry) captured most of the variation in the Estonia sample. (Between 90% and 95% of the SNPs with a minor allele frequency of more than 5% have an r² of at least 0.8 with one of the CEU tSNPs.) Using the reverse approach, tags selected from the Estonia sample could almost equally well describe the CEU sample. Finally, we observed that the sample size, the allelic frequency, and the SNP density in the dataset used to select the tags each have important effects on the tagging performance. Overall, our study supports the use of HapMap data in other Caucasian populations, but the SNP density and the bias towards high-frequency SNPs have to be taken into account when designing association studies.
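The r² tagging criterion used above can be computed directly from genotype vectors; the SNP vectors below are hypothetical toy data for a handful of individuals.

```python
import numpy as np

def r_squared(g1, g2):
    """Pairwise LD between two SNPs coded as minor-allele counts (0/1/2)."""
    r = np.corrcoef(g1, g2)[0, 1]
    return r * r

# Hypothetical genotype vectors (one entry per individual).
snp_a = np.array([0, 0, 1, 1, 2, 2])
snp_b = np.array([0, 0, 1, 1, 2, 2])   # perfect LD with snp_a
snp_c = np.array([0, 1, 0, 1, 0, 1])   # unrelated pattern

# HapMap-style tagging rule: snp_a tags another SNP if r^2 >= 0.8.
tags_b = r_squared(snp_a, snp_b) >= 0.8
tags_c = r_squared(snp_a, snp_c) >= 0.8
```

The study's "90-95% coverage" figure is the fraction of common SNPs for which at least one CEU-derived tag clears this 0.8 threshold in the Estonian sample.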
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
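Two of the classic BLAS operations can be exercised from Python through SciPy's low-level wrappers — a sketch using `ddot` (level-1 dot product) and `dgemm` (level-3 matrix multiply), the modern descendants of the FORTRAN routines described above:

```python
import numpy as np
from scipy.linalg import blas

# Level-1 BLAS: double-precision dot product.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
dot = blas.ddot(x, y)                 # 1*4 + 2*5 + 3*6 = 32

# Level-3 BLAS: general matrix-matrix multiply, C = alpha * A @ B.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])
c = blas.dgemm(alpha=1.0, a=a, b=b)   # same result as a @ b
```

NumPy's `@` operator dispatches to the same BLAS routines under the hood; calling the wrappers directly is mainly useful when you need the extra parameters (alpha, beta, transposes, in-place output) the FORTRAN interface exposes.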
Superconducting linear accelerator cryostat
International Nuclear Information System (INIS)
Ben-Zvi, I.; Elkonin, B.V.; Sokolowski, J.S.
1984-01-01
A large vertical cryostat for a superconducting linear accelerator using quarter wave resonators has been developed. The essential technical details, operational experience and performance are described. (author)
The MAPS-based vertex detector for the STAR experiment: Lessons learned and performance
Energy Technology Data Exchange (ETDEWEB)
Contin, Giacomo, E-mail: gcontin@lbl.gov
2016-09-21
The PiXeL detector (PXL) of the STAR experiment at RHIC is the first application of the state-of-the-art thin Monolithic Active Pixel Sensors (MAPS) technology in a collider environment. The PXL, together with the Intermediate Silicon Tracker (IST) and the Silicon Strip Detector (SSD), forms the Heavy Flavor Tracker (HFT), which has been designed to improve the vertex resolution and extend the STAR measurement capabilities in the heavy flavor domain, providing a clean probe for studying the Quark–Gluon Plasma. The two PXL layers are placed at radii of 2.8 and 8 cm from the beam line, respectively, and are based on ultra-thin high-resolution MAPS sensors. The sensor features 20.7 μm pixel pitch, 185.6 μs readout time and 170 mW/cm² power dissipation. The detector is air-cooled, allowing a global material budget of 0.4% radiation length on the innermost layer. A novel mechanical approach to detector insertion allows for fast installation and integration of the pixel sub-detector. The HFT took data in Au+Au collisions at 200 GeV during the 2014 RHIC run. Modified during the RHIC shutdown to improve its reliability, material budget, and tracking capabilities, the HFT took data in p+p and p+Au collisions at √s_NN = 200 GeV in the 2015 RHIC run. In this paper we present detector specifications, experience from the construction and operations, and lessons learned. We also show preliminary results from 2014 Au+Au data analyses, demonstrating the capabilities of charm reconstruction with the HFT. - Highlights: • First MAPS-based vertex detector in a collider experiment. • Achieved low material budget of 0.39% of radiation length per detector layer. • Track pointing resolution to the primary vertex better than (10 ⊕ 24 GeV/(p×c)) μm. • Gain in significance for the topological reconstruction of the D⁰ → Kπ decay in STAR. • Observed latch-up induced damage of MAPS sensors.
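Reading the quoted pointing resolution as a quadrature sum σ(p) = √(10² + (24/p)²) μm with p in GeV/c — an interpretation of the "10 ⊕ 24" notation, not spelled out in the abstract — the momentum dependence can be evaluated directly:

```python
import math

def pointing_resolution_um(p_gev_c, a=10.0, b=24.0):
    """sigma(p) = a (+) b/p in quadrature, in micrometres.

    Assumed reading of the quoted '10 (+) 24 GeV/(p*c) um' figure:
    a constant term plus a multiple-scattering term falling as 1/p.
    """
    return math.sqrt(a ** 2 + (b / p_gev_c) ** 2)

low_p = pointing_resolution_um(1.0)     # sqrt(10^2 + 24^2) = 26 um at 1 GeV/c
high_p = pointing_resolution_um(100.0)  # approaches the 10 um constant floor
```

The 1/p term is dominated by multiple scattering in the detector material, which is why the highlights emphasize the 0.39% radiation-length budget per layer.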
A Manual for the Performance of Protective Equipment Fit-Mapping
2009-10-01
Fit-map-derived accommodation envelopes (stature vs. chest circumference, sizes 1-5): A: 760-900, 1500-1590, 1520-1640, 1590-1710; B: 820-960, 1500-1620, 1520-1640... Right and Left ISO Definition No.: N/A. CAESAR name: THELION/BUSTPOINT, RIGHT AND LEFT. Description: most anterior protrusion of the bra cup on women... Development, pp. 19-59. 17. Robinette, K.M. (1996). Flight suit sizes for women, Armstrong Laboratory, Brooke AFB, MIPR Number 96MM6646. 18
Energy Technology Data Exchange (ETDEWEB)
Miller, Naomi J.; Perrin, Tess E.; Royer, Michael P.; Wilkerson, Andrea M.; Beeson, Tracy A.
2014-05-20
Although lensed troffers are numerous, there are many other types of optical systems as well. This report looked at the performance of three linear (T8) LED lamps chosen primarily based on their luminous intensity distributions (narrow, medium, and wide beam angles) as well as a benchmark fluorescent lamp in five different troffer types. Also included are the results of a subjective evaluation. Results show that linear (T8) LED lamps can improve luminaire efficiency in K12-lensed and parabolic-louvered troffers, effect little change in volumetric and high-performance diffuse-lensed type luminaires, but reduce efficiency in recessed indirect troffers. These changes can be accompanied by visual appearance and visual comfort consequences, especially when LED lamps with clear lenses and narrow distributions are installed. Linear (T8) LED lamps with diffuse apertures exhibited wider beam angles, performed more similarly to fluorescent lamps, and received better ratings from observers. Guidance is provided on which luminaires are the best candidates for retrofitting with linear (T8) LED lamps.
International Nuclear Information System (INIS)
Cordes, Gail Adele; Van Ausdeln, Leo Anthony; Velasquez, Maria Elena
2002-01-01
The report discusses preliminary proof-of-concept research for using the Advanced Data Validation and Verification System (ADVVS), a new INEEL software package, to add validation and verification and multivariate feedback control to the operation of non-destructive analysis (NDA) equipment. The software is based on human cognition, the recognition of patterns and changes in patterns in time-related data. The first project applied ADVVS to monitor operations of a selectable energy linear electron accelerator, and showed how the software recognizes in real time any deviations from the optimal tune of the machine. The second project extended the software method to provide model-based multivariate feedback control for the same linear electron accelerator. The projects successfully demonstrated proof-of-concept for the applications and focused attention on the common application of intelligent information processing techniques
Directory of Open Access Journals (Sweden)
Paulius Palevicius
2014-01-01
Full Text Available The optical investigation of movable microsystem components using time-averaged holography is presented in this paper. It is shown that even harmonic excitation of a non-linear microsystem may result in unpredictable chaotic motion. Analytical relationships between the parameters of the chaotic oscillations and the formation of time-averaged fringes provide a deeper insight into the computational and experimental interpretation of time-averaged MEMS holograms.
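The link between vibration amplitude and time-averaged fringes follows the standard result that, for harmonic motion, fringe intensity varies as J₀²(k·a), so dark fringes sit at the zeros of the Bessel function J₀. A sketch with SciPy, where the wavelength and geometry factor are illustrative assumptions:

```python
import numpy as np
from scipy.special import j0

# For harmonic vibration of amplitude a, the time-averaged hologram
# intensity follows I(a) ~ J0(k*a)^2 (standard time-averaged holography
# result); k is set by wavelength and geometry -- values here are assumed.
k = 2 * np.pi / 0.633                  # e.g. HeNe wavelength in micrometres
amplitudes = np.linspace(0.0, 0.5, 6)  # trial vibration amplitudes, um
intensity = j0(k * amplitudes) ** 2

first_j0_zero = 2.404825557695773      # first root of J0
dark_amplitude = first_j0_zero / k     # amplitude at the first dark fringe
```

The paper's point is that under chaotic (rather than harmonic) motion the fringe pattern no longer follows this simple J₀ envelope, which is what makes the analytical relationships between chaos parameters and fringe formation useful.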
Cutler, Michael J; Johnson, Jeremy; Abozguia, Khalid; Rowan, Shane; Lewis, William; Costantini, Otto; Natale, Andrea; Ziv, Ohad
2016-01-01
Fibrosis as a substrate for atrial fibrillation (AF) has been shown in numerous preclinical models. Voltage mapping enables in vivo assessment of scar in the left atrium (LA), which can be targeted with catheter ablation. We hypothesized that using the presence or absence of low voltage to guide ablation beyond pulmonary vein antral isolation (PVAI) will improve atrial arrhythmia (AF/AT)-free survival in persistent AF. Single-center retrospective analysis of 2 AF ablation strategies: (1) standard ablation (SA) versus (2) voltage-guided ablation (VGA). PVAI was performed in both groups. With SA, additional lesions beyond PVAI were performed at the discretion of the operator. With VGA, additional lesions to isolate the LA posterior wall were performed if voltage mapping of this region in sinus rhythm showed scar (LA voltage atrial size. Posterior wall ablation was performed in 57% of patients with SA compared to 42% with VGA. VGA increased 1-year AF-/AT-free survival when compared to SA (80% vs. 57%; P = 0.005). In a multivariate analysis, VGA was the only independent predictor of AF-/AT-free survival (hazard ratio of 0.30; P = 0.002). The presence of LA posterior wall scar may be an important ablation target in persistent AF. A prospective randomized trial is needed to confirm these data. © 2015 Wiley Periodicals, Inc.
Linearity in Process Languages
DEFF Research Database (Denmark)
Nygaard, Mikkel; Winskel, Glynn
2002-01-01
The meaning and mathematical consequences of linearity (managing without a presumed ability to copy) are studied for a path-based model of processes which is also a model of affine-linear logic. This connection yields an affine-linear language for processes, automatically respecting open-map bisimulation, in which a range of process operations can be expressed. An operational semantics is provided for the tensor fragment of the language. Different ways to make assemblies of processes lead to different choices of exponential, some of which respect bisimulation.
Directory of Open Access Journals (Sweden)
Luis Payá
2014-02-01
Full Text Available Map building and localization are two crucial abilities that autonomous robots must develop. Vision sensors have become a widespread option to solve these problems. When using this kind of sensor, the robot must extract the necessary information from the scenes to build a representation of the environment where it has to move and to estimate its position and orientation with robustness. The techniques based on the global appearance of the scenes constitute one of the possible approaches to extract this information. They consist of representing each scene using a single descriptor which gathers global information from the scene. These techniques present some advantages compared to other classical descriptors based on the extraction of local features. However, a good configuration of the parameters is important to reach a compromise between computational cost and accuracy. In this paper we make an exhaustive comparison among several global appearance descriptors to solve the mapping and localization problem. With this aim, we make use of several image sets captured in indoor environments under realistic working conditions. The datasets have been collected using an omnidirectional vision sensor mounted on the robot.
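A minimal sketch of the global-appearance idea described in this record (the descriptor choice and grid size here are illustrative assumptions, not the descriptors compared in the paper): each scene is compressed to a single low-dimensional vector, and localization reduces to a nearest-neighbor search over the stored map:

```python
import numpy as np

def global_descriptor(image, size=(8, 8)):
    """Reduce a grayscale image to a small block-averaged grid and
    flatten it, so the whole scene is summarized by one vector."""
    h, w = image.shape
    gh, gw = size
    d = image[:h - h % gh, :w - w % gw].reshape(
        gh, h // gh, gw, w // gw).mean(axis=(1, 3)).ravel()
    # Zero-mean/unit-variance normalization as a crude illumination guard.
    return (d - d.mean()) / (d.std() + 1e-9)

def localize(query, map_descriptors):
    """Return the index of the stored map pose whose descriptor is
    closest (Euclidean) to the query descriptor."""
    dists = [np.linalg.norm(query - m) for m in map_descriptors]
    return int(np.argmin(dists))
```

The trade-off the record mentions shows up directly here: a larger grid raises accuracy but also the cost of every distance computation.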
International Nuclear Information System (INIS)
Vlahostergios, Z.; Sideridis, A.; Yakinthos, K.; Goulas, A.
2012-01-01
Highlights: ► We model the wake flow produced by a LPT blade using a non-linear turbulence model. ► We use two interpolation schemes for the convection terms with different accuracy. ► We investigate the effect of each term of the non-linear constitutive expression. ► The results are compared with available experimental measurements. ► The model predicts the velocity and stress distributions with good accuracy. - Abstract: The wake flow produced by a low-pressure turbine blade is modeled using a non-linear eddy-viscosity turbulence model. The theoretical benefit of using a non-linear eddy-viscosity model is strongly related to its capability of resolving highly anisotropic flows, in contrast to linear turbulence models, which are unable to correctly predict anisotropy. The main aim of the present work is to practically assess the performance of the model by examining its ability to capture the anisotropic behavior of the wake flow, mainly focusing on the measured velocity and Reynolds-stress distributions, and to provide accurate results for the turbulent kinetic energy balance terms. Additionally, the contribution of each term of its non-linear constitutive expression for the Reynolds stresses is investigated, in order to examine their direct effect on the modeling of the wake flow. The assessment is based on the experimental measurements that have been carried out by the same group in Thessaloniki, Sideridis et al. (2011). The computational results show that the non-linear eddy-viscosity model is capable of predicting, with good accuracy, all the flow and turbulence parameters, while being easy to program in a computer code, thus meeting the expectations of its originators.
Mapping the performance of wood-burning stoves by installations worldwide
DEFF Research Database (Denmark)
Luis Teles de Carvalho, Ricardo; Jensen, Ole Michael; Tarelho, Luis A. C.
2016-01-01
environmental health risk. Research stressed the need to increase the performance of conventional interplays between users, stoves and buildings. This scientific review aims to characterize the performance and environmental effects of 9 wood-burning stove categories by installations worldwide...
What Happens Inside a Fuel Cell? Developing an Experimental Functional Map of Fuel Cell Performance
Brett, Daniel J. L.; Kucernak, Anthony R.; Aguiar, Patricia; Atkins, Stephen C.; Brandon, Nigel P.; Clague, Ralph; Cohen, Lesley F.; Hinds, Gareth; Kalyvas, Christos; Offer, Gregory J.; Ladewig, Bradley; Maher, Robert; Marquis, Andrew; Shearing, Paul; Vasileiadis, Nikos; Vesovic, Velisa
2010-01-01
Fuel cell performance is determined by the complex interplay of mass transport, energy transfer and electrochemical processes. The convolution of these processes leads to spatial heterogeneity in the way that fuel cells perform, particularly due
Wells, R. G.; Gifford, H. C.; Pretorius, P. H.; Farncombe, T. H.; Narayanan, M. V.; King, M. A.
2002-06-01
We have demonstrated an improvement due to attenuation correction (AC) at the task of lesion detection in thoracic SPECT images. However, increased noise in the transmission data due to aging sources or very large patients, and misregistration of the emission and transmission maps, can reduce the accuracy of the AC and may result in a loss of lesion detectability. We investigated the impact of noise in, and misregistration of, transmission data on the detection of simulated Ga-67 thoracic lesions. Human-observer localization-receiver-operating-characteristic (LROC) methodology was used to assess performance. Both emission and transmission data were simulated using the MCAT computer phantom. Emission data were reconstructed using OSEM incorporating AC and detector resolution compensation. Clinical noise levels were used in the emission data. The transmission-data noise levels ranged from zero (noise-free) to 32 times the measured clinical levels. Transaxial misregistrations of 0.32, 0.63, and 1.27 cm between emission and transmission data were also examined. Three different algorithms were considered for creating the attenuation maps: filtered backprojection (FBP), unbounded maximum-likelihood (ML), and block-iterative transmission AB (BITAB). Results indicate that a 16-fold increase in the noise was required to eliminate the benefit afforded by AC when using FBP or ML to reconstruct the attenuation maps. When using BITAB, no significant loss in performance was observed for a 32-fold increase in noise. Misregistration errors are also a concern, as even small errors here reduce the performance gains of AC.
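The attenuation-correction mechanism this record studies can be sketched in one dimension (the coefficients, step size and noise level below are illustrative, not the MCAT simulation): a noise-free transmission map yields a correction factor that exactly undoes Beer-Lambert attenuation, while a noisy map biases the corrected counts:

```python
import math
import random

def attenuated_counts(true_counts, mu, dx):
    """Beer-Lambert attenuation of emission counts along one ray through
    attenuation coefficients mu (per cm) with step dx (cm)."""
    return true_counts * math.exp(-sum(m * dx for m in mu))

def attenuation_corrected(measured, mu_map, dx):
    """Multiply by the attenuation correction factor (ACF) derived from
    the (possibly noisy or misregistered) transmission mu-map."""
    acf = math.exp(sum(m * dx for m in mu_map))
    return measured * acf

# A perfect mu-map recovers the true activity exactly...
mu_true = [0.15] * 20            # roughly soft tissue at 140 keV, assumed
measured = attenuated_counts(1000.0, mu_true, dx=1.0)
exact = attenuation_corrected(measured, mu_true, dx=1.0)

# ...while noise in the transmission map biases the correction,
# mimicking the aging-source / large-patient scenario of the record.
random.seed(1)
mu_noisy = [m + random.gauss(0.0, 0.05) for m in mu_true]
noisy = attenuation_corrected(measured, mu_noisy, dx=1.0)
```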
Energy Technology Data Exchange (ETDEWEB)
Kim, Hana; Youk, Ji Hyun, E-mail: jhyouk@yuhs.ac; Gweon, Hye Mi; Kim, Jeong-Ah; Son, Eun Ju
2013-08-15
Purpose: To compare the diagnostic performance of qualitative shear-wave elastography (SWE) according to three different color map opacities for breast masses. Materials and methods: 101 patients aged 21–77 years with 113 breast masses underwent B-mode US and SWE under three different color map opacities (50%, 19% and 100%) before biopsy or surgery. The following SWE features were reviewed: visual pattern classification (pattern 1–4), color homogeneity (E_homo) and six-point color score of maximum elasticity (E_col). Combining B-mode US and SWE, the likelihood of malignancy (LOM) was also scored. The area under the curve (AUC) was obtained by ROC curve analysis to assess the diagnostic performance under each color opacity. Results: Visual color pattern, E_homo, E_col and LOM scoring were significantly different between benign and malignant lesions under all color opacities (P < 0.001). For 50% opacity, the AUCs of visual color pattern, E_col, E_homo and LOM scoring were 0.902, 0.951, 0.835 and 0.975. However, for each SWE feature, there was no significant difference in the AUC among the three color opacities. For all color opacities, visual color pattern and E_col showed significantly higher AUC than E_homo. In addition, a combined set of B-mode US and SWE showed significantly higher AUC than SWE alone for color pattern and E_homo, but no significant difference was found for E_col. Conclusion: Qualitative SWE was useful to differentiate benign from malignant breast lesions under all color opacities. The difference in color map opacity did not significantly influence the diagnostic performance of SWE.
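AUC figures like those quoted in this record can in principle be reproduced from reader scores with the rank-based (Mann-Whitney) estimator; the scores below are invented toy values, not the study's data:

```python
def auc_from_scores(benign, malignant):
    """Empirical area under the ROC curve: the probability that a
    randomly chosen malignant case scores higher than a randomly chosen
    benign one (ties count one half) -- the Mann-Whitney U statistic
    normalized by the number of case pairs."""
    wins = 0.0
    for m in malignant:
        for b in benign:
            if m > b:
                wins += 1.0
            elif m == b:
                wins += 0.5
    return wins / (len(malignant) * len(benign))
```

For example, six-point color scores of [1, 2, 2, 3] for benign and [2, 4, 5, 5] for malignant lesions give an AUC of 0.875.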
Dabbakuti, J. R. K. Kumar; Venkata Ratnam, D.
2017-10-01
Precise modeling of the ionospheric Total Electron Content (TEC) is a critical aspect of Positioning, Navigation, and Timing (PNT) services intended for the Global Navigation Satellite Systems (GNSS) applications as well as Earth Observation System (EOS), satellite communication, and space weather forecasting applications. In this paper, linear time series modeling has been carried out on ionospheric TEC at two different locations at Koneru Lakshmaiah University (KLU), Guntur (geographic 16.44° N, 80.62° E; geomagnetic 7.55° N) and Bangalore (geographic 12.97° N, 77.59° E; geomagnetic 4.53° N) at the northern low-latitude region, for the year 2013 in the 24th solar cycle. The impact of the solar and geomagnetic activity on periodic oscillations of TEC has been investigated. Results confirm that the correlation coefficient of the estimated TEC from the linear model TEC and the observed GPS-TEC is around 93%. Solar activity is the key component that influences ionospheric daily averaged TEC while periodic component reveals the seasonal dependency of TEC. Furthermore, it is observed that the influence of geomagnetic activity component on TEC is different at both the latitudes. The accuracy of the model has been assessed by comparing the International Reference Ionosphere (IRI) 2012 model TEC and TEC measurements. Moreover, the absence of winter anomaly is remarkable, as determined by the Root Mean Square Error (RMSE) between the linear model TEC and GPS-TEC. On the contrary, the IRI2012 model TEC evidently failed to predict the absence of winter anomaly in the Equatorial Ionization Anomaly (EIA) crest region. The outcome of this work will be useful for improving the ionospheric now-casting models under various geophysical conditions.
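A minimal sketch of a linear TEC time-series model of the kind this record describes (the paper's exact regressors are not reproduced; a solar proxy, a geomagnetic proxy and annual/semiannual harmonics are assumed as illustrative terms, fitted by ordinary least squares):

```python
import numpy as np

def tec_design_matrix(day, f107, ap):
    """Columns: intercept, solar proxy (e.g. F10.7), geomagnetic proxy
    (e.g. Ap), and annual/semiannual harmonics of day-of-year."""
    w = 2.0 * np.pi * day / 365.25
    return np.column_stack([
        np.ones_like(day), f107, ap,
        np.cos(w), np.sin(w), np.cos(2 * w), np.sin(2 * w),
    ])

def fit_tec_model(day, f107, ap, tec):
    """Least-squares fit of the linear TEC model; returns coefficients."""
    X = tec_design_matrix(day, f107, ap)
    coef, *_ = np.linalg.lstsq(X, tec, rcond=None)
    return coef

# Synthetic self-check: generate TEC from known coefficients (arbitrary
# values, not the paper's) and verify the fit recovers them.
rng = np.random.default_rng(42)
day = np.arange(365.0)
f107 = 100.0 + 30.0 * rng.random(365)
ap = 10.0 * rng.random(365)
true = np.array([5.0, 0.2, 0.05, 3.0, -1.0, 0.5, 0.2])
tec = tec_design_matrix(day, f107, ap) @ true + 0.1 * rng.standard_normal(365)
coef = fit_tec_model(day, f107, ap, tec)
```

The ~93% correlation reported in the record corresponds to computing the correlation between the fitted model's output and the observed GPS-TEC.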
McMonagle, Gerard
2006-01-01
The CERN CTF3 facility is being used to test and demonstrate key technical issues for the CLIC (Compact Linear Collider) study. Pulsed RF power sources are essential elements in this test facility. Klystrons at S-band (2.99855 GHz), in conjunction with pulse compression systems, are used to power the Drive Beam Accelerator (DBA) to achieve an electron beam energy of 150 MeV. The L-band RF system includes broadband Travelling Wave Tubes (TWTs) for beam bunching with 'phase coded' sub-pulses ...
Raber, Jacob; Torres, Eileen Ruth S; Akinyeke, Tunde; Lee, Joanne; Weber Boutros, Sydney J; Turker, Mitchell S; Kronenberg, Amy
2018-04-20
The space radiation environment includes helium (⁴He) ions that may impact brain function. As little is known about the effects of exposures to ⁴He ions on the brain, we assessed the behavioral and cognitive performance of C57BL/6J × DBA2/J F1 (B6D2F1) mice three months following irradiation with ⁴He ions (250 MeV/n; linear energy transfer (LET) = 1.6 keV/μm; 0, 21, 42 or 168 cGy). Sham-irradiated mice and mice irradiated with 21 or 168 cGy showed novel object recognition, but mice irradiated with 42 cGy did not. In the passive avoidance test, mice received a slight foot shock in a dark compartment, and latency to re-enter that compartment was assessed 24 h later. Sham-irradiated mice and mice irradiated with 21 or 42 cGy showed a higher latency on Day 2 than Day 1, but the latency to enter the dark compartment in mice irradiated with 168 cGy was comparable on both days. ⁴He ion irradiation, at 42 and 168 cGy, reduced the levels of the dendritic marker microtubule-associated protein-2 (MAP-2) in the cortex. There was an effect of radiation on apolipoprotein E (apoE) levels in the hippocampus and cortex, with higher apoE levels in mice irradiated at 42 cGy than 168 cGy and a trend towards higher apoE levels in mice irradiated at 21 than 168 cGy. In addition, in the hippocampus, there was a trend towards a negative correlation between MAP-2 and apoE levels. While reduced levels of MAP-2 in the cortex might have contributed to the altered performance in the passive avoidance test, it does not seem sufficient to do so. The higher hippocampal and cortical apoE levels in mice irradiated at 42 than 168 cGy might have served as a compensatory protective response preserving their passive avoidance memory. Thus, there were no alterations in behavioral performance in the open field or depressive-like behavior in the forced swim test, while cognitive impairments were seen in the object recognition and passive avoidance tests, but not in the contextual or cued fear
DEFF Research Database (Denmark)
Salovaara-Moring, Inka
2016-01-01
practice. In particular, mapping environmental damage, endangered species, and human-made disasters has become one focal point for environmental knowledge production. This type of digital map has been highlighted as a processual turn in critical cartography, whereas in related computational journalism... of a geo-visualization within information mapping that enhances embodiment in the experience of the information. InfoAmazonia is defined as a digitally created map-space within which journalistic practice can be seen as dynamic, performative interactions between journalists, ecosystems, space, and species...
Cox, Anicca
2015-01-01
Via interview data focused on instructor practices and values, this study sought to describe some of what performing and visual arts instructors do at the university level to effectively teach disciplinary values through writing. The study's research goals explored how relationships to writing process in visual and performing arts support…
International Nuclear Information System (INIS)
Bhat, M.A.; Abbas, S.M.; Kaul, S.K.; Khan, F.A.; Ashraf Teli, M.
1998-01-01
In 1984, a linear accelerator, Mevatron-74, was installed in the Dept. of Radiation Oncology. Discussed in this paper is our experience with this costly machine, which has not been cost-effective at all with regard to the purpose it was purchased for. This will lay down a guideline for the developing centres of our country to make correct decisions as regards the purchase of radiotherapy equipment, keeping their performance in view
Jifeng Gu; Weijun Wu; Mengwei Huang; Fen Long; Xinhua Liu; Yizhun Zhu
2018-01-01
A method for high-performance liquid chromatography coupled with linear ion trap quadrupole Orbitrap high-resolution mass spectrometry (HPLC-LTQ-Orbitrap MS) was developed and validated for the qualitative and quantitative assessment of Shejin-liyan Granule. According to the fragmentation mechanisms and high-resolution MS data, 54 compounds, including fourteen isoflavones, eleven lignans, eight flavonoids, six physalins, six organic acids, four triterpenoid saponins, two xanthones, two alkaloi...
Directory of Open Access Journals (Sweden)
Khaled Thabet Awad
2016-07-01
Full Text Available This study aims to identify the effect of using digital mind maps on cognitive achievement and the performance level of some basic handball skills. The research population included the 200 first-year students at the Faculty of Physical Education in Port Said. The researchers randomly selected the sample from the first-year students. The total sample size reached 180 students (90.00%) after excluding failed students, re-registered students, students of other curriculum levels, students with previous practical experience and irregular students; the excluded group numbered 20 students (10.00%). The sample was divided into: Basic Sample: 80 students (44.44%), divided into two equal groups of 40 students. First Exploratory Sample: 60 students from the same research population, outside the basic sample, used to establish the validity of the tests (33.33%). Second Exploratory Sample: 40 students from the same research population, outside the basic sample, used to establish the reliability of the tests and to check the appropriateness of the pilot program for the sample under discussion (22.22%). First-year students were selected according to the study plan, which contains a handball curriculum for students at this educational level. Statistical treatment: the researchers processed the data statistically using the Statistical Package for the Social Sciences, SPSS ver. 20.0, to obtain: arithmetic mean, standard deviation, median, skewness coefficient, correlation coefficient, discriminant validity coefficient, one-group "t" test and two-group "t" test. The use of digital mind maps had a positive effect, better than the explanation-and-model method, on cognitive achievement and the performance level of some basic handball skills. Active learning techniques, such as the method of digital mind maps in teaching
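The two-group comparisons mentioned in this record rely on the independent-samples "t" test; a minimal pooled-variance version is sketched below (toy data, not the study's measurements):

```python
import math

def two_sample_t(a, b):
    """Pooled-variance t statistic for two independent groups, as used
    to compare an experimental group against a control group."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled
    return (ma - mb) / math.sqrt(sp2 * (1.0 / na + 1.0 / nb))
```

The statistic is then compared against the t distribution with na + nb - 2 degrees of freedom to decide significance.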
Cattinelli, Isabella; Bolzoni, Elena; Barbieri, Carlo; Mari, Flavio; Martin-Guerrero, José David; Soria-Olivas, Emilio; Martinez-Martinez, José Maria; Gomez-Sanchis, Juan; Amato, Claudia; Stopper, Andrea; Gatti, Emanuele
2012-03-01
The Balanced Scorecard (BSC) is a validated tool to monitor enterprise performance against specific objectives. Through the choice and evaluation of strategic Key Performance Indicators (KPIs), it provides a measure of the company's past outcome and allows planning future managerial strategies. The Fresenius Medical Care (FME) BSC makes use of 30 KPIs for a continuous quality improvement strategy within its dialysis clinics. Each KPI is associated monthly with a score that summarizes the clinic's efficiency for that month. Standard statistical methods are currently used to analyze the BSC data and to give a comprehensive view of the corporate improvements to the top management. We herein propose Self-Organizing Maps (SOMs) as an innovative approach to extrapolate information from the FME BSC data and to present it in an easily readable, informative form. A SOM is a computational technique that allows projecting high-dimensional datasets to a two-dimensional space (map), thus providing a compressed representation. The SOM unsupervised (self-organizing) training procedure results in a map that preserves similarity relations existing in the original dataset; in this way, the information contained in the high-dimensional space can be more easily visualized and understood. The present work demonstrates the effectiveness of the SOM approach in extracting useful information from the 30-dimensional BSC dataset: indeed, SOMs enabled us both to highlight expected relationships between the KPIs and to uncover results not predictable with traditional analyses. Hence we suggest SOMs as a reliable complementary approach to the standard methods for BSC interpretation.
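A minimal SOM sketch illustrating the projection described in this record: 30-dimensional rows (one KPI vector per clinic-month) are mapped to a small 2-D grid while preserving neighborhood similarity. The grid size, learning schedule and the two synthetic clusters are illustrative assumptions, not the FME implementation:

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Self-Organizing Map: learn a (grid[0] x grid[1]) sheet of
    prototype vectors whose topology mirrors the input similarity."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)            # decaying learning rate
        sigma = sigma0 * (1.0 - e / epochs) + 0.5  # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
            nb = np.exp(-np.sum((coords - bmu) ** 2, axis=-1)
                        / (2.0 * sigma ** 2))
            weights += lr * nb[..., None] * (x - weights)
    return weights

def project(weights, x):
    """Map one sample to its best-matching grid cell (row, col)."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training, each clinic-month lands on one cell; similar KPI profiles land on nearby cells, which is what makes the 30-dimensional dataset readable at a glance.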
Guglieri-López, Beatriz; Pérez-Pitarch, Alejandro; Martinez-Gómez, Maria Amparo; Porta-Oltra, Begoña; Climente-Martí, Mónica; Merino-Sanjuán, Matilde
2016-12-01
A wide-linearity-range analytical method for the determination of lenalidomide in patients with multiple myeloma is required for pharmacokinetic studies. Plasma samples were ultrasonicated for protein precipitation. A solid-phase extraction was performed. The eluted samples were evaporated to dryness under vacuum, and the solid obtained was diluted and injected into the high-performance liquid chromatography (HPLC) system. Separation of lenalidomide was performed on an Xterra RP C18 column (250 mm length × 4.6 mm i.d., 5 µm) using a mobile phase consisting of phosphate buffer/acetonitrile (85:15, v/v, pH 3.2) at a flow rate of 0.5 mL·min⁻¹. The samples were monitored at a wavelength of 311 nm. A linear relationship with a good correlation coefficient (r = 0.997, n = 9) was found between the peak area and lenalidomide concentrations in the range of 100 to 950 ng·mL⁻¹. The limits of detection and quantitation were 28 and 100 ng·mL⁻¹, respectively. The intra- and interassay precisions were satisfactory, and the accuracy of the method was proved. In conclusion, the proposed method is suitable for the accurate quantification of lenalidomide in human plasma with a wide linear range, from 100 to 950 ng·mL⁻¹. This is a valuable method for pharmacokinetic studies of lenalidomide in human subjects. © 2016 Society for Laboratory Automation and Screening.
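The calibration statistics reported in this record (slope, correlation coefficient, detection and quantitation limits) follow from an ordinary least-squares line through the standards. In the sketch below the peak areas are invented, and the ICH-style 3.3σ/S and 10σ/S limit definitions are an assumption about how such limits are typically estimated, not a claim about this study's procedure:

```python
import math

def linear_fit(x, y):
    """Least-squares calibration line y = a + b*x with correlation r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    b = sxy / sxx            # slope (detector response per ng/mL)
    a = my - b * mx          # intercept
    r = sxy / math.sqrt(sxx * syy)
    return a, b, r

def lod_loq(sigma_blank, slope):
    """ICH-style limits: LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
    return 3.3 * sigma_blank / slope, 10.0 * sigma_blank / slope
```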
International Nuclear Information System (INIS)
Di Prinzio, R.
1984-01-01
In this work a different system is proposed which is able to verify the absorbed depth dose given at two different depths, the irradiation field homogeneity and its coincidence with the light field of the machine, the source to surface distance used and the beam nominal energy. These radiation field parameters are very important in the tumour treatment and they may help in the determination of the error sources of the absorbed depth dose. The system developed uses a water phantom, LiF thermoluminescent dosemeters and a radiographic film to evaluate such parameters. The postal system developed in this work has been tested in linear accelerators of 4 to 18 MV with good results. (author)
Sagar, Rizwan Ur Rehman; Galluzzi, Massimiliano; Wan, Caihua; Shehzad, Khurram; Navale, Sachin T; Anwar, Tauseef; Mane, Rajaram S; Piao, Hong-Guang; Ali, Abid; Stadler, Florian J
2017-01-18
Here, we present the first observation of magneto-transport properties of graphene foam (GF) composed of a few layers in a wide temperature range of 2-300 K. Large room-temperature linear positive magnetoresistance (PMR ≈ 171% at B ≈ 9 T) has been detected. The largest PMR (∼213%) has been achieved at 2 K under a magnetic field of 9 T, which can be tuned by the addition of poly(methyl methacrylate) to the porous structure of the foam. This remarkable magnetoresistance may be the result of quadratic magnetoresistance. The excellent magneto-transport properties of GF open a way toward three-dimensional graphene-based magnetoelectronic devices.
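The magnetoresistance figures in this record follow from the standard definition, and a least-squares split into linear and quadratic field contributions can be sketched as follows (the field values are synthetic; only the 171% at 9 T figure is taken from the record):

```python
import numpy as np

def pmr_percent(r_b, r_0):
    """Positive magnetoresistance in percent: (R(B) - R(0)) / R(0) * 100,
    so PMR = 171% corresponds to R(9 T) = 2.71 * R(0)."""
    return (r_b - r_0) / r_0 * 100.0

def fit_mr(field, mr):
    """Fit MR(B) = a*|B| + b*B**2 (no intercept, since MR(0) = 0) to
    separate a linear contribution from a quadratic one."""
    X = np.column_stack([np.abs(field), field ** 2])
    coef, *_ = np.linalg.lstsq(X, mr, rcond=None)
    return coef  # (a, b)
```

A dominant `a` with negligible `b` indicates the linear PMR the record reports; a significant `b` would point to the quadratic mechanism it discusses.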
An airborne interferometric SAR system for high-performance 3D mapping
Lange, Martin; Gill, Paul
2009-05-01
With a vertical accuracy better than 1 m and collection rates up to 7000 km²/h, airborne interferometric synthetic aperture radars (InSAR) bridge the gap between spaceborne radar sensors and airborne optical LIDARs. This paper presents the latest generation of X-band InSAR sensors, developed by Intermap Technologies™, which are operated on our four aircraft. The sensors collect data for the NEXTMap® program - a digital elevation model (DEM) with 1 m vertical accuracy for the contiguous U.S., Hawaii, and most of Western Europe. For a successful operation, challenges like reduction of multipath reflections, very high interferometric phase stability, and a precise system calibration had to be mastered. Recent advances in sensor design, comprehensive system automation and diagnostics have increased the sensor reliability to a level where no radar operator is required onboard. Advanced flight planning significantly improved aircraft utilization and acquisition throughput, while reducing operational costs. Highly efficient data acquisition with straight flight lines up to 1200 km is meanwhile daily routine. The collected data pass through our automated processing cluster and finally are edited into our terrain model products. Extensive and rigorous quality control at every step of the workflow is key to maintaining stable vertical accuracies of 1 m and horizontal accuracies of 2 m for our 3D maps. The combination of technical and operational advances presented in this paper enabled Intermap to survey two continents, producing 11 million km² of uniform and accurate 3D terrain data.
Energy Technology Data Exchange (ETDEWEB)
Bohnen, Sebastian, E-mail: s.bohnen@uke.de [University Medical Center Hamburg-Eppendorf, University Heart Center, General and Interventional Cardiology, Hamburg (Germany); Radunski, Ulf K., E-mail: u.radunski@uke.de [University Medical Center Hamburg-Eppendorf, University Heart Center, General and Interventional Cardiology, Hamburg (Germany); Lund, Gunnar K., E-mail: glund@uke.de [University Medical Center Hamburg-Eppendorf, Department of Diagnostic and Interventional Radiology, Hamburg (Germany); Tahir, Enver, E-mail: e.tahir@uke.de [University Medical Center Hamburg-Eppendorf, Department of Diagnostic and Interventional Radiology, Hamburg (Germany); Avanesov, Maxim, E-mail: m.avanesov@uke.de [University Medical Center Hamburg-Eppendorf, Department of Diagnostic and Interventional Radiology, Hamburg (Germany); Stehning, Christian, E-mail: christian.stehning@philips.com [Philips Research, Hamburg (Germany); Schnackenburg, Bernhard, E-mail: bernhard.schnackenburg@philips.com [Philips Healthcare Germany, Hamburg (Germany); Adam, Gerhard, E-mail: g.adam@uke.de [University Medical Center Hamburg-Eppendorf, Department of Diagnostic and Interventional Radiology, Hamburg (Germany); Blankenberg, Stefan, E-mail: s.blankenberg@uke.de [University Medical Center Hamburg-Eppendorf, University Heart Center, General and Interventional Cardiology, Hamburg (Germany); Muellerleile, Kai, E-mail: kamuellerleile@uke.de [University Medical Center Hamburg-Eppendorf, University Heart Center, General and Interventional Cardiology, Hamburg (Germany)
2017-01-15
Background: T1 mapping is a promising diagnostic tool to improve the diagnostic accuracy of cardiovascular magnetic resonance (CMR) in patients with suspected myocarditis. However, there are currently no data on the potential influence of slice orientation on the diagnostic performance of CMR. Thus, we compared the diagnostic performance of global myocardial T1 and extracellular volume (ECV) values to differentiate patients with myocarditis from healthy individuals between different slice orientations. Methods: This study included 48 patients with clinically defined myocarditis and 13 healthy controls who underwent CMR at 1.5 T. A modified Look-Locker inversion-recovery (MOLLI) sequence was used for T1 mapping before and 15 min after administration of 0.075 mmol/kg Gadolinium-BOPTA. T1 mapping was performed on three short-axes and on three long-axes slices, respectively. Native T1, post-contrast T1 and extracellular volume (ECV-BOPTA) maps were calculated using a dedicated plug-in written for the OsiriX software and compared between the mean value of three short-axes slices (3SAX), the central short-axis (1SAX), the mean value of three long-axes slices (3LAX), the four-chamber view (4CH), the three-chamber view (3CH) and the two-chamber view (2CH). Results: There were significantly lower native T1 values on 3LAX (1081 ms (1037–1131 ms)) compared to 3SAX (1107 ms (1069–1143 ms), p = 0.0022) in patients with myocarditis, but not in controls (1026 ms (1009–1059 ms) vs. 1039 ms (1023–1055 ms), p = 0.2719). The areas under the curve (AUC) to discriminate between myocarditis and healthy controls by native myocardial T1 were 0.85 (p < 0.0001) on 3SAX, 0.85 (p < 0.0001) on 1SAX, 0.76 (p = 0.0002) on 3LAX, 0.70 (p = 0.0075) on 4CH, 0.72 (p = 0.0020) on 3CH and 0.75 (p = 0.0003) on 2CH. The AUCs for ECV-BOPTA were 0.83 (p < 0.0001) on 3SAX, 0.82 (p < 0.0001) on 1SAX, 0.77 (p = 0.0005) on 3LAX, 0.71 (p = 0.0079) on 4CH, 0.69 (p = 0.0371) on 3CH and 0.75 (p = 0.0006) on
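ECV maps such as those in this record are conventionally computed from pre/post-contrast relaxation rates and hematocrit; a sketch of the standard formula follows (the input values below are hypothetical illustrations, not the study's measurements):

```python
def ecv(t1_myo_native, t1_myo_post, t1_blood_native, t1_blood_post, hct):
    """Extracellular volume fraction from pre/post-contrast T1 (ms):
    ECV = (1 - Hct) * Delta(1/T1)_myocardium / Delta(1/T1)_blood."""
    d_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_native
    d_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_native
    return (1.0 - hct) * d_myo / d_blood
```

For example, native/post-contrast myocardial T1 of 1100/500 ms, blood T1 of 1600/300 ms and a hematocrit of 0.42 yield an ECV of roughly 0.23, in the range typically discussed for myocardium.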
International Nuclear Information System (INIS)
Bohnen, Sebastian; Radunski, Ulf K.; Lund, Gunnar K.; Tahir, Enver; Avanesov, Maxim; Stehning, Christian; Schnackenburg, Bernhard; Adam, Gerhard; Blankenberg, Stefan; Muellerleile, Kai
2017-01-01
Background: T1 mapping is a promising diagnostic tool to improve the diagnostic accuracy of cardiovascular magnetic resonance (CMR) in patients with suspected myocarditis. However, there are currently no data on the potential influence of slice orientation on the diagnostic performance of CMR. Thus, we compared the diagnostic performance of global myocardial T1 and extracellular volume (ECV) values to differentiate patients with myocarditis from healthy individuals between different slice orientations. Methods: This study included 48 patients with clinically defined myocarditis and 13 healthy controls who underwent CMR at 1.5 T. A modified Look-Locker inversion-recovery (MOLLI) sequence was used for T1 mapping before and 15 min after administration of 0.075 mmol/kg Gadolinium-BOPTA. T1 mapping was performed on three short-axis and three long-axis slices, respectively. Native T1, post-contrast T1 and extracellular volume (ECV-BOPTA) maps were calculated using a dedicated plug-in written for the OsiriX software and compared between the mean value of three short-axis slices (3SAX), the central short-axis (1SAX), the mean value of three long-axis slices (3LAX), the four-chamber view (4CH), the three-chamber view (3CH) and the two-chamber view (2CH). Results: There were significantly lower native T1 values on 3LAX (1081 ms (1037–1131 ms)) compared to 3SAX (1107 ms (1069–1143 ms), p = 0.0022) in patients with myocarditis, but not in controls (1026 ms (1009–1059 ms) vs. 1039 ms (1023–1055 ms), p = 0.2719). The areas under the curve (AUC) to discriminate between myocarditis and healthy controls by native myocardial T1 were 0.85 (p < 0.0001) on 3SAX, 0.85 (p < 0.0001) on 1SAX, 0.76 (p = 0.0002) on 3LAX, 0.70 (p = 0.0075) on 4CH, 0.72 (p = 0.0020) on 3CH and 0.75 (p = 0.0003) on 2CH. The AUCs for ECV-BOPTA were 0.83 (p < 0.0001) on 3SAX, 0.82 (p < 0.0001) on 1SAX, 0.77 (p = 0.0005) on 3LAX, 0.71 (p = 0.0079) on 4CH, 0.69 (p = 0.0371) on 3CH and 0.75 (p = 0.0006) on
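The reported AUC values can be illustrated with a rank-based (Mann-Whitney) estimator, which equals the area under the ROC curve. A minimal sketch using invented native-T1 values, not the study's data:

```python
import numpy as np

def auc_mann_whitney(patients, controls):
    """AUC as the probability that a random patient value exceeds a random control value."""
    patients = np.asarray(patients, dtype=float)
    controls = np.asarray(controls, dtype=float)
    # Count pairwise wins over all (patient, control) pairs; ties count half.
    wins = (patients[:, None] > controls[None, :]).sum()
    ties = (patients[:, None] == controls[None, :]).sum()
    return (wins + 0.5 * ties) / (patients.size * controls.size)

# Illustrative native T1 values in ms (hypothetical, not from the study).
t1_myocarditis = [1107, 1131, 1069, 1020, 1090]
t1_controls = [1039, 1023, 1055, 1030]
print(round(auc_mann_whitney(t1_myocarditis, t1_controls), 2))  # → 0.8
```

The same estimator underlies standard ROC packages; it needs no binning of the T1 values.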
International Nuclear Information System (INIS)
Suwono.
1978-01-01
A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar; if the input signal is bipolar, the negative portion is filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)
Performance analysis of an all-optical OFDM system in presence of non-linear phase noise.
Hmood, Jassim K; Harun, Sulaiman W; Emami, Siamak D; Khodaei, Amin; Noordin, Kamarul A; Ahmad, Harith; Shalaby, Hossam M H
2015-02-23
The potential for higher spectral efficiency has increased the interest in all-optical orthogonal frequency division multiplexing (OFDM) systems. However, the sensitivity of all-optical OFDM to fiber non-linearity, which causes nonlinear phase noise, is still a major concern. In this paper, an analytical model for estimating the phase noise due to self-phase modulation (SPM), cross-phase modulation (XPM), and four-wave mixing (FWM) in an all-optical OFDM system is presented. The phase noise versus power, distance, and number of subcarriers is evaluated by implementing the mathematical model using Matlab. In order to verify the results, an all-optical OFDM system, that uses coupler-based inverse fast Fourier transform/fast Fourier transform without any nonlinear compensation, is demonstrated by numerical simulation. The system employs 29 subcarriers; each subcarrier is modulated by a 4-QAM or 16-QAM format with a symbol rate of 25 Gsymbol/s. The results indicate that the phase variance due to FWM is dominant over those induced by either SPM or XPM. It is also shown that the minimum phase noise occurs at -3 dBm and -1 dBm for 4-QAM and 16-QAM, respectively. Finally, the error vector magnitude (EVM) versus subcarrier power and symbol rate is quantified using both simulation and the analytical model. It turns out that both EVM results are in good agreement with each other.
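The error vector magnitude (EVM) quantified in the final comparison has a standard definition: the RMS error-vector power over the RMS reference power. A minimal sketch for a 4-QAM constellation with assumed additive noise, not the paper's phase-noise simulation:

```python
import numpy as np

def evm_percent(received, reference):
    """EVM (%) = RMS of the error vector over RMS of the reference constellation."""
    received = np.asarray(received)
    reference = np.asarray(reference)
    err_power = np.mean(np.abs(received - reference) ** 2)
    ref_power = np.mean(np.abs(reference) ** 2)
    return 100.0 * np.sqrt(err_power / ref_power)

# Ideal 4-QAM points and noisy received symbols (illustrative noise level).
rng = np.random.default_rng(0)
ideal = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
symbols = ideal[rng.integers(0, 4, 1000)]
received = symbols + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(f"EVM = {evm_percent(received, symbols):.1f}%")
```

With per-component noise std 0.05 and unit-energy-per-component symbols, the EVM comes out near 5%.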
International Nuclear Information System (INIS)
Vretenar, M
2014-01-01
The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristic aspects of linac beam dynamics.
Liou, Yi-Hwa; Daly, Alan J.; Canrinus, Esther T.; Forbes, Cheryl A.; Moolenaar, Nienke M.; Cornelissen, Frank; Van Lare, Michelle; Hsiao, Joyce
2017-01-01
This exploratory study foregrounds the important, but often understudied social side of pre-service teacher development and its relation to teaching performance in one university-based teacher preparation program in the US. We examine the extent to which pre-service elementary teachers' social relationships and perceptions of peer trust and…
Parker, Jason G; Zalusky, Eric J; Kirbas, Cemil
2014-03-01
Accurate mapping of visual function and selective attention using fMRI is important in the study of human performance as well as in presurgical treatment planning of lesions in or near visual centers of the brain. Conjunctive visual search (CVS) is a useful tool for mapping visual function during fMRI because of its greater activation extent compared with high-capacity parallel search processes. The purpose of this work was to develop and evaluate a CVS that was capable of generating consistent activation in the basic and higher level visual areas of the brain by using a high number of distractors as well as an optimized contrast condition. Images from 10 healthy volunteers were analyzed and brain regions of greatest activation and deactivation were determined using a nonbiased decomposition of the results at the hemisphere, lobe, and gyrus levels. The results were quantified in terms of activation and deactivation extent and mean z-statistic. The proposed CVS was found to generate robust activation of the occipital lobe, as well as regions in the middle frontal gyrus associated with coordinating eye movements and in regions of the insula associated with task-level control and focal attention. As expected, the task demonstrated deactivation patterns commonly implicated in the default-mode network. Further deactivation was noted in the posterior region of the cerebellum, most likely associated with the formation of optimal search strategy. We believe the task will be useful in studies of visual and selective attention in the neuroscience community as well as in mapping visual function in clinical fMRI.
Doll, William E.; Bell, David T.; Gamey, T. Jeffrey; Beard, Les P.; Sheehan, Jacob R.; Norton, Jeannemarie
2010-04-01
Over the past decade, notable progress has been made in the performance of airborne geophysical systems for mapping and detection of unexploded ordnance in terrestrial and shallow marine environments. For magnetometer systems, the most significant improvements include development of denser magnetometer arrays and vertical gradiometer configurations. In prototype analyses and recent Environmental Security Technology Certification Program (ESTCP) assessments using new production systems the greatest sensitivity has been achieved with a vertical gradiometer configuration, despite model-based survey design results which suggest that dense total-field arrays would be superior. As effective as magnetometer systems have proven to be at many sites, they are inadequate at sites where basalts and other ferrous geologic formations or soils produce anomalies that approach or exceed those of target ordnance items. Additionally, magnetometer systems are ineffective where detection of non-ferrous ordnance items is of primary concern. Recent completion of the Battelle TEM-8 airborne time-domain electromagnetic system represents the culmination of nearly nine years of assessment and development of airborne electromagnetic systems for UXO mapping and detection. A recent ESTCP demonstration of this system in New Mexico showed that it was able to detect 99% of blind-seeded ordnance items, 81mm and larger, and that it could be used to map in detail a bombing target on a basalt flow where previous airborne magnetometer surveys had failed. The probability of detection for the TEM-8 in the blind-seeded study area was better than that reported for a dense-array total-field magnetometer demonstration of the same blind-seeded site, and the TEM-8 system successfully detected these items with less than half as many anomaly picks as the dense-array total-field magnetometer system.
Li, Zhaoyong; Wang, Fengmei; Niu, Zengyuan; Luo, Xin; Zhang, Gang; Chen, Junhui
2014-05-01
A method of ultra-high-performance liquid chromatography-linear ion trap/Orbitrap high-resolution mass spectrometry (UPLC-LTQ/Orbitrap MS) was established to screen and confirm 24 hormones in cosmetics. Various cosmetic samples were extracted with methanol. The extract was loaded onto a Waters ACQUITY UPLC BEH C18 column (50 mm x 2.1 mm, 1.7 μm) using a gradient elution of acetonitrile/water containing 0.1% (v/v) formic acid for the separation. The accurate mass of the quasi-molecular ion was acquired by full scanning of the electrostatic-field Orbitrap, and rapid screening was carried out on the basis of this accurate mass. The confirmation analysis for targeted compounds was performed with the retention time and qualitative fragments obtained in data-dependent scan mode. Under the optimal conditions, the 24 hormones were routinely detected with mass accuracy errors below 3 x 10⁻⁶ (3 ppm), and good linearities were obtained in their respective linear ranges with correlation coefficients higher than 0.99. The LODs (S/N = 3) of the 24 compounds were determined, and the method was applied to screen for hormones in 50 cosmetic samples. The results demonstrate that the method is a useful tool for the rapid screening and identification of hormones in cosmetics.
Wu, Yungen; Wang, Zhihui; Liang, Mao; Cheng, Hua; Li, Mengyuan; Liu, Liyuan; Wang, Baiyue; Wu, Jinhua; Prasad Ghimire, Raju; Wang, Xuda; Sun, Zhe; Xue, Song; Qiao, Qiquan
2018-05-18
The core plays a crucial role in achieving high performance of linear hole transport materials (HTMs) toward the perovskite solar cells (PSCs). Most studies focused on the development of fused heterocycles as cores for HTMs. Nevertheless, nonfused heterocycles deserve to be studied since they can be easily synthesized. In this work, we reported a series of low-cost triphenylamine HTMs (M101-M106) with different nonfused cores. Results concluded that the introduced core has a significant influence on conductivity, hole mobility, energy level, and solubility of linear HTMs. M103 and M104 with nonfused oligothiophene cores are superior to other HTMs in terms of conductivity, hole mobility, and surface morphology. PSCs based on M104 exhibited the highest power conversion efficiency of 16.50% under AM 1.5 sun, which is comparable to that of spiro-OMeTAD (16.67%) under the same conditions. Importantly, the employment of M104 is highly economical in terms of the cost of synthesis as compared to that of spiro-OMeTAD. This work demonstrated that nonfused heterocycles, such as oligothiophene, are promising cores for high performance of linear HTMs toward PSCs.
Xu, P. Q.; Rault, D. F.; Pawson, S.; Wargan, K.; Bhartia, P. K.
2012-01-01
The Ozone Mapping and Profiler Suite Limb Profiler (OMPS/LP) was launched on board the Suomi NPP space platform in late October 2011. It provides ozone-profiling capability with high vertical resolution from 60 km down to cloud top. In this study, an end-to-end Observing System Simulation Experiment (OSSE) of OMPS/LP ozone is discussed. The OSSE was developed at NASA's Global Modeling and Assimilation Office (GMAO) using the Goddard Earth Observing System (GEOS-5) data assimilation system. The "truth" for this OSSE is built by assimilating MLS profiles and OMI ozone columns, which is known to produce realistic three-dimensional ozone fields in the stratosphere and upper troposphere. OMPS/LP radiances were computed at tangent points determined by an appropriate orbital model. The OMPS/LP forward RT model, Instrument Models (IMs) and EDR retrieval model were introduced and pseudo-observations derived. The resultant synthetic OMPS/LP observations were evaluated against the "truth" and subsequently assimilated into GEOS-5. Comparison of this assimilated dataset with the "truth" enables assessment of the likely uncertainties in 3-D analyses of OMPS/LP data. This study demonstrated the assimilation capabilities of OMPS/LP ozone in GEOS-5, with the monthly, zonal-mean (O-A) smaller than 0.02 ppmv at all levels, the rms(O-A) close to 0.1 ppmv from 100 hPa to 0.2 hPa, and the mean (O-B) around 0.02 ppmv for all levels. The monthly zonal-mean analysis generally agrees to within 2% of the truth, with larger differences of 2-4% (0.1-0.2 ppmv) around 10 hPa close to the North Pole and in the tropical tropopause region, where the difference exceeds 20% due to the very low ozone concentrations. These OSSEs demonstrated that, within a single data assimilation system and under the assumption that assimilated MLS observations provide a true rendition of the stratosphere, the OMPS/LP ozone data are likely to produce accurate analyses through much of the stratosphere.
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparison with the logic circuit method, and we compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the generator's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). The Berlekamp-Massey algorithm, in contrast, needs O(N²), where N (≈ 2^n) denotes the period. Since existing methods operate on the output sequence, the initial value of the PRNG influences the resulting value of linear complexity, so linear complexity is generally given only as an estimate. A linearization method, by contrast, works from the algorithm of the PRNG itself and can therefore determine the lower bound of the linear complexity.
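For contrast with the linearization method, the Berlekamp-Massey algorithm mentioned above computes linear complexity directly from an output sequence. A compact GF(2) implementation:

```python
def linear_complexity(seq):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR generating seq."""
    n = len(seq)
    c = [0] * n  # current connection polynomial
    b = [0] * n  # previous connection polynomial
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy: does the current LFSR predict bit i?
        d = seq[i]
        for j in range(1, L + 1):
            d ^= c[j] & seq[i - j]
        if d:
            t = c[:]
            # c(x) <- c(x) + x^(i-m) * b(x)
            for j in range(n - (i - m)):
                c[(i - m) + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# An m-sequence of period 7 from x^3 + x + 1 has linear complexity 3.
print(linear_complexity([1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1]))  # → 3
```

This is the O(N²) sequence-based route the abstract compares against; its answer depends on which output segment (and hence which initial state) is observed.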
Polymorphism screening and mapping of nine meat performance-related genes in the pig
Czech Academy of Sciences Publication Activity Database
Horák, Pavel; Stratil, Antonín; Svatoňová, Martina; Maštálková, Lucie; Patáková, Jitka; Van Poucke, M.; Bartenschlager, H.; Peelman, L. J.; Geldermann, H.
2010-01-01
Roč. 41, č. 3 (2010), s. 334-335 ISSN 0268-9146 R&D Projects: GA AV ČR KJB500450801; GA ČR GA523/09/0844; GA ČR(CZ) GA523/06/1302 Institutional research plan: CEZ:AV0Z50450515 Keywords : genomics * meat performance-related genes * pig Subject RIV: GI - Animal Husbandry ; Breeding Impact factor: 2.203, year: 2010
Energy Technology Data Exchange (ETDEWEB)
Egbe, Daniel A.M.; Adam, Getachew; Pivrikas, Almantas; Ulbricht, Christoph; Ramil, Alberto M.; Sariciftci, Niyazi Serdar [Johannes Kepler Univ., Linz (AT). Linz Inst. for Organic Solar Cells (LIOS); Hoppe, Harald [Technische Univ. Ilmenau (Germany). Inst. of Physics and Inst. of Micro- and Nanotechnologies; Rathgeber, Silke [Mainz Univ. (Germany). Inst. of Physics
2010-07-01
The random distribution of segments of linear octyloxy side chains and of branched 2-ethylhexyloxy side chains on the backbone of anthracene-containing poly(p-phenylene-ethynylene)-alt-poly(p-phenylene-vinylene) (PPE-PPV) has resulted in a side-chain-based statistical copolymer, denoted AnE-PVstat, showing optimized features as compared to the well-defined homologues AnE-PVaa, -ab, -ba and -bb, whose constitutional units are incorporated into its backbone. WAXS studies on AnE-P's demonstrate the highest degree of order in the self-assembled state for AnE-PVstat, which is confirmed by its highly structured thin-film absorption band. Electric-field-independent charge carrier mobility (μ_hole) for AnE-PVstat was demonstrated by CELIV and OFET measurements, both methods resulting in similar μ_hole values of up to 5.43 x 10⁻⁴ cm²/Vs. Upon comparison, our results show that charge carrier mobility as measured by the CELIV technique is predominantly an intrachain process and less an interchain one, which is in line with past photoconductivity results from PPE-PPV-based materials. The present side-chain distribution favors efficient phase separation in the solar cell active layer. As a result, a smaller amount of PC60BM is needed to achieve relatively high energy conversion efficiencies above 3%. The efficiency of η_AM1.5 ≈ 3.8% obtained for the AnE-PVstat:PC60BM blend is presently the state-of-the-art value for PPV-based materials. (orig.)
Khan, Wasi Z.; Al Zubaidy, Sarim
2017-01-01
The variance in students' academic performance in a civilian institute and in a military technological institute could be linked to the environment of the competition available to the students. The magnitude of talent, domain of skills and volume of efforts students put are identical in both type of institutes. The significant factor is the…
Directory of Open Access Journals (Sweden)
Jixiang Fan
2015-09-01
In this paper, a map-based optimal energy management strategy is proposed to improve the energy-consumption economy of a plug-in parallel hybrid electric vehicle. In the design of the maps, which provide both the torque split between engine and motor and the gear shift, not only the current vehicle speed and power demand but also the optimality based on the predicted trajectory of the vehicle dynamics are considered. To seek this optimality, the equivalent consumption, which trades off fuel and electricity usage, is chosen as the cost function. Moreover, in order to decrease model errors in the optimization conducted in the discrete time domain, a variational integrator is employed to calculate the evolution of the vehicle dynamics. To evaluate the proposed energy management strategy, simulation results obtained on a professional GT-SUITE simulator are presented, and a comparison with a real-time optimization method is also given to show the advantage of the proposed off-line optimization approach.
Li, Cheng; Pan, Xinyi; Ying, Kui; Zhang, Qiang; An, Jing; Weng, Dehe; Qin, Wen; Li, Kuncheng
2009-11-01
The conventional phase-difference method for MR thermometry suffers from disturbances caused by the presence of lipid protons, motion-induced error, and field drift. A signal model is presented for a multi-echo gradient echo (GRE) sequence that uses the fat signal as an internal reference to overcome these problems. The internal-reference signal model is fit to the water and fat signals by the extended Prony algorithm and the Levenberg-Marquardt algorithm to estimate the chemical shifts between water and fat, which contain the temperature information. A noise analysis of the signal model was conducted using the Cramer-Rao lower bound to evaluate the noise performance of various algorithms, the effects of imaging parameters, and the influence of the water:fat signal ratio in a sample on the temperature estimate. Comparison of the calculated temperature map with thermocouple temperature measurements shows that the maximum temperature estimation error is 0.614 degrees C, with a standard deviation of 0.06 degrees C, confirming the feasibility of this model-based temperature mapping method. The influence of the sample water:fat signal ratio on the accuracy of the temperature estimate was evaluated in a water-fat mixed phantom experiment, with an optimal ratio of approximately 0.66:1. (c) 2009 Wiley-Liss, Inc.
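For reference, the conventional phase-difference (PRF-shift) method that this abstract improves upon maps a phase change between two acquisitions to a temperature change as ΔT = Δφ / (2π · γ · α · B0 · TE). A sketch with commonly assumed constants (γ = 42.58 MHz/T, α ≈ -0.01 ppm/°C); the example numbers are illustrative, not from the paper:

```python
import math

GAMMA_HZ_PER_T = 42.58e6     # proton gyromagnetic ratio / 2*pi (assumed value)
ALPHA_PPM_PER_C = -0.01      # PRF thermal coefficient of water (assumed value)

def delta_temperature(dphi_rad, b0_tesla, te_s):
    """Temperature change (deg C) from a phase change dphi (rad) at field B0 and echo time TE."""
    return dphi_rad / (2 * math.pi * GAMMA_HZ_PER_T * ALPHA_PPM_PER_C * 1e-6
                       * b0_tesla * te_s)

# Example: 3 T scanner, TE = 10 ms, phase change of -0.08 rad → about +1 deg C.
print(round(delta_temperature(-0.08, 3.0, 0.010), 2))  # → 1.0
```

The lipid-proton, motion, and drift sensitivities discussed above all enter through the measured Δφ, which is what motivates the fat-referenced model.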
Guo, Mengchao; Zhou, Kan; Wang, Xiaokun; Zhuang, Haiyan; Tang, Dongming; Zhang, Baoshan; Yang, Yi
2018-04-01
In this paper, the impact of coupling between unit cells on the performance of a linear-to-circular polarization conversion metamaterial with half transmission and half reflection is analyzed by changing the distance between the unit cells. An equivalent electrical circuit model is then built to explain the behavior based on this analysis. The simulated results show that, when the distance between the unit cells is 23 mm, the metamaterial converts half of the incident linearly-polarized wave into a reflected left-hand circularly-polarized wave and the other half into a transmitted left-hand circularly-polarized wave at 4.4 GHz; when the distance is 28 mm, the metamaterial reflects all of the incident linearly-polarized wave at 4.4 GHz; and when the distance is 32 mm, the metamaterial converts half of the incident linearly-polarized wave into a reflected right-hand circularly-polarized wave and the other half into a transmitted right-hand circularly-polarized wave at 4.4 GHz. Tunability is thus realized successfully. The analysis shows that changes in the coupling between unit cells lead to the changes in the performance of the metamaterial, so this coupling is included when building the equivalent electrical circuit model. The resulting model explains the simulated results well, which confirms its validity, and it can also aid the design of tunable polarization conversion metamaterials.
Azadi, Sama; Karimi-Jashni, Ayoub
2016-02-01
Predicting the mass of solid waste generation plays an important role in integrated solid waste management plans. In this study, the performance of two predictive models, Artificial Neural Network (ANN) and Multiple Linear Regression (MLR), was verified for predicting the mean Seasonal Municipal Solid Waste Generation (SMSWG) rate. The accuracy of the proposed models is illustrated through a case study of 20 cities located in Fars Province, Iran. Four performance measures, MAE, MAPE, RMSE and R, were used to evaluate the performance of these models. The MLR, as a conventional model, showed poor prediction performance. On the other hand, the results indicated that the ANN model, as a non-linear model, has higher predictive accuracy for the mean SMSWG rate. As a result, in order to develop a more cost-effective strategy for waste management in the future, the ANN model could be used to predict the mean SMSWG rate. Copyright © 2015 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Singh, Kunwar P., E-mail: kpsingh_52@yahoo.com [Environmental Chemistry Division, Indian Institute of Toxicology Research (Council of Scientific and Industrial Research), Post Box No. 80, MG Marg, Lucknow-226 002, UP (India); Basant, Nikita [School of Graduate Studies-Multiscale Modeling, Computational Simulations and Characterization in Material and Life Sciences, University of Modena and Reggio E., Modena (Italy); Malik, Amrita; Jain, Gunja [Environmental Chemistry Division, Indian Institute of Toxicology Research (Council of Scientific and Industrial Research), Post Box No. 80, MG Marg, Lucknow-226 002, UP (India)
2010-01-18
The paper describes linear and nonlinear modeling of wastewater data for the performance evaluation of an up-flow anaerobic sludge blanket (UASB) reactor based wastewater treatment plant (WWTP). Partial least squares regression (PLSR), multivariate polynomial regression (MPR) and artificial neural networks (ANNs) modeling methods were applied to predict the levels of biochemical oxygen demand (BOD) and chemical oxygen demand (COD) in the UASB reactor effluents using four input variables measured weekly in the influent wastewater during the peak (morning and evening) and non-peak (noon) hours over a period of 48 weeks. The performance of the models was assessed through the root mean squared error (RMSE), relative error of prediction in percentage (REP), the bias, the standard error of prediction (SEP), the coefficient of determination (R²), the Nash-Sutcliffe coefficient of efficiency (E_f), and the accuracy factor (A_f), computed from the measured and model-predicted values of the dependent variables (BOD, COD) in the WWTP effluents. Goodness of the model fit to the data was also evaluated through the relationship between the residuals and the model-predicted values of BOD and COD. Although the model-predicted values of BOD and COD by all three modeling approaches (PLSR, MPR, ANN) were in good agreement with their respective measured values in the WWTP effluents, the nonlinear models (MPR, ANNs) performed relatively better than the linear ones. These models can be used as tools for the performance evaluation of WWTPs.
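Two of the performance measures named above, RMSE and the Nash-Sutcliffe coefficient of efficiency, have standard closed forms. A minimal sketch with invented effluent BOD values, not the study's data:

```python
import numpy as np

def rmse(obs, pred):
    """Root mean squared error between observed and predicted values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def nash_sutcliffe(obs, pred):
    """E_f = 1 - SSE / SS_tot; 1 is a perfect fit, 0 matches the observed mean."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2))

# Hypothetical effluent BOD values in mg/L (illustrative only).
measured = [42.0, 38.5, 45.1, 40.2, 37.8]
predicted = [41.2, 39.0, 44.0, 41.5, 38.3]
print(rmse(measured, predicted), nash_sutcliffe(measured, predicted))
```

An E_f near 1 indicates the model explains most of the observed variance, which is the sense in which the nonlinear models above "performed relatively better".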
Blyth, T S
2002-01-01
Most of the introductory courses on linear algebra develop the basic theory of finite-dimensional vector spaces, and in so doing relate the notion of a linear mapping to that of a matrix. Generally speaking, such courses culminate in the diagonalisation of certain matrices and the application of this process to various situations. Such is the case, for example, in our previous SUMS volume Basic Linear Algebra. The present text is a continuation of that volume, and has the objective of introducing the reader to more advanced properties of vector spaces and linear mappings, and consequently of matrices. For readers who are not familiar with the contents of Basic Linear Algebra we provide an introductory chapter that consists of a compact summary of the prerequisites for the present volume. In order to consolidate the student's understanding we have included a large number of illustrative and worked examples, as well as many exercises that are strategically placed throughout the text. Solutions to the ex...
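The diagonalisation A = P D P⁻¹ that such courses culminate in can be checked numerically. A small sketch using NumPy with an arbitrary 2x2 example matrix:

```python
import numpy as np

# Diagonalisation A = P D P^{-1}: columns of P are eigenvectors, D holds eigenvalues.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# Reassemble A from its diagonalisation to verify the factorization.
A_rebuilt = P @ D @ np.linalg.inv(P)
print(np.allclose(A, A_rebuilt))  # True
```

For this matrix the eigenvalues are 2 and 5 (trace 7, determinant 10), so powers of A, for instance, reduce to powers of D in the same basis.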
International Nuclear Information System (INIS)
Horodecki, Pawel
2003-01-01
The possibility of some nonlinear-like operations in quantum mechanics is studied. General formulas for real linear maps are derived. With these results we show how to physically perform separability tests based on any linear contraction (on product states) that is either real or Hermitian. We also show how to estimate either products or linear combinations of quantum states without knowledge of the states themselves. This can be viewed as a sort of quantum computing on the quantum states algebra.
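A well-known example of such a test is the partial transpose, a positive but not completely positive real linear map whose negative eigenvalues certify entanglement (the Peres-Horodecki PPT criterion). A minimal two-qubit sketch, offered as an illustration rather than the paper's construction:

```python
import numpy as np

def partial_transpose(rho, d=2):
    """Transpose the second subsystem of a (d*d x d*d) density matrix: a real linear map."""
    r = rho.reshape(d, d, d, d)              # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(d * d, d * d)  # swap b <-> b'

def ppt_separable(rho):
    """PPT criterion: a negative eigenvalue of rho^{T_B} certifies entanglement."""
    return bool(np.min(np.linalg.eigvalsh(partial_transpose(rho))) >= -1e-12)

# Bell state (|00> + |11>)/sqrt(2) (entangled) vs. the maximally mixed state (separable).
bell = np.zeros((4, 4))
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
print(ppt_separable(bell), ppt_separable(np.eye(4) / 4))  # → False True
```

For two qubits the PPT test is conclusive; in higher dimensions a PPT state may still be entangled, which is where more general linear contractions come in.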
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...
Alessandri, Elena; Williamson, Victoria J; Eiholzer, Hubert; Williamon, Aaron
2015-01-01
Critical reviews offer rich data that can be used to investigate how musical experiences are conceptualized by expert listeners. However, these data also present significant challenges in terms of organization, analysis, and interpretation. This study presents a new systematic method for examining written responses to music, tested on a substantial corpus of music criticism. One hundred critical reviews of Beethoven's piano sonata recordings, published in the Gramophone between August 1934 and July 2010, were selected using in-depth data reduction (qualitative/quantitative approach). The texts were then examined using thematic analysis in order to generate a visual descriptive model of expert critical review. This model reveals how the concept of evaluation permeates critical review. It also distinguishes between two types of descriptors. The first characterizes the performance in terms of specific actions or features of the musical sound (musical parameters, technique, and energy); the second appeals to higher-order properties (artistic style, character and emotion, musical structure, communicativeness) or assumed performer qualities (understanding, intentionality, spontaneity, sensibility, control, and care). The new model provides a methodological guide and conceptual basis for future studies of critical review in any genre.
Directory of Open Access Journals (Sweden)
Yalin Wu
2014-01-01
Full Text Available The temporal complexity of a video sequence can be characterized by its motion vector map, which consists of the motion vectors of each macroblock (MB). In order to obtain the optimal initial QP (quantization parameter) for video sequences with different spatial and temporal complexities, this paper proposes a simple, high-performance method that determines the initial QP for a given target bit rate based on the motion vector map and the temporal complexity. The proposed algorithm produces reconstructed video sequences with outstanding and stable quality. For any video sequence, the initial QP can be readily determined from matrices indexed by target bit rate and mapped spatial complexity using the proposed mapping method. Experimental results show that the proposed algorithm achieves better objective and subjective performance than other conventional determination methods.
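The matrix-lookup idea in the abstract above can be sketched roughly as follows; the bin edges, QP table values and the complexity measure are invented placeholders, not the paper's actual matrices or mapping:

```python
import numpy as np

BITRATE_BINS = [256, 512, 1024, 2048]   # kbps class upper edges (assumed)
COMPLEXITY_BINS = [0.2, 0.5, 0.8]       # normalized complexity edges (assumed)

# Rows: bit-rate class (low -> high); columns: complexity class (low -> high).
# Higher bit rate or lower complexity allows a smaller initial QP.
QP_TABLE = np.array([
    [38, 40, 42, 44],
    [34, 36, 38, 40],
    [30, 32, 34, 36],
    [26, 28, 30, 32],
    [22, 24, 26, 28],
])

def motion_complexity(mv_map):
    """Mean motion-vector magnitude over the MB map, squashed into [0, 1)."""
    mags = np.linalg.norm(mv_map, axis=1)
    m = mags.mean()
    return m / (m + 1.0)

def initial_qp(target_kbps, mv_map):
    r = int(np.searchsorted(BITRATE_BINS, target_kbps))
    c = int(np.searchsorted(COMPLEXITY_BINS, motion_complexity(mv_map)))
    return int(QP_TABLE[r, c])

mv = np.random.default_rng(0).normal(0, 4, size=(99, 2))  # 99 MBs, (dx, dy)
qp = initial_qp(1500, mv)  # initial QP for ~1.5 Mbps, given this motion map
```

The table encodes the same trade-off the abstract describes: a temporally busy sequence at a low target bit rate starts from a coarser quantizer.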
Directory of Open Access Journals (Sweden)
Bryan D. Devan
2015-09-01
Full Text Available The publication of a recent article in F1000Research has led to discussion of, and correspondence on a broader issue that has a long history in the fields of neuroscience and psychology. Namely, is it possible to separate the cognitive components of performance, in this case spatial behavior, from the motoric demands of a task? Early psychological experiments attempted such a dissociation by studying a form of spatial maze learning where initially rats were allowed to explore a complex maze, termed “latent learning,” before reinforcement was introduced. Those rats afforded the latent learning experience solved the task faster than those that were not, implying that cognitive map learning during exploration aided in the performance of the task once a motivational component was introduced. This form of latent learning was interpreted as successfully demonstrating that an exploratory cognitive map component was acquired irrespective of performing a learned spatial response under deprivation/motivational conditions. The neural substrate for cognitive learning was hypothesized to depend on place cells within the hippocampus. Subsequent behavioral studies attempted to directly eliminate the motor component of spatial learning by allowing rats to passively view the distal environment before performing any motor response using a task that is widely considered to be hippocampal-dependent. Latent learning in the water maze, using a passive placement procedure has met with mixed results. One constraint on viewing cues before performing a learned swimming response to a hidden goal has been the act of dynamically viewing distal cues while moving through a part of the environment where an optimal learned spatial escape response would be observed. We briefly review these past findings obtained with adult animals to the recent efforts of establishing a “behavioral topology” separating cognitive-spatial learning from tasks differing in
Mapping the knowledge base for maritime health: 4 safety and performance at sea.
Carter, Tim
2011-01-01
There is very little recent investigative work on the contribution of health-related impairment and disability to either accident risks or to reduced performance at sea, the only exception being studies on fatigue and parallel data on sleep-related incidents. Incidents where health-related impairment, other than fatigue, has contributed are very rarely found in reports of maritime accident investigations. This may either indicate the irrelevance of these forms of impairment to accidents or alternatively point to the effectiveness of existing control measures. The main approach to risk reduction is the application of fitness criteria to seafarers during medical examinations. Where there is a knowledge base it is either, as in the case of vision, a very old one that relates to patterns of visual tasks that differ markedly from those in modern shipping or, as with hearing, is based on untested assumptions about the levels of impairment that will prevent effective communications at sea. There are practical limitations to the assessment of cognitive functions, as these encompass such a wide range of impairments, from those associated with fatigue, medication, or substance abuse to those relating to age or to the risks of sudden incapacitation from a pre-existing illness. Physical capability can be assessed, but only in limited ways in the course of a medical examination. In the absence of clear evidence of accident risks associated with health-related impairments or disabilities, it is unlikely that there will be pressure to update criteria that appear to be providing satisfactory protection. As capability is related to the tasks performed, investigations need to integrate information on ergonomic and organizational aspects with that on health and impairment. Criteria that may select seafarers with health-related impairment need to be reviewed wherever the task demands in modern shipping have changed, in order to relax or modify them where indicated in order to reduce
Metscher, Jonathan F.; Lewandowski, Edward J.
2013-01-01
A simple model of the Advanced Stirling Convertors (ASC) linear alternator and an AC bus controller has been developed and combined with a previously developed thermodynamic model of the convertor for a more complete simulation and analysis of the system performance. The model was developed using Sage, a 1-D thermodynamic modeling program that now includes electro-magnetic components. The convertor, consisting of a free-piston Stirling engine combined with a linear alternator, has sufficiently sinusoidal steady-state behavior to allow for phasor analysis of the forces and voltages acting in the system. A MATLAB graphical user interface (GUI) has been developed to interface with the Sage software for simplified use of the ASC model, calculation of forces, and automated creation of phasor diagrams. The GUI allows the user to vary convertor parameters while fixing different input or output parameters and observe the effect on the phasor diagrams or system performance. The new ASC model and GUI help create a better understanding of the relationship between the electrical component voltages and mechanical forces. This allows better insight into the overall convertor dynamics and performance.
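The phasor bookkeeping that the GUI automates can be illustrated with ordinary complex arithmetic. All numbers below are arbitrary assumptions for a generic free-piston machine, not ASC parameters:

```python
import numpy as np

# At sinusoidal steady state the piston motion is x(t) = Re{X e^{jwt}}, so the
# inertia, damping and spring forces become complex phasors that must sum to
# the drive force. Values are invented for illustration.
w = 2 * np.pi * 100.0           # operating frequency, rad/s (assumed)
X = 5e-3                        # piston amplitude, m (assumed)
m, c, k = 0.5, 20.0, 5.0e4      # moving mass, damping, spring rate (assumed)

x = X + 0j                      # displacement phasor (reference, angle 0)
v = 1j * w * x                  # velocity leads displacement by 90 degrees
a = -(w ** 2) * x               # acceleration opposes displacement

f_inertia = m * a
f_damp = c * v
f_spring = k * x
f_drive = f_inertia + f_damp + f_spring   # required net drive-force phasor

magnitude = abs(f_drive)
phase_deg = np.degrees(np.angle(f_drive))
```

Plotting each of these complex numbers as an arrow from the origin reproduces the kind of phasor diagram the GUI draws, and varying `w` or `k` shows how the force balance shifts.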
Ary Analisa Rahma
2014-01-01
The Effect of a Mind-Map-Aided Learning Cycle Model on Physics Achievement in Terms of the Laboratory Performance of Grade VIII Students at SMPN 1 Rejoso, Pasuruan Regency. Abstract: This study aimed to examine the effect of the learning cycle model, aided by mind maps, on learning achievement in terms of laboratory performance in physics for grade VIII students studying light at SMP Negeri 1 Rejoso, Pasuruan. This study is a quasi-experimental research. The research design used is a 2 x 2...
Stoll, R R
1968-01-01
Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be useful both to students of mathematics and to those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understand
Van Rheenen, Tamsyn E; Bryce, Shayden; Tan, Eric J; Neill, Erica; Gurvich, Caroline; Louise, Stephanie; Rossell, Susan L
2016-03-01
Despite known overlaps in the pattern of cognitive impairments in individuals with bipolar disorder (BD), schizophrenia (SZ) and schizoaffective disorder (SZA), few studies have examined the extent to which cognitive performance validates traditional diagnostic boundaries in these groups. Individuals with SZ (n=49), schizoaffective disorder (n=33) and BD (n=35) completed a battery of cognitive tests measuring the domains of processing speed, immediate memory, semantic memory, learning, working memory, executive function and sustained attention. A discriminant functions analysis revealed a significant function comprising semantic memory, immediate memory and processing speed that maximally separated patients with SZ from those with BD. Initial classification scores on the basis of this function showed modest diagnostic accuracy, owing in part to the misclassification of SZA patients as having SZ. When SZA patients were removed from the model, a second cross-validated classifier yielded slightly improved diagnostic accuracy and a single-function solution, onto which semantic memory loaded most heavily. A cluster of non-executive cognitive processes appears to have some validity in mapping onto traditional nosological boundaries. However, since semantic memory performance was the primary driver of the discrimination between BD and SZ, it is possible that performance differences between the disorders in this cognitive domain in particular index separate underlying aetiologies. Copyright © 2015 Elsevier B.V. All rights reserved.
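A discriminant function of the kind reported above can be sketched with a plain Fisher linear discriminant on synthetic data; the group shift and the simulated "domain" scores below are invented for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
domains = ["semantic_memory", "immediate_memory", "processing_speed"]

X_bd = rng.normal(0.0, 1.0, size=(35, 3))   # synthetic "BD" domain scores
X_sz = rng.normal(-1.2, 1.0, size=(49, 3))  # synthetic "SZ" scores, shifted down

# Fisher discriminant: w maximizes between-group over within-group scatter.
m_bd, m_sz = X_bd.mean(0), X_sz.mean(0)
Sw = np.cov(X_bd, rowvar=False) * (35 - 1) + np.cov(X_sz, rowvar=False) * (49 - 1)
w = np.linalg.solve(Sw, m_bd - m_sz)

# Project onto the single function and classify at the midpoint of group means.
threshold = 0.5 * ((X_bd @ w).mean() + (X_sz @ w).mean())
correct = ((X_bd @ w) > threshold).sum() + ((X_sz @ w) <= threshold).sum()
accuracy = correct / (35 + 49)

loadings = dict(zip(domains, w))  # which domains drive the separation
```

The `loadings` dictionary plays the role of the function weights discussed in the abstract: the larger a domain's weight, the more it drives the BD/SZ separation.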
Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.
Liesen, Jörg
2015-01-01
This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Chandler, T L; Pralle, R S; Dórea, J R R; Poock, S E; Oetzel, G R; Fourdraine, R H; White, H M
2018-03-01
Although cowside testing strategies for diagnosing hyperketonemia (HYK) are available, many are labor intensive and costly, and some lack sufficient accuracy. Predicting milk ketone bodies by Fourier transform infrared spectrometry during routine milk sampling may offer a more practical monitoring strategy. The objectives of this study were to (1) develop linear and logistic regression models using all available test-day milk and performance variables for predicting HYK and (2) compare prediction methods (Fourier transform infrared milk ketone bodies, linear regression models, and logistic regression models) to determine which is the most predictive of HYK. Given the data available, a secondary objective was to evaluate differences in test-day milk and performance variables (continuous measurements) between Holsteins and Jerseys and between cows with or without HYK within breed. Blood samples were collected on the same day as milk sampling from 658 Holstein and 468 Jersey cows between 5 and 20 d in milk (DIM). Diagnosis of HYK was at a serum β-hydroxybutyrate (BHB) concentration ≥1.2 mmol/L. Concentrations of milk BHB and acetone were predicted by Fourier transform infrared spectrometry (Foss Analytical, Hillerød, Denmark). Thresholds of milk BHB and acetone were tested for diagnostic accuracy, and logistic models were built from continuous variables to predict HYK in primiparous and multiparous cows within breed. Linear models were constructed from continuous variables for primiparous and multiparous cows within breed that were 5 to 11 DIM or 12 to 20 DIM. Milk ketone body thresholds diagnosed HYK with 64.0 to 92.9% accuracy in Holsteins and 59.1 to 86.6% accuracy in Jerseys. Logistic models predicted HYK with 82.6 to 97.3% accuracy. Internally cross-validated multiple linear regression models diagnosed HYK of Holstein cows with 97.8% accuracy for primiparous and 83.3% accuracy for multiparous cows. Accuracy of Jersey models was 81.3% in primiparous and 83
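A minimal sketch of the logistic-model idea follows, fitting a logistic regression by gradient ascent on simulated test-day data. The predictors, coefficients and prevalence are invented; the real models used breed- and parity-specific sets of test-day variables:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
milk_bhb = rng.gamma(2.0, 0.05, n)      # FTIR-predicted milk BHB, mmol/L (simulated)
milk_yield = rng.normal(30.0, 5.0, n)   # test-day milk yield, kg/d (simulated)

# Simulated HYK status: serum BHB >= 1.2 mmol/L is more likely when milk BHB
# is high and yield is low (coefficients invented for illustration).
true_logit = 40.0 * milk_bhb - 0.05 * milk_yield - 2.5
hyk = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Standardize predictors, then fit by plain gradient ascent on the log-likelihood.
Z = np.column_stack([milk_bhb, milk_yield])
Z = (Z - Z.mean(0)) / Z.std(0)
X = np.column_stack([np.ones(n), Z])
beta = np.zeros(3)
for _ in range(1000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.2 * X.T @ (hyk - p) / n

accuracy = ((1 / (1 + np.exp(-X @ beta)) >= 0.5) == hyk).mean()
```

In practice one would report cross-validated accuracy, as the study does, rather than the in-sample figure computed here.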
Energy Technology Data Exchange (ETDEWEB)
Holland, D.M.P., E-mail: david.holland@stfc.ac.uk [Daresbury Laboratory, Daresbury, Warrington, Cheshire WA4 4AD (United Kingdom); Shaw, D.A. [Daresbury Laboratory, Daresbury, Warrington, Cheshire WA4 4AD (United Kingdom)
2012-12-10
Highlights: • Velocity map imaging spectrometer optimised for molecular photoionisation dynamics. • Kinetic energy distribution of O⁺ fragments measured. • Effect of autoionisation on photoelectron vibrational populations studied. -- Abstract: The design, construction and performance of a velocity map imaging spectrometer for the study of molecular photoionisation dynamics is described. The spectrometer has been optimised for the efficient collection and detection of particles (electrons or positively charged ions) generated through the interaction of gas phase molecules with synchrotron radiation. A double Einzel lens, incorporated into the flight tube, enhances the collection efficiency of energetic particles. Computer modelling has been used to trace the trajectories of charged particles through the spectrometer and to assess the image quality. A time and position sensitive delay-line detector is used to record the images. Results from two experimental studies are presented to illustrate the capabilities of the spectrometer. In the first, the effect of electronic autoionisation on the vibrationally resolved photoelectron branching ratios of the N₂⁺ X ²Σg⁺ state has been investigated in an excitation range where prominent structure due to Rydberg states occurs in the ion yield curve. The results show that autoionisation leads to rotational branch populations that differ from those observed in direct, non-resonant, photoionisation. In the second, the kinetic energy distribution and the angular distribution of O⁺ fragments formed in the dissociative photoionisation of molecular oxygen have been measured. The timing properties of the detector have allowed O⁺ fragments to be separated from O₂⁺ parent ions using time-of-flight techniques.
Directory of Open Access Journals (Sweden)
Farshad Fathian
2017-01-01
(2 years were chosen for validation, subjectively. As the data have seasonal cycles, statistical indices (such as the mean and standard deviation of daily discharge) were estimated using Fourier series. Then ARMA and two- and three-regime SETAR models were applied to the standardized daily river flow time series, and several performance criteria were used to evaluate model accuracy. In other words, linear and non-linear models (ARMA and two- and three-regime SETAR models) were fitted to the observed river flows, and the parameters associated with the models, e.g. the threshold value for the SETAR model, were estimated. Finally, the fitted linear and non-linear models were compared using the Akaike Information Criterion (AIC), Root Mean Square Error (RMSE) and Sum of Squared Residuals (SSR) criteria. The Ljung-Box test was used to check the adequacy of the fitted models. Results and Discussion: To a certain degree, the results for the river flow data of the study area indicate that threshold models may be appropriate for modeling and forecasting the streamflows of rivers located upstream of Zarrineh Roud dam. According to the evaluation criteria of the fitted models, the performance of the two- and three-regime SETAR models is slightly better than that of the ARMA model at all selected stations. Moreover, comparison of the SETAR models showed that the three-regime SETAR model has better evaluation criteria than the two-regime SETAR model at all stations except Ghabghablou station. Conclusion: In the present study, we attempted to model the daily streamflows of the Zarrineh Rood Basin rivers, located south of Urmia Lake, by applying ARMA and two- and three-regime SETAR models, mainly because very few efforts have been made with this non-linear approach in hydrology and water resources engineering. Therefore, two types of data-driven models were used for modeling and forecasting daily streamflow: (i
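The two-regime SETAR fit and the AIC comparison against a plain AR model can be sketched on a simulated standardized series. Grid search on SSR over candidate thresholds is one simple estimation choice, not necessarily the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 600
x = np.zeros(n)
for t in range(1, n):                       # simulate a two-regime AR(1) process
    phi = 0.8 if x[t - 1] <= 0.0 else 0.3   # regime depends on the lagged value
    x[t] = phi * x[t - 1] + rng.normal(0, 1)

y, lag = x[1:], x[:-1]

def ar1_ssr(yy, ll):
    """Least-squares AR(1) slope (no intercept) and its sum of squared residuals."""
    phi = (ll @ yy) / (ll @ ll)
    return ((yy - phi * ll) ** 2).sum()

# Threshold chosen by SSR grid search over interior quantiles of the lag.
best = None
for r in np.quantile(lag, np.linspace(0.15, 0.85, 40)):
    lo = lag <= r
    ssr = ar1_ssr(y[lo], lag[lo]) + ar1_ssr(y[~lo], lag[~lo])
    if best is None or ssr < best[1]:
        best = (r, ssr)

m = len(y)
aic_setar = m * np.log(best[1] / m) + 2 * 3        # two slopes + threshold
aic_ar = m * np.log(ar1_ssr(y, lag) / m) + 2 * 1   # single global slope
```

Because the simulated process really does switch regimes, the SETAR fit earns a lower AIC than the single AR(1), mirroring the kind of model comparison reported in the abstract.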
Lundström, T; Jonas, T; Volkwein, A
2008-01-01
Thirteen Norway spruce [Picea abies (L.) Karst.] trees of different size, age, and social status, and grown under varying conditions, were investigated to see how they react to complex natural static loading under summer and winter conditions, and how they have adapted their growth to such combinations of load and tree state. For this purpose a non-linear finite-element model and an extensive experimental data set were used, as well as a new formulation describing the degree to which the exploitation of the bending stress capacity is uniform. The three main findings were: material and geometric non-linearities play important roles when analysing tree deflections and critical loads; the strengths of the stem and the anchorage mutually adapt to the local wind acting on the tree crown in the forest canopy; and the radial stem growth follows a mechanically high-performance path because it adapts to prevailing as well as acute seasonal combinations of the tree state (e.g. frozen or unfrozen stem and anchorage) and load (e.g. wind and vertical and lateral snow pressure). Young trees appeared to adapt to such combinations in a more differentiated way than older trees. In conclusion, the mechanical performance of the Norway spruce studied was mostly very high, indicating that their overall growth had been clearly influenced by the external site- and tree-specific mechanical stress.
Directory of Open Access Journals (Sweden)
R. Maisonny
2016-12-01
Full Text Available The performance of a 1 MV pulsed high-power linear transformer driver accelerator was extensively investigated using a numerical approach that combines electromagnetic and Monte Carlo simulations. Particle-in-cell calculations were employed to examine the beam dynamics throughout the magnetically insulated transmission line, which governs the coupling between the generator and the electron diode. Based on the information provided by the study of the beam dynamics, and using Monte Carlo methods, the main properties of the resulting x radiation were predicted. Good agreement was found between these simulations and experimental results. This work provides a detailed understanding of the mechanisms affecting the performance of this type of high-current, high-voltage pulsed accelerator, which is very promising for a growing number of applications.
Iserbyt, Peter; Schouppe, Gilles; Charlier, Nathalie
2015-04-01
Research investigating lifeguards' performance of Basic Life Support (BLS) with Automated External Defibrillator (AED) is limited. Assessing simulated BLS/AED performance in Flemish lifeguards and identifying factors affecting this performance. Six hundred and sixteen (217 female and 399 male) certified Flemish lifeguards (aged 16-71 years) performed BLS with an AED on a Laerdal ResusciAnne manikin simulating an adult victim of drowning. Stepwise multiple linear regression analysis was conducted with BLS/AED performance as outcome variable and demographic data as explanatory variables. Mean BLS/AED performance for all lifeguards was 66.5%. Compression rate and depth adhered closely to ERC 2010 guidelines. Ventilation volume and flow rate exceeded the guidelines. A significant regression model, F(6, 415)=25.61, p<.001, ES=.38, explained 27% of the variance in BLS performance (R2=.27). Significant predictors were age (beta=-.31, p<.001), years of certification (beta=-.41, p<.001), time on duty per year (beta=-.25, p<.001), practising BLS skills (beta=.11, p=.011), and being a professional lifeguard (beta=-.13, p=.029). 71% of lifeguards reported not practising BLS/AED. Being young, recently certified, few days of employment per year, practising BLS skills and not being a professional lifeguard are factors associated with higher BLS/AED performance. Measures should be taken to prevent BLS/AED performances from decaying with age and longer certification. Refresher courses could include a formal skills test and lifeguards should be encouraged to practise their BLS/AED skills. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
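The regression setup above can be sketched with ordinary least squares on simulated lifeguard data; the predictors and effect sizes are loosely inspired by the reported betas, not the real data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 422
age = rng.uniform(16, 71, n)                 # years
years_cert = rng.uniform(0, 30, n)           # years since certification
practises = rng.binomial(1, 0.29, n)         # 71% reported not practising

# Simulated BLS/AED score: older age and longer certification lower it,
# practising raises it (coefficients invented for illustration).
score = 80 - 0.2 * age - 0.5 * years_cert + 5 * practises + rng.normal(0, 8, n)

# OLS via the normal equations, solved with lstsq.
X = np.column_stack([np.ones(n), age, years_cert, practises])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

resid = score - X @ beta
r2 = 1 - (resid @ resid) / ((score - score.mean()) @ (score - score.mean()))
```

The signs of the fitted coefficients match the pattern reported above: negative for age and years of certification, positive for practising BLS skills.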
Morais, E C; Esmerino, E A; Monteiro, R A; Pinheiro, C M; Nunes, C A; Cruz, A G; Bolini, Helena M A
2016-01-01
The addition of prebiotic and sweeteners in chocolate dairy desserts opens up new opportunities to develop dairy desserts that besides having a lower calorie intake still has functional properties. In this study, prebiotic low sugar dairy desserts were evaluated by 120 consumers using a 9-point hedonic scale, in relation to the attributes of appearance, aroma, flavor, texture, and overall liking. Internal preference map using parallel factor analysis (PARAFAC) and principal component analysis (PCA) was performed using the consumer data. In addition, physical (texture profile) and optical (instrumental color) analyses were also performed. Prebiotic dairy desserts containing sucrose and sucralose were equally liked by the consumers. These samples were characterized by firmness and gumminess, which can be considered drivers of liking by the consumers. Optimization of the prebiotic low sugar dessert formulation should take in account the choice of ingredients that contribute in a positive manner for these parameters. PARAFAC allowed the extraction of more relevant information in relation to PCA, demonstrating that consumer acceptance analysis can be evaluated by simultaneously considering several attributes. Multiple factor analysis reported Rv value of 0.964, suggesting excellent concordance for both methods. © 2015 Institute of Food Technologists®
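One common way to build an internal preference map is PCA of the consumer-by-sample hedonic matrix via the SVD. The sketch below uses simulated ratings; the three-way PARAFAC decomposition used in the study is not shown:

```python
import numpy as np

rng = np.random.default_rng(5)
# 120 consumers rating 5 dessert formulations on a 9-point hedonic scale (simulated).
ratings = rng.integers(1, 10, size=(120, 5)).astype(float)

centered = ratings - ratings.mean(axis=0)       # center each sample (column)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

sample_scores = Vt[:2].T * s[:2]    # positions of the 5 desserts on PC1/PC2
consumer_loadings = U[:, :2]        # direction of each consumer's preference
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
```

Plotting `sample_scores` as points and `consumer_loadings` as vectors gives the usual preference map: samples surrounded by many consumer vectors are the best liked.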
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2014-10-28
Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
On two examples in linear topological spaces
International Nuclear Information System (INIS)
Iyahen, S.O.
1985-11-01
This note first gives examples of B-complete linear topological spaces, and shows that neither the closed graph theorem nor the open mapping theorem holds for linear mappings from such a space to itself. It then looks at Hausdorff linear topological spaces for which coarser Hausdorff linear topologies can be extended from hyperplanes. For B-complete spaces, those which are barrelled necessarily have countable dimension, and conversely. The paper had been motivated by two questions arising in earlier studies related to the closed graph and open mapping theorems; answers to these questions are contained therein. (author)
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
International Nuclear Information System (INIS)
Alcaraz, J.
2001-01-01
After several years of study, e⁺e⁻ linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e⁺e⁻ linear collider in the 500 GeV-1 TeV range will be able to complement it to an unprecedented level of precision in all possible areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs
Edwards, Harold M
1995-01-01
In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.
Mapping monthly rainfall erosivity in Europe
DEFF Research Database (Denmark)
Ballabio, C; Meusburger, K; Klik, A
2017-01-01
to Eastern Europe. The maps also show a clear delineation of areas with different erosivity seasonal patterns, whose spatial outline was evidenced by cluster analysis. The monthly erosivity maps can be used to develop composite indicators that map both intra-annual variability and concentration of erosive...... and seasonal R-factor maps and assess rainfall erosivity both spatially and temporally. During winter months, significant rainfall erosivity is present only in part of the Mediterranean countries. A sudden increase of erosivity occurs in major part of European Union (except Mediterranean basin, western part...... selected among various statistical models to perform the spatial interpolation due to its excellent performance, ability to model non-linearity and interpretability. The monthly prediction is an order more difficult than the annual one as it is limited by the number of covariates and, for consistency...
Directory of Open Access Journals (Sweden)
Ferjan Ormeling
2008-09-01
Full Text Available Discussing the requirements for map data quality, map users and their library/archives environment, the paper focuses on the metadata the user would need for a correct and efficient interpretation of the map data. For such a correct interpretation, knowledge of the rules and guidelines according to which the topographers/cartographers work (such as the kinds of data categories to be collected), and the degree to which these rules and guidelines were indeed followed, are essential. This is not only valid for the old maps stored in our libraries and archives, but perhaps even more so for the new digital files, as these are now the format in which we access our geospatial data. As this would be too much to ask from map librarians/curators, some sort of web 2.0 environment is sought where comments about data quality, completeness and up-to-dateness from knowledgeable map users, regarding the specific maps or map series studied, can be collected and tagged to scanned versions of these maps on the web. In order not to be subject to the same disadvantages as Wikipedia, where the ‘communis opinio’ rather than scholarship seems to be decisive, some checking by map curators of this tagged map-use information would still be needed. Cooperation to this end between map curators and the International Cartographic Association (ICA) map and spatial data use commission is suggested.
Computer Program For Linear Algebra
Krogh, F. T.; Hanson, R. J.
1987-01-01
Collection of routines provided for basic vector operations. The Basic Linear Algebra Subprograms (BLAS) library is a collection of FORTRAN-callable routines that employ standard techniques to perform the basic operations of numerical linear algebra.
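As an illustration, the Level-1 vector operations that BLAS standardizes (AXPY, DOT, NRM2) can be sketched with NumPy, which itself delegates its linear-algebra kernels to an underlying BLAS implementation; the array values below are arbitrary.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
a = 2.0

# Level-1 BLAS-style operations:
axpy = a * x + y                  # DAXPY: y := a*x + y
dot = float(np.dot(x, y))         # DDOT: inner product
nrm2 = float(np.linalg.norm(x))   # DNRM2: Euclidean norm
```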
D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco; Pasquariello, Guido
2016-04-01
Flooding is one of the most frequent and damaging natural hazards. High-resolution flood mapping is an essential step in the monitoring and prevention of inundation hazard, both to gain insight into the processes involved in the generation of flooding events, and from the practical point of view of the precise assessment of inundated areas. Remote sensing data are recognized to be useful in this respect, thanks to the high resolution and regular revisit schedules of state-of-the-art satellites, which moreover offer a synoptic overview of the extent of flooding. In particular, Synthetic Aperture Radar (SAR) data present several favorable characteristics for flood mapping, such as their relative insensitivity to the meteorological conditions during acquisitions, as well as the possibility of acquiring independently of solar illumination, thanks to the active nature of the radar sensors [1]. However, flood scenarios are typical examples of complex situations in which different factors have to be considered to provide an accurate and robust interpretation of the situation on the ground: the presence of many land cover types, each one with a particular signature in the presence of flood, requires modelling the behavior of different objects in the scene in order to associate them with flood or no-flood conditions [2]. Generally, the fusion of multi-temporal, multi-sensor, multi-resolution and/or multi-platform Earth observation image data, together with other ancillary information, seems to have a key role in the pursuit of a consistent interpretation of complex scenes. In the case of flooding, distance from the river, terrain elevation, hydrologic information or some combination thereof can add useful information to remote sensing data. Suitable methods, able to manage and merge different kinds of data, are therefore particularly needed. In this work, a fully automatic tool, based on Bayesian Networks (BNs) [3] and able to perform data fusion, is presented. It supplies flood maps
Joshi, Varsha; Kumar, Vijesh; Rathore, Anurag S
2015-08-07
A method is proposed for rapid development of a short, analytical cation exchange high performance liquid chromatography method for analysis of charge heterogeneity in monoclonal antibody products. The parameters investigated and optimized include the pH, the shape of the elution gradient, and the length of the column. The most important parameter for development of a shorter method is found to be the shape of the elution gradient. In this paper, we propose a step-by-step approach to develop a non-linear, sigmoidal-shape gradient for analysis of charge heterogeneity in two different monoclonal antibody products. This gradient not only decreases the run time to 4 min, compared with more than 40 min for the conventional method, but also retains resolution. Superiority of the phosphate gradient over the sodium chloride gradient for elution of mAbs is also observed. The method has been successfully evaluated for specificity, sensitivity, linearity, limit of detection, and limit of quantification. Application of this method as a potential at-line process analytical technology tool has been suggested. Copyright © 2015 Elsevier B.V. All rights reserved.
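The abstract does not give the gradient equation; as a hypothetical sketch, a sigmoidal %B profile over the shortened 4-min run could be generated with a logistic function. All parameter values below are illustrative assumptions, not the authors'.

```python
import numpy as np

def sigmoidal_gradient(t, b_start=5.0, b_end=60.0, t_mid=2.0, steepness=3.0):
    """Percent buffer B at time t (min) on a logistic curve.

    b_start/b_end: initial/final %B; t_mid: inflection time (min);
    steepness: slope factor at the inflection point (all values hypothetical).
    """
    return b_start + (b_end - b_start) / (1.0 + np.exp(-steepness * (t - t_mid)))

t = np.linspace(0.0, 4.0, 9)   # a 4-min run, as in the shortened method
profile = sigmoidal_gradient(t)
```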
Gu, Jifeng; Wu, Weijun; Huang, Mengwei; Long, Fen; Liu, Xinhua; Zhu, Yizhun
2018-04-11
A method for high-performance liquid chromatography coupled with linear ion trap quadrupole Orbitrap high-resolution mass spectrometry (HPLC-LTQ-Orbitrap MS) was developed and validated for the qualitative and quantitative assessment of Shejin-liyan Granule. According to the fragmentation mechanism and high-resolution MS data, 54 compounds, including fourteen isoflavones, eleven lignans, eight flavonoids, six physalins, six organic acids, four triterpenoid saponins, two xanthones, two alkaloids, and one licorice coumarin, were identified or tentatively characterized. In addition, ten of the representative compounds (matrine, galuteolin, tectoridin, iridin, arctiin, tectorigenin, glycyrrhizic acid, irigenin, arctigenin, and irisflorentin) were quantified using the validated HPLC-LTQ-Orbitrap MS method. The method validation showed a good linearity with coefficients of determination (r²) above 0.9914 for all analytes. The accuracy of the intra- and inter-day variation of the investigated compounds was 95.0-105.0%, and the precision values were less than 4.89%. The mean recoveries and reproducibilities of each analyte were 95.1-104.8%, with relative standard deviations below 4.91%. The method successfully quantified the ten compounds in Shejin-liyan Granule, and the results show that the method is accurate, sensitive, and reliable.
Directory of Open Access Journals (Sweden)
Jifeng Gu
2018-04-01
Full Text Available A method for high-performance liquid chromatography coupled with linear ion trap quadrupole Orbitrap high-resolution mass spectrometry (HPLC-LTQ-Orbitrap MS) was developed and validated for the qualitative and quantitative assessment of Shejin-liyan Granule. According to the fragmentation mechanism and high-resolution MS data, 54 compounds, including fourteen isoflavones, eleven lignans, eight flavonoids, six physalins, six organic acids, four triterpenoid saponins, two xanthones, two alkaloids, and one licorice coumarin, were identified or tentatively characterized. In addition, ten of the representative compounds (matrine, galuteolin, tectoridin, iridin, arctiin, tectorigenin, glycyrrhizic acid, irigenin, arctigenin, and irisflorentin) were quantified using the validated HPLC-LTQ-Orbitrap MS method. The method validation showed a good linearity with coefficients of determination (r²) above 0.9914 for all analytes. The accuracy of the intra- and inter-day variation of the investigated compounds was 95.0–105.0%, and the precision values were less than 4.89%. The mean recoveries and reproducibilities of each analyte were 95.1–104.8%, with relative standard deviations below 4.91%. The method successfully quantified the ten compounds in Shejin-liyan Granule, and the results show that the method is accurate, sensitive, and reliable.
Jendeberg, Johan; Geijer, Håkan; Alshamari, Muhammed; Lidén, Mats
2018-01-24
To compare the ability of different size estimates to predict spontaneous passage of ureteral stones using a 3D-segmentation and to investigate the impact of manual measurement variability on the prediction of stone passage. We retrospectively included 391 consecutive patients with ureteral stones on non-contrast-enhanced CT (NECT). Three-dimensional segmentation size estimates were compared to the mean of three radiologists' measurements. Receiver-operating characteristic (ROC) analysis was performed for the prediction of spontaneous passage for each estimate. The difference in predicted passage probability between the manual estimates in upper and lower stones was compared. The area under the ROC curve (AUC) for the measurements ranged from 0.88 to 0.90. Between the automated 3D algorithm and the manual measurements the 95% limits of agreement were 0.2 ± 1.4 mm for the width. The manual bone window measurements resulted in a > 20 percentage point (ppt) difference between the readers in the predicted passage probability in 44% of the upper and 6% of the lower ureteral stones. All automated 3D algorithm size estimates independently predicted spontaneous stone passage with accuracy similar to that of the mean of three readers' manual linear measurements. Manual size estimation of upper stones showed large inter-reader variations for spontaneous passage prediction. • An automated 3D technique predicts spontaneous stone passage with high accuracy. • Linear, areal and volumetric measurements performed similarly in predicting stone passage. • Reader variability has a large impact on the predicted prognosis for stone passage.
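The ROC analysis used above can be reproduced in miniature with a rank-based (Mann–Whitney) AUC; the stone sizes and passage outcomes below are made up for illustration, not taken from the study.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    scores: predictor values; labels: 1 = spontaneous passage, 0 = no passage.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # fraction of (positive, negative) pairs ranked correctly; ties count 1/2
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

sizes = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 8.0])   # stone widths in mm, hypothetical
passed = np.array([1, 1, 1, 0, 1, 0])
result = auc(-sizes, passed)   # smaller size -> higher passage score
```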
D'Archivio, Angelo Antonio; Maggi, Maria Anna; Ruggieri, Fabrizio
2014-08-01
In this paper, a multilayer artificial neural network is used to model simultaneously the effect of solute structure and eluent concentration profile on the retention of s-triazines in reversed-phase high-performance liquid chromatography under linear gradient elution. The retention data of 24 triazines, including common herbicides and their metabolites, are collected under 13 different elution modes, covering the following experimental domain: starting acetonitrile volume fraction ranging between 40 and 60% and gradient slope ranging between 0 and 1% acetonitrile/min. The gradient parameters together with five selected molecular descriptors, identified by quantitative structure-retention relationship modelling applied to individual separation conditions, are the network inputs. Predictive performance of this model is evaluated on six external triazines and four unseen separation conditions. For comparison, retention of triazines is modelled by both quantitative structure-retention relationships and response surface methodology, which describe separately the effect of molecular structure and gradient parameters on the retention. Although applied to a wider variable domain, the network provides a performance comparable to that of the above "local" models and retention times of triazines are modelled with accuracy generally better than 7%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
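A minimal sketch of such a network, assuming scikit-learn's MLPRegressor and synthetic stand-ins for the five molecular descriptors and two gradient parameters (the study's actual training data and architecture are not reproduced here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's inputs: 5 molecular descriptors
# plus 2 gradient parameters (start %ACN, slope) per observation.
X = rng.uniform(size=(200, 7))
# Hypothetical retention response depending on both blocks of inputs.
y = 10.0 - 4.0 * X[:, 5] - 2.0 * X[:, 6] + X[:, :5] @ np.array([1.0, -0.5, 0.3, 0.2, -0.1])

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X[:150], y[:150])          # train on 150 observations
pred = model.predict(X[150:])        # predict the 50 held-out ones
```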
International Nuclear Information System (INIS)
Clark, Leon; Shirinzadeh, Bijan; Tian, Yanling; Zhong, Yongmin
2014-01-01
This paper presents an analysis of the tracking performance of a planar three degrees of freedom (DOF) flexure-based mechanism for micro/nano manipulation, utilising a tracking methodology for the measurement of coupled linear and angular motions. The methodology permits trajectories over a workspace with large angular range through the reduction of geometric errors. However, when combining this methodology with feedback control systems, the accuracy of performed manipulations can only be stated within the bounds of the uncertainties in measurement. The dominant sources of error and uncertainty within each sensing subsystem are therefore identified, which leads to a formulation of the measurement uncertainty in the final system outputs, in addition to methods of reducing their magnitude. Specific attention is paid to the analysis of the vision-based subsystem utilised for the measurement of angular displacement. Furthermore, a feedback control scheme is employed to minimise tracking errors, and the coupling of certain measurement errors is shown to have a detrimental effect on the controller operation. The combination of controller tracking errors and measurement uncertainty provides the bounds on the final tracking performance. (paper)
Karloff, Howard
1991-01-01
To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...
Differential maps, difference maps, interpolated maps, and long term prediction
International Nuclear Information System (INIS)
Talman, R.
1988-06-01
Mapping techniques may be thought to be attractive for the long term prediction of motion in accelerators, especially because a simple map can approximately represent an arbitrarily complicated lattice. The intention of this paper is to develop prejudices as to the validity of such methods by applying them to a simple, exactly solvable, example. It is shown that a numerical interpolation map, such as can be generated in the accelerator tracking program TEAPOT, predicts the evolution more accurately than an analytically derived differential map of the same order. Even so, in the presence of "appreciable" nonlinearity, it is shown to be impractical to achieve "accurate" prediction beyond some hundreds of cycles of oscillation. This suggests that the value of nonlinear maps is restricted to the parameterization of only the "leading" deviation from linearity. 41 refs., 6 figs
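The point that a truncated map captures only the leading deviation from linearity can be seen in a toy example (not the paper's solvable case): an amplitude-dependent rotation, iterated against its purely linear counterpart, drifts apart by a full phase after hundreds of turns.

```python
import cmath
import math

MU = 0.123   # linear tune (phase advance per turn, radians); illustrative value
K = 0.01     # amplitude-dependent detuning coefficient; illustrative value

def nonlinear_turn(z):
    """One turn of a toy nonlinear map: a rotation whose tune grows with amplitude."""
    return z * cmath.exp(1j * (MU + K * abs(z) ** 2))

def linear_turn(z):
    """The same map with the nonlinearity dropped (pure linear rotation)."""
    return z * cmath.exp(1j * MU)

z_nl = z_lin = 1.0 + 0.0j
gap = {}
for turn in range(1, 1001):
    z_nl, z_lin = nonlinear_turn(z_nl), linear_turn(z_lin)
    if turn in (10, 1000):
        gap[turn] = abs(z_nl - z_lin)
# The linear map tracks the motion for tens of turns (gap ~ 0.1 after 10)
# but is off by a large fraction of the circle after 1000 turns.
```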
Directory of Open Access Journals (Sweden)
Jennifer L Smith
Full Text Available Implementation of trachoma control strategies requires reliable district-level estimates of trachomatous inflammation-follicular (TF), generally collected using the recommended gold-standard cluster randomized surveys (CRS). Integrated Threshold Mapping (ITM) has been proposed as an integrated and cost-effective means of rapidly surveying trachoma in order to classify districts according to treatment thresholds. ITM differs from CRS in a number of important ways, including the use of a school-based sampling platform for children aged 1-9 and a different age distribution of participants. This study uses computerised sampling simulations to compare the performance of these survey designs and evaluate the impact of varying key parameters. Realistic pseudo gold standard data for 100 districts were generated that maintained the relative risk of disease between important sub-groups and incorporated empirical estimates of disease clustering at the household, village and district level. To simulate the different sampling approaches, 20 clusters were selected from each district, with individuals sampled according to the protocol for ITM and CRS. Results showed that ITM generally under-estimated the true prevalence of TF over a range of epidemiological settings and introduced more district misclassification according to treatment thresholds than did CRS. However, the extent of underestimation and resulting misclassification was found to be dependent on three main factors: (i) the district prevalence of TF; (ii) the relative risk of TF between enrolled and non-enrolled children within clusters; and (iii) the enrollment rate in schools. Although in some contexts the two methodologies may be equivalent, ITM can introduce a bias-dependent shift as prevalence of TF increases, resulting in a greater risk of misclassification around treatment thresholds. In addition to strengthening the evidence base around choice of trachoma survey methodologies, this study illustrates
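A heavily simplified version of such a sampling simulation, assuming made-up values for the prevalence, enrollment rate and relative risk (the study's clustering structure and protocols are far richer), might look like this:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_district(n_clusters=20, n_children=400, prev=0.2, rr=1.5, enroll=0.6):
    """Crude sketch: TF risk differs between enrolled and non-enrolled children.

    rr: relative risk of TF in non-enrolled vs enrolled children;
    a school-based (ITM-like) survey only observes the enrolled group.
    """
    # solve per-group risks so the overall prevalence equals `prev`
    p_enrolled = prev / (enroll + rr * (1 - enroll))
    p_non = rr * p_enrolled
    enrolled = rng.random((n_clusters, n_children)) < enroll
    p = np.where(enrolled, p_enrolled, p_non)
    tf = rng.random((n_clusters, n_children)) < p
    est_all = tf.mean()                # CRS-like: samples all children
    est_school = tf[enrolled].mean()   # ITM-like: enrolled children only
    return est_all, est_school

est_all, est_school = simulate_district()
```

With a relative risk above 1, the school-based estimate sits below the all-children estimate, matching the under-estimation the study reports.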
Corcodel, N; Rammelsberg, P; Jakstat, H; Moldovan, O; Schwarz, S; Hassel, A J
2010-11-01
Visual tooth colour assessment by use of the Vita 3D-Master® (3D; Vita Zahnfabrik, Bad Säckingen, Germany) is well documented. To improve handling, a new linear arrangement of the shade tabs has been introduced (LG; Linearguide 3D-Master®). The purpose of this study was to investigate whether the linear design has an effect on shade matching. Fifty-six students underwent identical theoretical and practical training, by use of an Internet learning module [Toothguide Training Software® (TT)] and a standardised training programme [Toothguide Training Box® (TTB)]. Each student then matched 30 randomly chosen shade tabs presented in an intra-oral setting by a standardised device [Toothguide Check Box® (TCB)]; 15 matches were made using the 3D and 15 using the LG shade guide system, under a daylight lamp (840 matches for each guide). It was recorded to what extent the presented and selected shade tabs, or the lightness groups of the tabs, matched, as well as the time needed for colour matching. Perfect matches were observed in 35% of cases for the 3D and 32% for the LG. The lightness group was correct in 59% of cases for the 3D and 56% for the LG. The mean time needed for matching of tabs and lightness group did not differ between the groups (no significant difference for any assessment). Within the limitations of the study design, colour assessment with regard to performance and time needed in shade matching did not differ between the LG and the 3D. Therefore, the user may choose whichever shade tab arrangement is more applicable. © 2010 Blackwell Publishing Ltd.
Energy Technology Data Exchange (ETDEWEB)
Ghenescu, V., E-mail: veta.ghenescu@cern.ch [Institute of Space Science, Bucharest-Magurele (Romania); Benhammou, Y. [Tel Aviv University, TelAviv (Israel)
2017-02-11
The FCAL collaboration is preparing large scale prototypes of special calorimeters to be used in the very forward region at a future linear electron-positron collider for a precise and fast luminosity measurement and beam-tuning. These calorimeters are designed as sensor-tungsten calorimeters with very thin sensor planes to keep the Moliere radius small, and dedicated FE electronics to match the timing and dynamic range requirements. A partially instrumented prototype was investigated in the CERN PS T9 beam in 2014 and at the DESY-II Synchrotron in 2015. It was operated in a mixed particle beam (electrons, muons and hadrons) of 5 GeV at the PS facility and with secondary electrons of 5 GeV energy from DESY-II. The results demonstrated very good performance of the full readout chain. The high statistics data were used to study the response to different particles, perform sensor alignment and measure the longitudinal shower development in the sandwich. In addition, Geant4 MC simulations were performed and compared to the data.
Energy Technology Data Exchange (ETDEWEB)
Lu, SH; Tsai, YC; Lan, HT; Wen, SY; Chen, LH; Kuo, SH; Wang, CW [National Taiwan University Hospital, Taipei City, Taiwan (China)
2016-06-15
Purpose: Intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) have been widely investigated for use in radiotherapy and found to have a highly conformal dose distribution. Delta⁴ is a novel cylindrical phantom consisting of 1069 p-type diodes that measures true treatments in the 3D target volume. The goal of this study was to compare the performance of a Delta⁴ diode array for IMRT and VMAT planning with an ion chamber and MapCHECK2. Methods: Fifty-four IMRT (n=9) and VMAT (n=45) plans were imported to the Philips Pinnacle Planning System 9.2 for recalculation with a solid water phantom, MapCHECK2, and the Delta⁴ phantom. To evaluate the difference between the measured and calculated dose, we used MapCHECK2 and Delta⁴ for a dose-map comparison and an ion chamber (PTW 31010 Semiflex 0.125 cc) for a point-dose comparison. Results: All 54 plans met the criterion of <3% difference for the point dose (at least two points) by ion chamber. The mean difference was 0.784% with a standard deviation of 1.962%. With criteria of 3%/3 mm in a gamma analysis, the average passing rates were 96.86%±2.19% and 98.42%±1.97% for MapCHECK2 and Delta⁴, respectively. The Student's t-test p-values for MapCHECK2/Delta⁴, ion chamber/Delta⁴, and ion chamber/MapCHECK2 were 0.0008, 0.2944, and 0.0002, respectively. There was no significant difference in passing rates between MapCHECK2 and Delta⁴ for the IMRT plans (p = 0.25). However, a higher passing rate was observed for Delta⁴ (98.36%) as compared to MapCHECK2 (96.64%, p < 0.0001) for the VMAT plans. Conclusion: The Pinnacle planning system can accurately calculate doses for VMAT and IMRT plans. The Delta⁴ shows results similar to the ion chamber and MapCHECK2, and is an efficient tool for patient-specific quality assurance, especially for rotation therapy.
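For illustration, a simplified one-dimensional global gamma analysis with the 3%/3 mm criteria can be sketched as follows; commercial QA tools such as those above compute this in 2D/3D with interpolation, and the dose profile here is synthetic.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    """Simplified global 1-D gamma analysis (3%/3 mm by default).

    For each reference point, gamma is the minimum over evaluated points of
    sqrt((dose diff / dd)^2 + (distance / dta)^2); a point passes if gamma <= 1.
    """
    x = np.arange(len(dose_eval)) * spacing_mm
    d_max = dose_ref.max()                       # global normalisation
    passed = 0
    for i, d_ref in enumerate(dose_ref):
        dose_term = (dose_eval - d_ref) / (dd * d_max)
        dist_term = (x - i * spacing_mm) / dta_mm
        gamma = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
        passed += gamma <= 1.0
    return 100.0 * passed / len(dose_ref)

profile = np.exp(-0.5 * ((np.arange(61) - 30) / 8.0) ** 2)  # toy Gaussian dose profile
```

A 2 mm shift of an identical profile still passes everywhere (within the 3 mm distance-to-agreement), while a 10% dose scaling fails at the peak.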
Directory of Open Access Journals (Sweden)
Sayed M. Arafat
2014-06-01
Full Text Available A land cover map of North Sinai was produced based on the FAO Land Cover Classification System (LCCS) of 2004. The standard FAO classification scheme provides a standardized system of classification that can be used to analyze spatial and temporal land cover variability in the study area. This approach also has the advantage of facilitating the integration of Sinai land cover mapping products into regional and global land cover datasets. The study area covers a total of 20,310.4 km² (2,031,040 hectares). The landscape classification was based on SPOT4 data acquired in 2011, using combined multispectral bands of 20 m spatial resolution. A Geographic Information System (GIS) was used to manipulate the attributed layers of classification in order to reach the maximum possible accuracy, and to include all necessary information. The identified vegetative land cover classes of the study area are irrigated herbaceous crops, irrigated tree crops and rain-fed tree crops. The non-vegetated land covers in the study area include bare rock, bare soils (stony and very stony), salt crusts, loose and shifting sands and sand dunes. The water bodies were classified as artificial perennial water bodies (fish ponds and irrigated canals) and natural perennial water bodies (standing lakes). The artificial surfaces include linear and non-linear features.
DEFF Research Database (Denmark)
Dehlholm, Christian; Brockhoff, Per B.; Bredie, Wender Laurentius Petrus
2012-01-01
by the practical testing environment. As a result of the changes, a reasonable assumption would be to question the consequences caused by the variations in method procedures. Here, the aim is to highlight the proven or hypothetic consequences of variations of Projective Mapping. Presented variations will include...... instructions and influence heavily the product placements and the descriptive vocabulary (Dehlholm et.al., 2012b). The type of assessors performing the method influences results with an extra aspect in Projective Mapping compared to more analytical tests, as the given spontaneous perceptions are much dependent......Projective Mapping (Risvik et.al., 1994) and its Napping (Pagès, 2003) variations have become increasingly popular in the sensory field for rapid collection of spontaneous product perceptions. It has been applied in variations which sometimes are caused by the purpose of the analysis and sometimes...
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
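The forward direction of this equivalence can be sketched numerically: the Chebyshev problem min over x of max_i |(Ax - b)_i| becomes a linear program by introducing a bound variable t. The tiny instance below (fitting a constant to the values {0, 1}, whose best uniform error is 0.5) uses scipy.optimize.linprog and is only an illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Chebyshev problem: minimise  max_i |(A x - b)_i|  over x.
# LP form: minimise t  subject to  -t <= (A x - b)_i <= t.
A = np.array([[1.0], [1.0]])
b = np.array([0.0, 1.0])   # best constant fit to {0, 1} is 0.5, with error 0.5
m, n = A.shape

# decision variables: [x (n entries), t (1 entry)]
c = np.zeros(n + 1)
c[-1] = 1.0                                   # objective: minimise t
A_ub = np.block([[A, -np.ones((m, 1))],       #  A x - t <=  b
                 [-A, -np.ones((m, 1))]])     # -A x - t <= -b
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + 1))
x_opt, t_opt = res.x[:n], res.x[-1]
```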
Directory of Open Access Journals (Sweden)
Nauman Khalid Qureshi
2017-07-01
Full Text Available In this paper, a novel methodology for enhanced classification of functional near-infrared spectroscopy (fNIRS) signals utilizable in a two-class [motor imagery (MI) versus rest; mental rotation (MR) versus rest] brain–computer interface (BCI) is presented. First, fNIRS signals corresponding to MI and MR are acquired from the motor and prefrontal cortex, respectively, and afterward filtered to remove physiological noise. Then, the signals are modeled using the general linear model, the coefficients of which are adaptively estimated using the least squares technique. Subsequently, multiple feature combinations of the estimated coefficients were used for classification. The best classification accuracies achieved for five subjects, for MI versus rest, are 79.5, 83.7, 82.6, 81.4, and 84.1%, whereas those for MR versus rest are 85.5, 85.2, 87.8, 83.7, and 84.8%, respectively, using a support vector machine. These results are compared with the best classification accuracies obtained using the conventional hemodynamic response. By means of the proposed methodology, the average classification accuracy obtained was significantly higher (p < 0.05). These results serve to demonstrate the feasibility of developing a high-classification-performance fNIRS-BCI.
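The batch form of the GLM coefficient estimation described above reduces to ordinary least squares; below is a sketch on a simulated channel, where the regressors, sampling rate and true coefficients are all hypothetical (the paper estimates the coefficients adaptively rather than in one batch).

```python
import numpy as np

rng = np.random.default_rng(1)

n = 300                                # samples in one trial window
t = np.arange(n) / 10.0                # 10 Hz sampling, hypothetical
# Design matrix: a boxcar task regressor, a linear drift term, and a baseline.
task = ((t % 30) < 10).astype(float)
X = np.column_stack([task, t, np.ones(n)])

beta_true = np.array([0.8, 0.01, 0.2])
y = X @ beta_true + 0.05 * rng.standard_normal(n)   # simulated HbO signal

# GLM coefficients by ordinary least squares.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```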
Directory of Open Access Journals (Sweden)
Chouaib Labiod
2017-01-01
Full Text Available This paper presents torque ripple reduction with speed control of an 8/6 Switched Reluctance Motor (SRM) through determination of the optimal turn-on and turn-off angles (θ_on, θ_off) and the supply voltage, using the Particle Swarm Optimization (PSO) algorithm and a steady-state Genetic Algorithm (ssGA). The SRM is difficult to control owing to its highly non-linear static characteristics. For this reason, the Finite Element Method (FEM) has been used, since it is a powerful tool for obtaining a model closer to reality. The control mechanism used in this kind of machine consists of a speed controller that determines the reference current required to reach the desired speed; a hysteresis controller then compares the reference current with the measured current to generate the switching signals needed by the inverter. On top of this control, the metaheuristic algorithms build a fitness function from the torque ripple and the speed response so as to yield the optimal parameters for better results. Results obtained with the proposed strategy based on metaheuristic methods are compared with the basic case, in which these specific parameters are not adjusted. The optimized results clearly confirm the ability and efficiency of the proposed metaheuristic strategy in improving the performance of SRM control under different torque loads.
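A minimal PSO loop of the kind used for such parameter tuning might look as follows; the fitness function, bounds and hyperparameters are stand-ins (the paper's fitness combines torque ripple and speed response via the FEM motor model, which is not reproducible here).

```python
import numpy as np

rng = np.random.default_rng(7)

def fitness(p):
    """Stand-in cost: squared distance from a known optimum; the real fitness
    would combine torque-ripple and speed-response terms from the motor model."""
    return np.sum((p - np.array([18.0, 40.0])) ** 2, axis=-1)

n_particles, n_iters = 30, 200
lo = np.array([0.0, 20.0])     # hypothetical lower bounds for (theta_on, theta_off)
hi = np.array([30.0, 60.0])    # hypothetical upper bounds
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = fitness(pos)
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5      # inertia and acceleration coefficients
for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = fitness(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```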
International Nuclear Information System (INIS)
Mitsumoto, Tatsuya; Sakaguchi, Yuichi; Morishita, Junji; Sasaki, Masayuki; Ohya, Nobuyoshi; Abe, Koichiro; Ichimiya, Atsushi; Kiyota, Aya
2009-01-01
This study examined the influence of linearization correction (LC) on brain perfusion single-photon emission computed tomography (SPECT) for the diagnosis of Alzheimer's disease (AD). The early onset group (<65 years old) consisted of 10 patients with AD, and the late onset group (≥65 years old) of 13 patients with AD. Age-matched controls included seven younger and seven older normal volunteers. Tc-99m hexamethylpropyleneamine oxime (HMPAO) SPECT images were reconstructed with or without LC [LC (+) or LC (-)] and a statistical analysis was performed using a three-dimensional stereotactic surface projection (3D-SSP). In addition, a fully automatic diagnostic system was developed, which calculated the proportion of the number of abnormal pixels in the superior and inferior parietal lobule, as well as in the precuneus and posterior cingulate gyrus. The areas under the receiver-operating characteristic curve (AUCs) of the early onset group for conventional axial SPECT images, SPECT+3D-SSP images and the fully automatic diagnostic system were 0.71, 0.88, and 0.92 in LC (-) and 0.67, 0.85, and 0.91 in LC (+), respectively. The AUCs of the late onset group were 0.50, 0.61, and 0.79 in LC (-) and 0.49, 0.67, and 0.85 in LC (+), respectively. LC on Tc-99m HMPAO SPECT did not significantly influence the diagnostic performance for differentiating between AD and normal controls in either early or late onset AD. Further examination with individuals suffering from very mild dementia is, therefore, expected to elucidate the effect of LC on minimally hypoperfused areas. (author)
Bruyndonckx, Robin; Aerts, Marc; Hens, Niel
2016-09-01
In a linear multilevel model, significance of all fixed effects can be determined using F tests under maximum likelihood (ML) or restricted maximum likelihood (REML). In this paper, we demonstrate that in the presence of primary unit sparseness, the performance of the F test under both REML and ML is rather poor. Using simulations based on the structure of a data example on ceftriaxone consumption in hospitalized children, we studied variability, type I error rate and power in scenarios with a varying number of secondary units within the primary units. In general, the variability in the estimates for the effect of the primary unit decreased as the number of secondary units increased. In the presence of singletons (i.e., only one secondary unit within a primary unit), REML consistently outperformed ML, although even under REML the performance of the F test was found inadequate. When modeling the primary unit as a random effect, the power was lower while the type I error rate was unstable. The options of dropping, regrouping, or splitting the singletons could solve either the problem of a high type I error rate or a low power, while worsening the other. The permutation test appeared to be a valid alternative as it outperformed the F test, especially under REML. We conclude that in the presence of singletons, one should be careful in using the F test to determine the significance of the fixed effects, and propose the permutation test (under REML) as an alternative. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
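The permutation test proposed as an alternative can be sketched directly for a two-sample comparison; the two groups below are simulated, not the ceftriaxone data, and a permutation test on multilevel fixed effects would permute at the appropriate unit level.

```python
import numpy as np

rng = np.random.default_rng(3)

def permutation_test(x, y, n_perm=2000):
    """Two-sample permutation test for a difference in means.

    Returns the proportion of label permutations whose absolute mean
    difference is at least as extreme as the observed one.
    """
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += abs(perm[:len(x)].mean() - perm[len(x):].mean()) >= observed
    # add-one correction so the p-value is never exactly zero
    return (count + 1) / (n_perm + 1)

# Hypothetical outcome data for two groups of units.
a = rng.normal(10.0, 1.0, size=15)
b = rng.normal(12.0, 1.0, size=15)
p = permutation_test(a, b)
```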
Hiratsuka, Y.; Bao, Q.; Y Xu, M.
2017-12-01
Since 2012, a new, compact Gifford-McMahon (GM) cryocooler for cooling superconducting single photon detectors (SSPD) has been developed and reported by Sumitomo Heavy Industries, Ltd. (SHI). Also, it was reported that National Institute of Information and Communications Technology (NICT) developed a multi-channel, conduction-cooled SSPD system. However, the size and power consumption reduction becomes indispensable to apply such a system to the optical communication of AdHoc for a mobile system installed in a vehicle. The objective is to reduce the total height of the expander by 33% relative to the existing RDK-101 GM expander and to reduce the total volume of the compressor unit by 50% relative to the existing CNA-11 compressor. In addition, considering the targeted cooling application, we set the design cooling capacity targets of the first and the second stages 1 W at 60 K and 20 mW at 2.3 K respectively. In 2016, Hiratsuka et al. reported that an oil-free compressor was developed for a 2K GM cryocooler. The cooling performance of a 2K GM expander driven by an experimental unit of the linear compressor was measured. No-load temperature less than 2.1 K and the cooling capacity of 20 mW at 2.3 K were successfully achieved with an electric input power of only 1.1 kW. After that, the compressor capsule and the heat exchanger, etc. were assembled into one enclosure as a compressor unit. The total volume of the compressor unit and electrical box was significantly reduced to about 38 L, which was close to the target of 35 L. Also, the sound noise, vibration characteristics, the effect of the compressor unit inclination and the ambient temperature on the cooling performance, were evaluated. The detailed experimental results are discussed in this paper.
Mengerink, Y; Peters, R; Kerkhoff, M; Hellenbrand, J; Omloo, H; Andrien, J; Vestjens, M; van der Wal, S
2000-05-05
By separating the first six linear and cyclic oligomers of polyamide-6 on a reversed-phase high-performance liquid chromatographic system after sandwich injection, quantitative determination of these oligomers becomes feasible. Low-wavelength UV detection of the different oligomers and selective post-column reaction detection of the linear oligomers with o-phthalic dicarboxaldehyde (OPA) and 3-mercaptopropionic acid (3-MPA) are discussed. A general methodology for quantification of oligomers in polymers was developed. It is demonstrated that the empirically determined group-equivalent absorption coefficients and quench factors are a convenient way of quantifying linear and cyclic oligomers of nylon-6. The overall long-term performance of the method was studied by monitoring a reference sample and the calibration factors of the linear and cyclic oligomers.
Zhang, Jie; Fan, Yeqin; Gong, Yajun; Chen, Xiaoyong; Wan, Luosheng; Zhou, Chenggao; Zhou, Jiewen; Ma, Shuangcheng; Wei, Feng; Chen, Jiachun; Nie, Jing
2017-11-15
Snake bile is one of the most expensive traditional Chinese medicines (TCMs). However, owing to the complicated constituents of snake bile and the poor ultraviolet absorbance of some trace bile acids (BAs), effective analysis methods for snake bile acids have been unavailable, making adulteration problems difficult to resolve. In the present study, ultrahigh-performance liquid chromatography with triple quadrupole linear ion trap mass spectrometry (UHPLC-QqQ-MS/MS) was applied to the quantitative analysis of snake BAs. The mass spectrometer was operated in negative ion mode, and a multiple-reaction monitoring (MRM) program was used to determine the contents of BAs in snake bile. In all, 61 snake bile samples from 17 commonly used species of three families (Elapidae, Colubridae and Viperidae), along with five batches of commercial snake bile from four companies, were collected and analyzed. Nine components, tauro-3α,12α-dihydroxy-7-oxo-5β-cholenoic acid (T1), tauro-3α,7α,12α,23R-tetrahydroxy-5β-cholenoic acid (T2), taurocholic acid (TCA), glycocholic acid (GCA), taurochenodeoxycholic acid (TCDCA), taurodeoxycholic acid (TDCA), cholic acid (CA), tauro-3α,7α-dihydroxy-12-oxo-5β-cholenoic acid (T3), and tauro-3α,7α,9α,16α-tetrahydroxy-5β-cholenoic acid (T4), were simultaneously and rapidly determined for the first time. Among these BAs, T1 and T2, self-prepared with purity above 90%, were quantitatively determined for the first time, and the latter two (T3 and T4) were tentatively quantified by the quantitative analysis of multi-components by single marker (QAMS) method for roughly estimating components without reference standards. The developed method was validated with acceptable linearity (r2 ≥ 0.995), precision (RSD < 6.5%) and recovery (RSD < 7.5%). The contents of BAs differed significantly among species; T1 was one of the principal bile acids in some common snake biles, and also was the characteristic one in Viperidae
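The QAMS approach used here for T3 and T4 estimates a component's content from a single quantified reference marker via a relative correction factor. A minimal sketch of the underlying arithmetic, with illustrative names (the paper's actual correction factors are not given in the abstract):

```python
def qams_content(area_analyte: float, area_marker: float,
                 conc_marker: float, rel_correction: float) -> float:
    """Quantitative analysis of multi-components by single marker (QAMS):
    content_analyte = A_analyte * C_marker / (F * A_marker),
    where F is the relative correction factor between analyte and marker."""
    return area_analyte * conc_marker / (rel_correction * area_marker)
```

For example, if the marker gives a peak area of 1000 at 10 µg/mL and the analyte gives an area of 500 with F = 1.0, the estimated content is 5 µg/mL.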
Energy Technology Data Exchange (ETDEWEB)
Santos, G.B.; Pinheiro Neto, D.; Lisita, L.R.; Machado, P.C.M.; Oliveira, J.V.M. [Universidade Federal de Goias (UFG), Goiania, GO (Brazil). Escola de Engenharia Eletrica e de Computacao], Emails: guilhermebsantos@gmail.com, daywes@gmail.com, lrlisi-ta@gmail.com, pcesar@eee.ufg.br, joao.eee@gmail.com
2009-07-01
This paper analyzes the laboratory behavior of a single-phase electronic energy meter when subjected to an environment with residential- and commercial-type linear and nonlinear loads. It differs from related studies mainly in its use of real loads encountered day-to-day, rather than the electronic load sources observed in the state of the art. The comparison of results is based on a high-precision energy standard developed by means of virtual instrumentation.
Directory of Open Access Journals (Sweden)
Tanwiwat Jaikuna
2017-02-01
Full Text Available Purpose: To develop an in-house software program able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in the treatment plan was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the difference between the dose volume histogram from CERR and that from the treatment planning system. The equivalent dose in 2 Gy fractions (EQD2) was calculated using the biologically effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Physical doses differed between CERR and the treatment planning system (TPS) in Oncentra by 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum determined by D2cc, and by less than 1% in Pinnacle. The EQD2 from the software calculation and the manual calculation did not differ significantly (0.00%), with p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. Conclusions: The Isobio software is a feasible tool for generating the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
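The EQD2 conversion at the core of such software follows from the biologically effective dose. A minimal sketch using the plain linear-quadratic relations (the LQL model adds a linear correction above a transition dose, omitted here for brevity):

```python
def bed(n_fractions: float, dose_per_fraction: float, alpha_beta: float) -> float:
    """Biologically effective dose (Gy) under the LQ model:
    BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

def eqd2(n_fractions: float, dose_per_fraction: float, alpha_beta: float) -> float:
    """Equivalent dose in 2 Gy fractions (Gy): EQD2 = BED / (1 + 2/(alpha/beta))."""
    return bed(n_fractions, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)
```

As a sanity check, a schedule already delivered in 2 Gy fractions maps to its own total dose: `eqd2(10, 2.0, 10.0)` gives 20 Gy.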
Anumula, K R; Dhume, S T
1998-07-01
Facile labeling of oligosaccharides (acidic and neutral) in a nonselective manner was achieved with highly fluorescent anthranilic acid (AA, 2-aminobenzoic acid) (more than twice the intensity of 2-aminobenzamide, AB) for specific detection at very high sensitivity. Quantitative labeling in acetate-borate buffered methanol (approximately pH 5.0) at 80 °C for 60 min resulted in negligible or no desialylation of the oligosaccharides. A high-resolution high-performance liquid chromatographic method was developed for quantitative oligosaccharide mapping on a polymeric NH2-bonded (Astec) column operating under normal-phase and anion-exchange (NP-HPAEC) conditions. For isolation of oligosaccharides from the map by simple evaporation, the chromatographic conditions use volatile acetic acid-triethylamine buffer (approximately pH 4.0) systems. The mapping and characterization technology was developed using well-characterized standard glycoproteins. The fluorescent oligosaccharide maps were similar to those obtained by high-pH anion-exchange chromatography with pulsed amperometric detection (HPAEC-PAD), except that the fluorescent maps contained more defined peaks. In the map, the oligosaccharides separated into groups based on charge, size, linkage, and overall structure in a manner similar to HPAEC-PAD, with a -COOH contribution from the label, anthranilic acid. However, the selectivity of the column for sialic acid linkages was different. A second-dimension normal-phase HPLC (NP-HPLC) method was developed on an amide column (TSK Gel amide-80) for separation of the AA-labeled neutral complex-type and isomeric high-mannose-type oligosaccharides. Oligosaccharides labeled with AA are compatible with biochemical and biophysical techniques, and the use of matrix-assisted laser desorption mass spectrometry for rapid determination of the oligosaccharide mass map of glycoproteins is demonstrated. High resolution of NP-HPAEC and NP-HPLC methods
International Nuclear Information System (INIS)
Kobulnicky, K; Pawlak, D; Purwar, A
2015-01-01
Purpose: To examine the beam performance of a Varian TrueBeam linear accelerator under external device-based gated delivery conditions. Methods: Six gating cycles were used to evaluate the gating performance of a standard production TrueBeam system that was not specially tuned in any way. The system was equipped with a factory-installed external gating interface (EXGI). An in-house EXGI tester box was used to simulate the input gating signals. The gating cycles were selected based on long beam-on and short beam-off times, short beam-on and long beam-off times, or equal beam-on and beam-off times to check linac performance. The beam latencies were measured with an oscilloscope as the time difference between the logic-high gating signal and the first or last target pulses. Tissue-Phantom Ratio, beam flatness, and dose distributions from 5 different plans were measured using the 6 gating durations and un-gated irradiation. A PTW 729 2-D array was used to compare the 5 plans against the un-gated delivery with a 1%/1 mm gamma index passing criterion. Results: The beam latencies of the linac were based on 20 samples each for beam-on and beam-off, for each gating cycle. The average beam-on delays were between 57 and 66 ms, with a maximum of 88 ms. The beam-off latencies averaged between 19 and 26 ms, with a maximum of 48 ms. TPR20,10 measurements showed beam energy stability within 0.5% of the un-gated delivery. Beam flatness was better than 2.5% for all gated cycles. All deliveries had a >90% passing rate except two: the open field with 4 s on/1 s off, and a five-field IMRT plan with 0.5 s on/2.5 s off. Conclusion: TrueBeam demonstrates excellent beam stability with minimal beam latencies under external device-based gated operations. Dosimetric measurements show minimal variation in beam energy, flatness, and plan delivery. Authors are employees of Varian Medical Systems, Inc.
Hernández, Alison R; Hurtig, Anna-Karin; Dahlblom, Kjerstin; San Sebastián, Miguel
2015-10-08
Mid-level health workers are on the front lines in underserved areas in many LMICs, and their performance is critical for improving the health of vulnerable populations. However, improving performance in low-resource settings is complex and highly dependent on the organizational context of local health systems. This study aims to examine the views of actors from different levels of a regional health system in Guatemala on actions to support the performance of auxiliary nurses, a cadre of mid-level health workers with a prominent role in public sector service delivery. A concept mapping study was carried out to develop an integrated view on organizational support and identify locally relevant strategies for strengthening performance. A total of 93 regional and district managers, and primary and secondary care health workers, participated in generating ideas on actions needed to support auxiliary nurses' performance. Ideas were consolidated into 30 action items, which were structured through sorting and rating exercises involving a total of 135 managers and health workers. Maps depicting participants' integrated views on domains of action and dynamics in sub-groups' interests were generated using a sequence of multivariate statistical analyses, and interpreted by regional managers. The combined input of health system actors provided a multi-faceted view of actions needed to support performance, organized in six domains: Communication and coordination, Tools to orient work, Organizational climate of support, Motivation through recognition, Professional development and Skills development. The nature of relationships across hierarchical levels was identified as a cross-cutting theme. Pattern matching and go-zone maps indicated directions for action based on areas of consensus and difference across sub-groups of actors. This study indicates that auxiliary nurses' performance is interconnected with the performance of other health system actors who
International Nuclear Information System (INIS)
Pham, H.S.; Alpy, N.; Ferrasse, J.H.; Boutin, O.; Tothill, M.; Quenaut, J.; Gastaldi, O.; Cadiou, T.; Saez, M.
2016-01-01
Highlights: • Ability of CFD to predict the performance of a sc-CO2 test compressor is shown. • Risk of vapor pocket occurrence inside a scale 1:1 compressor is highlighted. • Limitation of previous performance-map approaches in modeling the real-gas behavior is shown. • A performance-map approach for the sc-CO2 compressor is proposed and validated. - Abstract: One of the challenges in performance prediction for the supercritical CO2 (sc-CO2) compressor is the real-gas behavior of the working fluid near the critical point. This study establishes an approach that copes with this particularity by constructing compressor performance maps in adequate reduced coordinates (i.e., suitable dimensionless speed and flow parameters as inputs and pressure ratio and enthalpy rise as outputs), using CFD for its validation. Two centrifugal compressor designs have been considered in this work. The first corresponds to a 6 kW small-scale component implemented in a test loop at the Tokyo Institute of Technology. The second corresponds to a 38 MW scale 1:1 design considered at an early stage of a project investigating the sc-CO2 cycle for a Small Modular Reactor application. Numerical results on the former have been successfully compared with the experimental data to qualify the ability of CFD to provide a performance database. Results on the latter have revealed a significant decrease in the static temperature and pressure during flow acceleration along the leading edge of the impeller blades. On this basis, the increased risk of vapor pocket appearance inside a sc-CO2 compressor has been highlighted, and recommendations regarding the choice of the on-design inlet conditions and the compressor design have been given to overcome this concern. CFD results on the scale 1:1 compressor have then been used to evaluate the relevancy of some previous performance-map approaches for a sc-CO2 compressor application. These include the conventional
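For context, the conventional ideal-gas map coordinates whose limitations the paper highlights are the corrected mass flow and corrected speed. A minimal sketch of those conventional parameters (the reference conditions are assumed illustrative values; the paper's proposed sc-CO2 coordinates differ):

```python
import math

T_REF = 288.15    # K, assumed reference temperature
P_REF = 101325.0  # Pa, assumed reference pressure

def corrected_flow(m_dot: float, T_inlet: float, p_inlet: float) -> float:
    """Corrected mass flow: m_dot * sqrt(T/Tref) / (p/pref)."""
    theta = T_inlet / T_REF
    delta = p_inlet / P_REF
    return m_dot * math.sqrt(theta) / delta

def corrected_speed(N: float, T_inlet: float) -> float:
    """Corrected rotational speed: N / sqrt(T/Tref)."""
    return N / math.sqrt(T_inlet / T_REF)
```

At reference inlet conditions both corrected quantities reduce to the raw ones, which is the sanity check usually applied to such maps.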
Shang, Zhanpeng; Wang, Fei; Dai, Shengyun; Lu, Jianqiu; Wu, Xiaodan; Zhang, Jiayu
2017-08-01
(-)-Epicatechin (EC), an optical antipode of (+)-catechin (C), possesses many potentially significant health benefits. However, the in vivo metabolic pathway of EC has not yet been clarified. In this study, an efficient strategy based on ultra-high performance liquid chromatography coupled with a linear ion trap-Orbitrap mass spectrometer was developed to profile and characterize EC metabolites in rat urine, faeces, plasma, and various tissues. Meanwhile, post-acquisition data-mining methods including high-resolution extracted ion chromatograms (HREIC), multiple mass defect filters (MMDFs), and diagnostic product ions (DPIs) were utilized to screen and identify EC metabolites from the HR-ESI-MS1 to the ESI-MSn stage. Finally, a total of 67 metabolites (including the parent drug) were tentatively identified based on standard substances, chromatographic retention times, accurate mass measurement, and relevant drug biotransformation knowledge. The results demonstrated that EC underwent multiple in vivo metabolic reactions including methylation, dehydration, hydrogenation, glucosylation, sulfonation, glucuronidation, ring-cleavage, and their composite reactions. Among them, methylation, dehydration, glucosylation, and their composite reactions were observed only for EC when compared with C. Meanwhile, the distribution of the detected metabolites in various tissues including heart, liver, spleen, lung, kidney, and brain was studied. The results demonstrated that the liver and kidney were the most important organs for the elimination of EC and its metabolites. In conclusion, the newly discovered EC metabolites significantly expand the understanding of its pharmacological effects and build the foundation for further toxicity and safety studies. Copyright © 2017 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Bernardin, John D.; Ammerman, Curtt N.; Hopkins, Steve M.
2002-01-01
The Spallation Neutron Source (SNS) is a facility being designed for scientific and industrial research and development. The SNS will generate and employ neutrons as a research tool in a variety of disciplines including biology, material science, superconductivity, and chemistry. The neutrons will be produced by bombarding a heavy metal target with a high-energy beam of protons, generated and accelerated with a linear particle accelerator, or linac. The low-energy end of the linac consists, in part, of a multi-cell copper structure termed a coupled cavity linac (CCL). The CCL is responsible for accelerating the protons from 87 MeV to 185 MeV. Acceleration of the charged protons is achieved by the use of large electric field gradients established within specially designed contoured cavities of the CCL. While a large amount of the electrical energy is used to accelerate the protons, approximately 60-80% of this electrical energy is dissipated in the CCL's copper structure. To maintain an acceptable operating temperature, as well as minimize thermal stresses and maintain the desired contours of the accelerator cavities, the electrical waste heat must be removed from the CCL structure. This is done using specially designed water cooling passages within the linac's copper structure. Cooling water is supplied to these passages by a complex water cooling and temperature control system. This paper discusses the design, analysis, and testing of a water cooling system for a prototype CCL. First, the design concept and method of water temperature control are discussed. Second, the layout of the prototype water cooling system, including the selection of plumbing components, instrumentation, and controller hardware and software, is presented. Next, the development of a numerical network model used to size the pump, heat exchanger, and plumbing equipment is discussed. Finally, empirical pressure, flow rate, and temperature data from the prototype CCL
Inferring the most probable maps of underground utilities using Bayesian mapping model
Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony
2018-03-01
Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences arising from the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, even though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps and their visualization is challenging and requires the implementation of robust machine learning/data fusion approaches. An approach for automated creation of revised maps was developed as a Bayesian mapping model in this paper by integrating the knowledge extracted from sensors' raw data and available statutory records. Statutory records were combined with sensor-derived hypotheses to form an initial estimate of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypothesis extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and the expectation maximization (EM) algorithm), provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.
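The core fusion step, combining a prior from statutory records with sensor evidence, can be sketched as a discrete Bayes update (the hypotheses and numbers below are toy values, not the paper's actual model):

```python
def posterior(prior: dict, likelihood: dict) -> dict:
    """Discrete Bayes update: P(h|e) is proportional to P(e|h) * P(h)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# Prior from a statutory record: a pipe is mapped here, but records are unreliable.
prior = {"pipe": 0.7, "no_pipe": 0.3}
# Sensor evidence: probability of the observed reflection under each hypothesis.
likelihood = {"pipe": 0.9, "no_pipe": 0.2}
post = posterior(prior, likelihood)
```

A strong sensor return raises the belief in a pipe from 0.7 to about 0.91, illustrating how inaccurate records still contribute useful prior information.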
Analytical exact solution of the non-linear Schroedinger equation
International Nuclear Information System (INIS)
Martins, Alisson Xavier; Rocha Filho, Tarcisio Marciano da
2011-01-01
Full text: In this work we present how to classify and obtain analytical solutions of the Schroedinger equation with a generic non-linearity in 1+1 dimensions. Our approach is based on the determination of Lie symmetry transformations mapping solutions into solutions, and non-classical symmetry transformations, mapping a given solution into itself. From these symmetries it is then possible to reduce the equation to a system of ordinary differential equations which can be solved using standard methods. The generic non-linearity is handled by considering it as an additional unknown in the determining equations for the symmetry transformations. This results in an over-determined system of non-linear partial differential equations. Its solution can then be determined in some cases by reducing it to the so-called involutive (triangular) form, and then solving. This reduction is very tedious and can only be performed using a computer algebra system. Once the determining system is solved, we obtain the explicit form of the non-linearity admitting a Lie or non-classical symmetry. The analytical solutions are then derived by solving the reduced ordinary differential equations. The non-linear determining systems for the non-classical symmetry transformations and Lie symmetry generators are obtained using the computer algebra package SADE (symmetry analysis of differential equations), developed in our group. (author)
Supervised linear dimensionality reduction with robust margins for object recognition
Dornaika, F.; Assoum, A.
2013-01-01
Linear Dimensionality Reduction (LDR) techniques have become increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower-dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques rely on the use of local margins in order to achieve good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit was crucial for obtaining robust performance in the presence of outliers.
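The Median hit/Median miss idea can be illustrated directly on raw features: for each sample, take the median distance to same-class points (hit) and to other-class points (miss); their difference is a robust local margin, insensitive to a few outlying neighbors. A toy sketch (not the paper's full embedding objective, which maximizes the sum of such margins over projection directions):

```python
import numpy as np

def robust_margin(X: np.ndarray, y: np.ndarray, i: int) -> float:
    """Median-based local margin of sample i: median distance to
    other-class samples (Median miss) minus median distance to
    same-class samples (Median hit)."""
    d = np.linalg.norm(X - X[i], axis=1)
    mask = np.arange(len(y)) != i
    median_hit = np.median(d[(y == y[i]) & mask])
    median_miss = np.median(d[(y != y[i]) & mask])
    return float(median_miss - median_hit)

# Two well-separated 2-D clusters: every sample has a positive margin.
X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
margins = [robust_margin(X, y, i) for i in range(len(y))]
```

Replacing the median by a mean recovers an ANMM-style average margin, which is what a single mislabeled neighbor can corrupt.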
Linear polarized fluctuations in the cosmic microwave background
International Nuclear Information System (INIS)
Partridge, R.B.; Nowakowski, J.; Martin, H.M.
1988-01-01
We report here limits on the linear (and circular) polarization of the cosmic microwave background on small angular scales, 18'' ≤ θ ≤ 160''. The limits are based on radio maps of Stokes parameters and polarization (linear and circular). (author)
International Nuclear Information System (INIS)
Anon.
1994-01-01
The aim of the TESLA (TeV Superconducting Linear Accelerator) collaboration (at present 19 institutions from seven countries) is to establish the technology for a high energy electron-positron linear collider using superconducting radiofrequency cavities to accelerate its beams. Another basic goal is to demonstrate that such a collider can meet its performance goals in a cost effective manner. For this the TESLA collaboration is preparing a 500 MeV superconducting linear test accelerator at the DESY Laboratory in Hamburg. This TTF (TESLA Test Facility) consists of four cryomodules, each approximately 12 m long and containing eight 9-cell solid niobium cavities operating at a frequency of 1.3 GHz
Matching by Monotonic Tone Mapping.
Kovacs, Gyorgy
2018-06-01
In this paper, a novel dissimilarity measure called Matching by Monotonic Tone Mapping (MMTM) is proposed. The MMTM technique allows matching under non-linear monotonic tone mappings and can be computed efficiently when the tone mappings are approximated by piecewise constant or piecewise linear functions. The proposed method is evaluated in various template matching scenarios involving simulated and real images, and compared to other measures developed to be invariant to monotonic intensity transformations. The results show that the MMTM technique is a highly competitive alternative of conventional measures in problems where possible tone mappings are close to monotonic.
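The key property in the abstract, invariance to monotonic tone mappings, can be illustrated with pixel ranks: a strictly increasing tone map leaves ranks unchanged, so a rank-based dissimilarity is zero between a template and its tone-mapped copy. A toy sketch of that invariance (not the MMTM measure itself, whose construction is not detailed in the abstract):

```python
import numpy as np

def rank_dissimilarity(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute difference of pixel ranks (assumes no tied values)."""
    ra = np.argsort(np.argsort(a.ravel()))  # rank of each pixel in a
    rb = np.argsort(np.argsort(b.ravel()))  # rank of each pixel in b
    return float(np.mean(np.abs(ra - rb)))

template = np.array([[0.1, 0.5], [0.9, 0.3]])
tone_mapped = np.sqrt(template)  # a strictly increasing tone map
```

Since `sqrt` is strictly increasing on these values, both images have identical rank patterns and the dissimilarity is exactly zero; a non-monotonic map would generally break this.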
Brooks, E. M.; Stein, S.; Spencer, B. D.; Salditch, L.; Petersen, M. D.; McNamara, D. E.
2017-12-01
Seismicity in the central United States has dramatically increased since 2008 due to the injection of wastewater produced by oil and gas extraction. In response, the USGS created a one-year probabilistic hazard model and map for 2016 to describe the increased hazard posed to the central and eastern United States. Using the intensity of shaking reported to the "Did You Feel It?" system during 2016, we assess the performance of this model. Assessing the performance of earthquake hazard maps for natural and induced seismicity is conceptually similar but has practical differences. Maps with return periods of hundreds or thousands of years, as commonly used for natural seismicity, can be assessed using historical intensity data that also span hundreds or thousands of years. Several features stand out when assessing the USGS 2016 seismic hazard model for the central and eastern United States from induced and natural earthquakes. First, the model can be assessed as a forecast within one year, because event rates are sufficiently high to permit evaluation with one year of data. Second, because these models are projections from the previous year, implicitly assuming that fluid injection rates remain the same, misfit may reflect changes in human activity. Our results suggest that the model was very successful by the metric implicit in probabilistic seismic hazard assessment: namely, that the fraction of sites at which the maximum shaking exceeded the mapped value is comparable to that expected. The model also did well by a misfit metric that compares the spatial patterns of predicted and maximum observed shaking. This was true for both the central and eastern United States as a whole, and for the region within it with the highest amount of seismicity, Oklahoma and its surrounding area. The model performed least well in northern Texas, overstating hazard, presumably because lower oil and gas prices and regulatory action reduced the water injection volume
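The first metric described, the fraction of sites whose maximum observed shaking exceeded the mapped value, compared against the fraction expected from the map's exceedance probability, can be sketched as follows (the site values are hypothetical):

```python
def exceedance_fraction(observed_max, mapped_value) -> float:
    """Fraction of sites where the maximum observed shaking
    exceeded the level given by the hazard map."""
    hits = sum(o > m for o, m in zip(observed_max, mapped_value))
    return hits / len(observed_max)

# Hypothetical maximum intensities at five sites vs. mapped one-year values.
obs = [4.1, 2.0, 5.5, 3.0, 1.2]
mapped = [5.0, 1.5, 5.0, 3.5, 2.0]
frac = exceedance_fraction(obs, mapped)
```

A well-calibrated one-year map is judged by whether this observed fraction is comparable to the map's stated exceedance probability over that year.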
Towards a high performance vertex detector based on 3D integration of deep N-well MAPS
International Nuclear Information System (INIS)
Re, V
2010-01-01
The development of deep N-Well (DNW) CMOS active pixel sensors was driven by the ambitious goal of designing a monolithic device with similar functionalities as in hybrid pixel readout chips, such as pixel-level sparsification and time stamping. The implementation of the DNW MAPS concept in a 3D vertical integration process naturally leads the designer towards putting more intelligence in the chip and in the pixels themselves, achieving novel device structures based on the interconnection of two or more layers fabricated in the same technology. These devices are read out with a data-push scheme that makes it possible to use pixel data for the generation of a flexible level 1 track trigger, based on associative memories, with short latency and high efficiency. This paper gives an update of the present status of DNW MAPS design in both 2D and 3D versions, and presents a discussion of the architectures that are being devised for the Layer 0 of the SuperB Silicon Vertex Tracker.
International Nuclear Information System (INIS)
Lu Li; Yang Yiren
2009-01-01
The responses and limit cycle flutter of a plate-type structure with cubic stiffness in viscous flow were studied. The continuous system was discretized using the Galerkin method. The equivalent linearization concept was applied to predict the ranges of limit cycle flutter velocities. The coupled map of flutter amplitude, equivalent linear stiffness and critical velocity was used to analyze the stability of limit cycle flutter. The theoretical results agree well with the results of numerical integration, which indicates that the equivalent linearization concept is applicable to the analysis of limit cycle flutter of plate-type structures. (authors)
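The equivalent linearization step for a cubic (hardening) stiffness replaces the restoring force f(x) = k·x + γ·x³ by k_eq·x; first-harmonic balance for x = A·sin(ωt) gives the standard result k_eq = k + (3/4)·γ·A², which is what couples the flutter amplitude to an equivalent linear stiffness and hence to a critical velocity. A minimal sketch of this standard relation (symbols are generic, not the paper's notation):

```python
import math

def equivalent_stiffness(k: float, gamma: float, amplitude: float) -> float:
    """First-harmonic equivalent stiffness of f(x) = k*x + gamma*x**3
    for harmonic motion x = A*sin(w*t): k_eq = k + 0.75*gamma*A**2."""
    return k + 0.75 * gamma * amplitude**2

def equivalent_frequency(k: float, gamma: float, amplitude: float, mass: float) -> float:
    """Amplitude-dependent natural frequency of the linearized oscillator."""
    return math.sqrt(equivalent_stiffness(k, gamma, amplitude) / mass)
```

At zero amplitude the equivalent stiffness reduces to the linear stiffness k, and it grows quadratically with amplitude for a hardening spring (γ > 0), reproducing the amplitude dependence of the flutter boundary.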
Directory of Open Access Journals (Sweden)
Ivonne Burguet Lago
2018-05-01
Full Text Available ABSTRACT The paper describes a proposal of professional pedagogical performance tests to assess teachers' role in the process of developing the skill of working with algorithms in Linear Algebra. It aims at devising a testing tool to assess teachers' performance in the skill-developing process. This tool is a finding of the Cuban theory of Advanced Education, systematically used in recent years. The findings include the test design and the illustration of its use in a sample of 22 Linear Algebra teachers during the first term of the 2017-2018 academic year at the Informatics Sciences Engineering major.
Bijlsma, L.; Emke, E.; Hernández, F.; de Voogt, P.
2013-01-01
This work illustrates the potential of liquid chromatography coupled to a hybrid linear ion trap Fourier Transform Orbitrap mass spectrometer for the simultaneous identification and quantification of 24 drugs of abuse and relevant metabolites in sewage water. The developed methodology consisted of
Hernando, M D; Ferrer, C; Ulaszewska, M; García-Reyes, J F; Molina-Díaz, A; Fernández-Alba, A R
2007-11-01
This article describes the development of an enhanced liquid chromatography-mass spectrometry (LC-MS) method for the analysis of pesticides in olive oil. One hundred pesticides belonging to different classes and currently used in agriculture have been included in this method. The LC-MS method was developed using a hybrid quadrupole/linear ion trap (QqQ(LIT)) analyzer. Key features of this technique are the rapid scan acquisition times, high specificity and high sensitivity it enables when the multiple reaction monitoring (MRM) mode or the linear ion-trap operational mode is employed. The application of 5 ms dwell times using a linearly accelerating (LINAC) high-pressure collision cell enabled the analysis of a high number of pesticides, with enough data points acquired for optimal peak definition in MRM operation mode and for satisfactory quantitative determinations to be made. The method quantifies over a linear dynamic range from the LOQs (0.03-10 microg kg(-1)) up to 500 microg kg(-1). Matrix effects were evaluated by comparing the slopes of matrix-matched and solvent-based calibration curves. Weak suppression or enhancement of signals was observed. For confirmation purposes, enhanced product ion (EPI) and MS3 experiments were developed.
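The matrix-effect evaluation described above compares calibration slopes in matrix against solvent. A small illustration of that calculation (the concentrations and detector responses below are invented for illustration, not the paper's data):

```python
import numpy as np

def matrix_effect_percent(conc, resp_solvent, resp_matrix):
    """Signal suppression/enhancement estimated from the ratio of the
    matrix-matched to the solvent calibration slope; values near 0 %
    indicate a negligible matrix effect."""
    slope_solv = np.polyfit(conc, resp_solvent, 1)[0]
    slope_matx = np.polyfit(conc, resp_matrix, 1)[0]
    return (slope_matx / slope_solv - 1.0) * 100.0

conc = np.array([0.03, 0.1, 1.0, 10.0, 100.0, 500.0])  # microg/kg (illustrative)
solvent = 250.0 * conc          # ideal linear response in pure solvent
matrix = 230.0 * conc           # ~8 % signal suppression in olive-oil extract
print(matrix_effect_percent(conc, solvent, matrix))
```

A negative result indicates suppression, a positive one enhancement.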
Linear Algebra and Smarandache Linear Algebra
Vasantha, Kandasamy
2003-01-01
The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...
Computational linear and commutative algebra
Kreuzer, Martin
2016-01-01
This book combines, in a novel and general way, an extensive development of the theory of families of commuting matrices with applications to zero-dimensional commutative rings, primary decompositions and polynomial system solving. It integrates the Linear Algebra of the Third Millennium, developed exclusively here, with classical algorithmic and algebraic techniques. Even the experienced reader will be pleasantly surprised to discover new and unexpected aspects in a variety of subjects including eigenvalues and eigenspaces of linear maps, joint eigenspaces of commuting families of endomorphisms, multiplication maps of zero-dimensional affine algebras, computation of primary decompositions and maximal ideals, and solution of polynomial systems. This book completes a trilogy initiated by the uncharacteristically witty books Computational Commutative Algebra 1 and 2 by the same authors. The material treated here is not available in book form, and much of it is not available at all. The authors continue to prese...
Superconducting linear colliders
International Nuclear Information System (INIS)
Anon.
1990-01-01
The advantages of superconducting radiofrequency (SRF) for particle accelerators have been demonstrated by successful operation of systems in the TRISTAN and LEP electron-positron collider rings respectively at the Japanese KEK Laboratory and at CERN. If performance continues to improve and costs can be lowered, this would open an attractive option for a high luminosity TeV (1000 GeV) linear collider
Spatial Processes in Linear Ordering
von Hecker, Ulrich; Klauer, Karl Christoph; Wolf, Lukas; Fazilat-Pour, Masoud
2016-01-01
Memory performance in linear order reasoning tasks (A > B, B > C, C > D, etc.) shows quicker and more accurate responses to queries on wider (AD) than narrower (AB) pairs on a hypothetical linear mental model (A -- B -- C -- D). While indicative of an analogue representation, research so far did not provide positive evidence for spatial…
Hamid, Ka; Yusoff, An; Rahman, Mza; Mohamad, M; Hamid, Aia
2012-04-01
This fMRI study is about modelling the effective connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in human primary auditory cortices. MATERIALS & METHODS: Ten healthy male participants were required to listen to white noise stimuli during functional magnetic resonance imaging (fMRI) scans. Statistical parametric mapping (SPM) was used to generate individual and group brain activation maps. For input region determination, two intrinsic connectivity models comprising bilateral HG and STG were constructed using dynamic causal modelling (DCM). The models were estimated and inferred using DCM, while Bayesian Model Selection (BMS) for group studies was used for model comparison and selection. Based on the winning model, six linear and six non-linear causal models were derived and were again estimated, inferred, and compared to obtain a model that best represents the effective connectivity between HG and the STG, balancing accuracy and complexity. Group results indicated significant asymmetrical activation (p(uncorr)). Model comparison results showed strong evidence of STG as the input centre. The winning model is preferred by 6 out of 10 participants. The results were supported by BMS results for group studies with the expected posterior probability, r = 0.7830, and exceedance probability, ϕ = 0.9823. One-sample t-tests performed on connection values obtained from the winning model indicated that the valid connections for the winning model are the unidirectional parallel connections from STG to bilateral HG (p). Model comparison between linear and non-linear models using BMS prefers the non-linear connection (r = 0.9160, ϕ = 1.000), from which the connectivity between STG and the ipsi- and contralateral HG is gated by the activity in STG itself. We are able to demonstrate that the effective connectivity between HG and STG while listening to white noise for the respective participants can be explained by a non-linear dynamic causal model with
Directory of Open Access Journals (Sweden)
Loes C J van Dam
2015-03-01
Full Text Available Humans can learn and store multiple visuomotor mappings (dual-adaptation) when feedback for each is provided alternately. Moreover, learned context cues associated with each mapping can be used to switch between the stored mappings. However, little is known about the associative learning between cue and required visuomotor mapping, and how learning generalises to novel but similar conditions. To investigate these questions, participants performed a rapid target-pointing task while we manipulated the offset between visual feedback and movement end-points. The visual feedback was presented with horizontal offsets of different amounts, dependent on the target's shape. Participants thus needed to use different visuomotor mappings between target location and required motor response, depending on the target shape, in order to "hit" it. The target shapes were taken from a continuous set of shapes, morphed between spiky and circular shapes. After training we tested participants' performance, without feedback, on different target shapes that had not been learned previously. We compared two hypotheses. First, we hypothesised that participants could (explicitly) extract the linear relationship between target shape and visuomotor mapping and generalise accordingly. Second, using previous findings of visuomotor learning, we developed an (implicit) Bayesian learning model that predicts generalisation that is more consistent with categorisation (i.e. use one mapping or the other). The experimental results show that, although learning the associations requires explicit awareness of the cues' role, participants apply the mapping corresponding to the trained shape that is most similar to the current one, consistent with the Bayesian learning model. Furthermore, the Bayesian learning model predicts that learning should slow down with increased numbers of training pairs, which was confirmed by the present results. In short, we found a good correspondence between the
Quantum Programs as Kleisli Maps
Directory of Open Access Journals (Sweden)
Abraham Westerbaan
2017-01-01
Full Text Available Furber and Jacobs have shown in their study of quantum computation that the category of commutative C*-algebras and PU-maps (positive linear maps which preserve the unit) is isomorphic to the Kleisli category of a comonad on the category of commutative C*-algebras with MIU-maps (linear maps which preserve multiplication, involution and unit). [Furber and Jacobs, 2013] In this paper, we prove a non-commutative variant of this result: the category of C*-algebras and PU-maps is isomorphic to the Kleisli category of a comonad on the subcategory of MIU-maps. A variation on this result has been used to construct a model of Selinger and Valiron's quantum lambda calculus using von Neumann algebras. [Cho and Westerbaan, 2016]
Synchronizability of coupled PWL maps
International Nuclear Information System (INIS)
Polynikis, A.; Di Bernardo, M.; Hogan, S.J.
2009-01-01
In this paper we discuss the phenomenon of synchronization of chaotic systems in the case of coupled piecewise linear (PWL) continuous and discontinuous one-dimensional maps. We present numerical results for two examples of coupled systems consisting of two PWL maps. We illustrate how the coupled system can achieve synchronization and discuss the nature of the bifurcation that occurs at a critical value of the coupling strength. We then determine this critical coupling using linear stability analysis. We discuss the effects of variation of the parameters of the PWL maps on the critical coupling and present different bifurcation scenarios obtained for different sets of values of these parameters. Finally, we discuss an extension of our work to the synchronizability of networks consisting of two or more PWL maps. We show how the synchronizability of a network of PWL maps can be improved by tuning the map parameters.
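The critical coupling described above can be reproduced numerically for a pair of diffusively coupled tent maps. A sketch under illustrative assumptions (the tent map, coupling form and parameter values are not the paper's exact systems; the slope is set just below 2 to avoid a floating-point degeneracy of the exact slope-2 tent map):

```python
import numpy as np

def tent(x, a=1.999):
    """Piecewise-linear (PWL) tent map on [0, 1]; slope just below 2 so the
    orbit stays chaotic in floating point (slope exactly 2 decays to 0)."""
    return a * np.minimum(x, 1.0 - x)

def sync_error(eps, n=5000, transient=1000, seed=0):
    """Largest post-transient |x - y| for two diffusively coupled tent maps:
    x' = f(x) + eps*(f(y) - f(x)),  y' = f(y) + eps*(f(x) - f(y)).
    The difference evolves as d' = (1 - 2*eps)*(f(x) - f(y)), so linear
    stability predicts synchronization when |1 - 2*eps| * a < 1,
    i.e. roughly 0.25 < eps < 0.75 for a close to 2."""
    rng = np.random.default_rng(seed)
    x, y = rng.random(2)
    err = 0.0
    for i in range(n):
        fx, fy = tent(x), tent(y)
        x, y = fx + eps * (fy - fx), fy + eps * (fx - fy)
        if i >= transient:
            err = max(err, abs(x - y))
    return err

print(sync_error(0.1))   # outside the stability window: stays desynchronized
print(sync_error(0.4))   # inside the window: the error contracts to ~0
```

Varying the map slope `a` shifts the synchronization window, mirroring the parameter dependence of the critical coupling discussed in the abstract.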
Cler, Meredith J.; Stepp, Cara E.
2015-01-01
Individuals with high spinal cord injuries are unable to operate a keyboard and mouse with their hands. In this experiment, we compared two systems using surface electromyography (sEMG) recorded from facial muscles to control an onscreen keyboard to type five-letter words. Both systems used five sEMG sensors to capture muscle activity during five distinct facial gestures that were mapped to five cursor commands: move left, move right, move up, move down, and “click”. One system used a discrete movement and feedback algorithm in which the user produced one quick facial gesture, causing a corresponding discrete movement to an adjacent letter. The other system was continuously updated and allowed the user to control the cursor’s velocity by relative activation between different sEMG channels. Participants were trained on one system for four sessions on consecutive days, followed by one crossover session on the untrained system. Information transfer rates (ITRs) were high for both systems compared to other potential input modalities, both initially and with training (Session 1: 62.1 bits/min, Session 4: 105.1 bits/min). Users of the continuous system showed significantly higher ITRs than the discrete users. Future development will focus on improvements to both systems, which may offer differential advantages for users with various motor impairments. PMID:25616053
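Information transfer rates for such fixed-vocabulary cursor interfaces are commonly computed with the Wolpaw formula; assuming that definition (the abstract does not state the exact method), with the five cursor commands above:

```python
import math

def wolpaw_itr(n_targets, accuracy, selections_per_min):
    """Information transfer rate (Wolpaw definition) in bits/min.
    n_targets: number of possible selections; accuracy in (1/n_targets, 1]."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# e.g. 5 commands, 90 % selection accuracy, 60 selections per minute
# (illustrative numbers, not the study's measurements)
print(wolpaw_itr(5, 0.90, 60.0))
```

Errors are penalized sharply: at 90 % accuracy each selection carries about 1.65 bits instead of the error-free log2(5) ≈ 2.32 bits.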
Directory of Open Access Journals (Sweden)
Dong Lu
2015-06-01
Full Text Available Smart cities link the city services, citizens, resources and infrastructures together and form the heart of the modern society. As a “smart” ecosystem, smart cities focus on sustainable growth, efficiency, productivity and environmentally friendly development. Compared with the European Union, North America and other countries, smart cities in China are still in the preliminary stage. This study offers a comparative analysis of ten smart cities in China on the basis of an extensive database covering two time periods: 2005–2007 and 2008–2010. The unsupervised computational neural network self-organizing map (SOM) analysis is adopted to map out the various cities based on their performance. The demonstration effect and mutual influences between these ten smart cities are also discussed by using social network analysis. Based on the smart city performance and cluster network, current problems for smart city development in China are pointed out. Future research directions for smart city research are discussed at the end of this paper.
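A self-organizing map of the kind used above can be sketched in a few lines; the grid size, learning schedule and city feature vectors below are illustrative assumptions, not the study's setup:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal self-organizing map: each sample (a city's feature vector) is
    pulled toward its best-matching unit (BMU) on a small 2-D grid, together
    with the BMU's neighbours, so similar profiles land on nearby nodes."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighbourhood
        for v in data:
            d = np.linalg.norm(W - v, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2)
                       / (2 * sigma**2))
            W += lr * g[..., None] * (v - W)
    return W

# ten hypothetical cities, five normalized performance indicators each
cities = np.random.default_rng(1).random((10, 5))
W = train_som(cities)
bmus = [np.unravel_index(np.argmin(np.linalg.norm(W - c, axis=2)), (4, 4))
        for c in cities]
print(bmus)   # grid cell assigned to each city
```

Cities mapped to the same or adjacent grid cells form the performance clusters that the study then examines with social network analysis.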
International Nuclear Information System (INIS)
Reena, P.; Pai, Rajeshri; Gupta, Tejpal; Rajeev, S.; Dayananda, S.; Jamema, S.V.; Deepak, D.
2006-01-01
Implementation of step-and-shoot intensity-modulated radiotherapy (IMRT) needs careful understanding of the accelerator start-up characteristic to ensure accurate and precise delivery of radiation dose to the patient. The dosimetric characteristic of a Siemens Primus linear accelerator (LA), which delivers 6 and 18 MV x-rays at dose rates of 300 and 500 monitor units (MU) per minute (min) respectively, was studied under the condition of small MU ranging from 1 to 100. Dose monitor linearity was studied at different values of the dose calibration parameter (D1_C0) by measuring ionization at 10 cm depth in a solid water phantom using a 0.6 cc ionization chamber. Monitor unit stability was studied from different intensity modulated (IM) groups comprising various combinations of MU per field and number of fields. Stability of beam flatness and symmetry was investigated under normal and IMRT mode for a 20x20 cm2 field under small MU using a 2D Profiler kept isocentrically at 5 cm depth. Inter-segment response was investigated from 1 to 10 MU by measuring the dose per MU from various IM groups, each consisting of four segments with inter-segment separation of 2 cm. In the range 1-4 MU, the dose linearity error was more than 5% (max -32% at 1 MU) for 6 MV x-rays at the factory calibrated D1_C0 value of 6000. The dose linearity error was reduced to -10.95% at 1 MU, within -3% for 2 and 3 MU and ±1% for MU ≥ 4 when the D1_C0 was subsequently tuned at 4500. For 18 MV x-rays, the dose linearity error at the factory calibrated D1_C0 value of 4400 was within ±1% for MU ≥ 3 with a maximum of -13.5% observed at 1 MU. For both the beam energies and MU/field ≥ 4, the stability of monitor unit tested for different IM groups was within ±1% of the dose from the normal treatment field. This variation increases to -2.6% for 6 MV and -2.7% for 18 MV x-rays for 2 MU/field. No significant variation was observed in the stability of the beam profile measured in normal and IMRT mode. The beam flatness was
Directory of Open Access Journals (Sweden)
Reena P
2006-01-01
Full Text Available Implementation of step-and-shoot intensity-modulated radiotherapy (IMRT) needs careful understanding of the accelerator start-up characteristic to ensure accurate and precise delivery of radiation dose to the patient. The dosimetric characteristic of a Siemens Primus linear accelerator (LA), which delivers 6 and 18 MV x-rays at dose rates of 300 and 500 monitor units (MU) per minute (min) respectively, was studied under the condition of small MU ranging from 1 to 100. Dose monitor linearity was studied at different values of the dose calibration parameter (D1_C0) by measuring ionization at 10 cm depth in a solid water phantom using a 0.6 cc ionization chamber. Monitor unit stability was studied from different intensity modulated (IM) groups comprising various combinations of MU per field and number of fields. Stability of beam flatness and symmetry was investigated under normal and IMRT mode for a 20x20 cm2 field under small MU using a 2D Profiler kept isocentrically at 5 cm depth. Inter-segment response was investigated from 1 to 10 MU by measuring the dose per MU from various IM groups, each consisting of four segments with inter-segment separation of 2 cm. In the range 1-4 MU, the dose linearity error was more than 5% (max -32% at 1 MU) for 6 MV x-rays at the factory calibrated D1_C0 value of 6000. The dose linearity error was reduced to -10.95% at 1 MU, within -3% for 2 and 3 MU and ±1% for MU ≥ 4 when the D1_C0 was subsequently tuned at 4500. For 18 MV x-rays, the dose linearity error at the factory calibrated D1_C0 value of 4400 was within ±1% for MU ≥ 3 with a maximum of -13.5% observed at 1 MU. For both the beam energies and MU/field ≥ 4, the stability of monitor unit tested for different IM groups was within ±1% of the dose from the normal treatment field. This variation increases to -2.6% for 6 MV and -2.7% for 18 MV x-rays for 2 MU/field. No significant variation was observed in the stability of the beam profile measured in normal and IMRT mode. The beam flatness
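The per-MU dose linearity error used in both records above can be computed as the deviation of dose-per-MU at each small-MU setting from the dose-per-MU at a large reference setting. A sketch with hypothetical chamber readings (not the paper's measurements; the 1 MU value is chosen to mimic the roughly -32% error the study reports at 1 MU):

```python
def dose_linearity_error(readings, ref_mu=100):
    """Dose linearity error (%) at each MU setting, relative to the
    dose-per-MU delivered at a large reference setting (here 100 MU)."""
    ref = readings[ref_mu] / ref_mu
    return {mu: (d / mu / ref - 1.0) * 100.0 for mu, d in readings.items()}

# hypothetical ionization-chamber readings (arbitrary units)
readings = {1: 0.68, 2: 1.94, 3: 2.94, 4: 3.98, 10: 10.0, 100: 100.0}
for mu, err in sorted(dose_linearity_error(readings).items()):
    print(mu, round(err, 1))
```

The large negative error at 1 MU reflects the dose deficit of the accelerator's start-up transient, which tuning D1_C0 partly compensates.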
International Nuclear Information System (INIS)
Emma, P.
1995-01-01
The Stanford Linear Collider (SLC) is the first and only high-energy e+e− linear collider in the world. Its most remarkable features are high intensity, submicron sized, polarized (e−) beams at a single interaction point. The main challenges posed by these unique characteristics include machine-wide emittance preservation, consistent high intensity operation, polarized electron production and transport, and the achievement of a high degree of beam stability on all time scales. In addition to serving as an important machine for the study of Z0 boson production and decay using polarized beams, the SLC is also an indispensable source of hands-on experience for future linear colliders. Each new year of operation has been highlighted with a marked improvement in performance. The most significant improvements for the 1994-95 run include new low impedance vacuum chambers for the damping rings, an upgrade to the optics and diagnostics of the final focus systems, and a higher degree of polarization from the electron source. As a result, the average luminosity has nearly doubled over the previous year with peaks approaching 10^30 cm^-2 s^-1 and an 80% electron polarization at the interaction point. These developments as well as the remaining identifiable performance limitations will be discussed
Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn
2015-03-01
Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practices (BMPs) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CP were met with the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.
The International Linear Collider
Directory of Open Access Journals (Sweden)
List Benno
2014-04-01
Full Text Available The International Linear Collider (ILC) is a proposed e+e− linear collider with a centre-of-mass energy of 200–500 GeV, based on superconducting RF cavities. The ILC would be an ideal machine for precision studies of a light Higgs boson and the top quark, and would have a discovery potential for new particles that is complementary to that of the LHC. The clean experimental conditions would allow the operation of detectors with extremely good performance; two such detectors, ILD and SiD, are currently being designed. Both make use of novel concepts for tracking and calorimetry. The Japanese High Energy Physics community has recently recommended to build the ILC in Japan.
The International Linear Collider
List, Benno
2014-04-01
The International Linear Collider (ILC) is a proposed e+e- linear collider with a centre-of-mass energy of 200-500 GeV, based on superconducting RF cavities. The ILC would be an ideal machine for precision studies of a light Higgs boson and the top quark, and would have a discovery potential for new particles that is complementary to that of LHC. The clean experimental conditions would allow the operation of detectors with extremely good performance; two such detectors, ILD and SiD, are currently being designed. Both make use of novel concepts for tracking and calorimetry. The Japanese High Energy Physics community has recently recommended to build the ILC in Japan.
International Nuclear Information System (INIS)
Phinney, N.
1992-01-01
The SLAC Linear Collider has begun a new era of operation with the SLD detector. During 1991 there was a first engineering run for the SLD in parallel with machine improvements to increase luminosity and reliability. For the 1992 run, a polarized electron source was added and more than 10,000 Zs with an average of 23% polarization have been logged by the SLD. This paper discusses the performance of the SLC in 1991 and 1992 and the technical advances that have produced higher luminosity. Emphasis will be placed on issues relevant to future linear colliders such as producing and maintaining high current, low emittance beams and focusing the beams to the micron scale for collisions. (Author) tab., 2 figs., 18 refs
Miniature linear cooler development
International Nuclear Information System (INIS)
Pruitt, G.R.
1993-01-01
An overview is presented of the status of a family of miniature linear coolers currently under development by Hughes Aircraft Co. for use in hand-held, volume-limited or power-limited infrared applications. These coolers, representing the latest additions to the Hughes family of TOP trademark [twin-opposed piston] linear coolers, have been fabricated and tested in three different configurations. Each configuration is designed to utilize a common compressor assembly resulting in reduced manufacturing costs. The baseline compressor has been integrated with two different expander configurations and has been operated with two different levels of input power. These various configuration combinations offer a wide range of performance and interface characteristics which may be tailored to applications requiring limited power and size without significantly compromising cooler capacity or cooldown characteristics. Key cooler characteristics and test data are summarized for three combinations of cooler configurations which are representative of the versatility of this linear cooler design. Configurations reviewed include the shortened coldfinger [1.50 to 1.75 inches long], limited input power [less than 17 Watts] for low power availability applications; the shortened coldfinger with higher input power for lightweight, higher performance applications; and coldfingers compatible with DoD 0.4 Watt Common Module coolers for wider range retrofit capability. Typical weight of these miniature linear coolers is less than 500 grams for the compressor, expander and interconnecting transfer line. Cooling capacity at 80K at room ambient conditions ranges from 400 mW to greater than 550 mW. Steady state power requirements for maintaining a heat load of 150 mW at 80K have been shown to be less than 8 Watts. Ongoing reliability growth testing is summarized including a review of the latest test article results
Stochastic development regression on non-linear manifolds
DEFF Research Database (Denmark)
Kühnel, Line; Sommer, Stefan Horst
2017-01-01
We introduce a regression model for data on non-linear manifolds. The model describes the relation between a set of manifold-valued observations, such as shapes of anatomical objects, and Euclidean explanatory variables. The approach is based on stochastic development of Euclidean diffusion processes to the manifold. Defining the data distribution as the transition distribution of the mapped stochastic process, parameters of the model, the non-linear analogue of design matrix and intercept, are found via maximum likelihood. The model is intrinsically related to the geometry encoded in the connection of the manifold. We propose an estimation procedure which applies the Laplace approximation of the likelihood function. A simulation study of the performance of the model is performed and the model is applied to a real dataset of Corpus Callosum shapes.
The research of radar target tracking observed information linear filter method
Chen, Zheng; Zhao, Xuanzhi; Zhang, Wen
2018-05-01
Aiming at the low precision, or even divergence, caused by the nonlinear observation equation in radar target tracking, a new filtering algorithm is proposed in this paper. In this algorithm, local linearization is carried out on the observed distance and angle data separately. Then a Kalman filter is performed on the linearized data. After the data are filtered, a mapping operation provides the a posteriori estimate of the target state. A large number of simulation results show that this algorithm can solve the above problems effectively, and its performance is better than that of traditional filtering algorithms for nonlinear dynamic systems.
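One common way to realize such a scheme is the converted-measurement approach: locally linearize the polar-to-Cartesian mapping of each (range, angle) observation, propagate the measurement noise through its Jacobian, and run a standard linear Kalman filter on the converted data. A sketch under illustrative assumptions (constant-velocity motion model, invented noise levels; not necessarily the authors' exact algorithm):

```python
import numpy as np

def polar_to_cartesian(r, theta, sig_r, sig_th):
    """Local linearization of the polar measurement: convert (range, bearing)
    to Cartesian and propagate the noise covariance through the Jacobian."""
    z = np.array([r * np.cos(theta), r * np.sin(theta)])
    J = np.array([[np.cos(theta), -r * np.sin(theta)],
                  [np.sin(theta),  r * np.cos(theta)]])
    R = J @ np.diag([sig_r**2, sig_th**2]) @ J.T
    return z, R

def kf_step(x, P, z, R, F, Q, H):
    """One predict/update cycle of a standard linear Kalman filter."""
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# constant-velocity model in 2D: state [x, y, vx, vy], dt = 1 s
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)
x, P = np.zeros(4), 1e4 * np.eye(4)

rng = np.random.default_rng(1)
truth = np.array([1000.0, 500.0, 10.0, -5.0])
for _ in range(50):
    truth[:2] += truth[2:] * dt
    r = np.hypot(truth[0], truth[1]) + rng.normal(0.0, 5.0)     # noisy range
    th = np.arctan2(truth[1], truth[0]) + rng.normal(0.0, 0.002)  # noisy bearing
    z, R = polar_to_cartesian(r, th, 5.0, 0.002)
    x, P = kf_step(x, P, z, R, F, Q, H)

print(np.linalg.norm(x[:2] - truth[:2]))   # final position error, metres
```

Because the measurement model becomes linear after conversion, the standard Kalman update applies without the linearization-about-the-estimate step of an extended Kalman filter.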
Linearized models for a new magnetic control in MAST
Energy Technology Data Exchange (ETDEWEB)
Artaserse, G., E-mail: giovanni.artaserse@enea.it [Associazione Euratom-ENEA sulla Fusione, Via Enrico Fermi 45, I-00044 Frascati (RM) (Italy); Maviglia, F.; Albanese, R. [Associazione Euratom-ENEA-CREATE sulla Fusione, Via Claudio 21, I-80125 Napoli (Italy); McArdle, G.J.; Pangione, L. [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom)
2013-10-15
Highlights: ► We applied linearized models for a new magnetic control on MAST tokamak. ► A suite of procedures, conceived to be machine independent, have been used. ► We carried out model-based simulations, taking into account eddy currents effects. ► Comparison with the EFIT flux maps and the experimental magnetic signals are shown. ► A current-driven model for dynamic simulations of the experimental data has been developed. -- Abstract: The aim of this work is to provide reliable linearized models for the design and assessment of a new magnetic control system for MAST (Mega Ampère Spherical Tokamak) using rtEFIT, which can easily be exported to MAST Upgrade. Linearized models for magnetic control have been obtained using the 2D axisymmetric finite element code CREATE L. MAST linearized models include equivalent 2D axisymmetric schematization of poloidal field (PF) coils, vacuum vessel, and other conducting structures. A plasmaless and a double null configuration have been chosen as benchmark cases for the comparison with experimental data and EFIT reconstructions. Good agreement has been found with the EFIT flux map and the experimental signals coming from magnetic probes with only few mismatches probably due to broken sensors. A suite of procedures (equipped with a user-friendly interface to be run even remotely) to provide linearized models for magnetic control is now available on the MAST linux machines. A new current-driven model has been used to obtain a state space model having the PF coil currents as inputs. Dynamic simulations of experimental data have been carried out using linearized models, including modelling of the effects of the passive structures, showing a fair agreement. The modelling activity has also been useful to accurately reproduce the interaction between plasma current and radial position control loops.
Linearized models for a new magnetic control in MAST
International Nuclear Information System (INIS)
Artaserse, G.; Maviglia, F.; Albanese, R.; McArdle, G.J.; Pangione, L.
2013-01-01
Highlights: ► We applied linearized models for a new magnetic control on MAST tokamak. ► A suite of procedures, conceived to be machine independent, have been used. ► We carried out model-based simulations, taking into account eddy currents effects. ► Comparison with the EFIT flux maps and the experimental magnetic signals are shown. ► A current-driven model for dynamic simulations of the experimental data has been developed. -- Abstract: The aim of this work is to provide reliable linearized models for the design and assessment of a new magnetic control system for MAST (Mega Ampère Spherical Tokamak) using rtEFIT, which can easily be exported to MAST Upgrade. Linearized models for magnetic control have been obtained using the 2D axisymmetric finite element code CREATE L. MAST linearized models include equivalent 2D axisymmetric schematization of poloidal field (PF) coils, vacuum vessel, and other conducting structures. A plasmaless and a double null configuration have been chosen as benchmark cases for the comparison with experimental data and EFIT reconstructions. Good agreement has been found with the EFIT flux map and the experimental signals coming from magnetic probes with only few mismatches probably due to broken sensors. A suite of procedures (equipped with a user-friendly interface to be run even remotely) to provide linearized models for magnetic control is now available on the MAST linux machines. A new current-driven model has been used to obtain a state space model having the PF coil currents as inputs. Dynamic simulations of experimental data have been carried out using linearized models, including modelling of the effects of the passive structures, showing a fair agreement. The modelling activity has also been useful to accurately reproduce the interaction between plasma current and radial position control loops
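The current-driven state-space models described in the two records above take the PF coil currents as inputs. A generic sketch of simulating such a discrete-time linearized model (the matrices below form a toy 2-state example for illustration, not a CREATE L-derived MAST model):

```python
import numpy as np

def simulate_lti(A, B, C, u, x0):
    """Simulate a discrete-time linearized state-space model
    x[k+1] = A x[k] + B u[k],  y[k] = C x[k]
    (in a current-driven tokamak model, u would hold the PF coil currents
    and y a controlled quantity such as the plasma radial position)."""
    x, ys = x0, []
    for uk in u:
        ys.append(C @ x)
        x = A @ x + B @ uk
    return np.array(ys)

# toy stable 2-state system (illustrative values only)
A = np.array([[0.95, 0.10], [0.0, 0.90]])
B = np.array([[0.0], [0.05]])
C = np.array([[1.0, 0.0]])
u = np.ones((100, 1))                 # unit step in one coil current
y = simulate_lti(A, B, C, u, np.zeros(2))
print(y[-1, 0])                       # approaches the model's DC gain
```

Linearized models of this form are what the control design and the dynamic simulations against experimental data operate on.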