Dimler, Frank; Fechner, Susanne; Rodenberg, Alexander; Brixner, Tobias; Tannor, David J
2009-01-01
We recently introduced the von Neumann picture, a joint time-frequency representation, for describing ultrashort laser pulses. The method exploits a discrete phase-space lattice of nonorthogonal Gaussians to represent the pulses; an arbitrary pulse shape can be represented on this lattice in a one-to-one manner. Although the representation was originally defined for signals with an infinite continuous spectrum, it can be adapted to signals with discrete and finite spectrum with great computational savings, provided that discretization and truncation effects are handled with care. In this paper, we present three methods that avoid loss of accuracy due to these effects. The approach has immediate application to the representation and manipulation of femtosecond laser pulses produced by a liquid-crystal mask with a discrete and finite number of pixels.
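A minimal numerical sketch of the idea, not the authors' implementation: expand a sampled signal on a lattice of nonorthogonal Gaussian time-frequency atoms and recover it by least squares. The lattice below is oversampled to keep the demo numerically tame (the paper's von Neumann lattice is critically sampled and one-to-one), and all sizes and widths are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                      # number of signal samples
t = np.arange(N)
sigma = 4.0                 # Gaussian width (illustrative)

# Oversampled lattice: 8 time centers x 16 frequency centers = 128 atoms.
atoms = []
for tc in (np.arange(8) + 0.5) * N / 8:
    for k in range(16):
        w = 2 * np.pi * k / 16
        g = np.exp(-(t - tc) ** 2 / (2 * sigma ** 2)) * np.exp(1j * w * t)
        atoms.append(g / np.linalg.norm(g))
B = np.stack(atoms, axis=1)             # N x 128 synthesis matrix

# A test signal: a cosine plus noise, expanded on the Gaussian lattice.
s = np.cos(2 * np.pi * 3 * t / N) + 0.5 * rng.standard_normal(N)
c, *_ = np.linalg.lstsq(B, s.astype(complex), rcond=None)
recon = B @ c
rel_err = np.linalg.norm(recon - s) / np.linalg.norm(s)

# Nonorthogonality: overlaps between distinct atoms are far from zero.
overlap = abs(np.vdot(B[:, 0], B[:, 1]))
```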
Accurate metacognition for visual sensory memory representations.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F
2014-04-01
The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition (the degree of knowledge that subjects have about the correctness of their decisions) for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
Efficient Interaction Recognition through Positive Action Representation
Tao Hu
2013-01-01
This paper proposes a novel approach that decomposes a two-person interaction into a Positive Action and a Negative Action for more efficient behavior recognition. A Positive Action plays the decisive role in a two-person exchange. Thus, interaction recognition can be simplified to Positive Action-based recognition, focusing on an action representation of just one person. Recently, a new depth sensor has become widely available, the Microsoft Kinect camera, which provides RGB-D data with 3D spatial information for quantitative analysis. However, there are few publicly accessible test datasets using this camera with which to assess two-person interaction recognition approaches. Therefore, we created a new dataset, named K3HI, with six types of complex human interactions: kicking, pointing, punching, pushing, exchanging an object, and shaking hands. Three types of features were extracted for each Positive Action: joint, plane, and velocity features. We used continuous Hidden Markov Models (HMMs) to evaluate the Positive Action-based interaction recognition method and the traditional two-person interaction recognition approach with our test dataset. Experimental results showed that the proposed recognition technique is more accurate than the traditional method, shortens the sample training time, and therefore achieves comprehensive superiority.
Indexed variation graphs for efficient and accurate resistome profiling.
Rowe, Will P M; Winn, Martyn D
2018-05-14
Antimicrobial resistance remains a major threat to global health. Profiling the collective antimicrobial resistance genes within a metagenome (the "resistome") facilitates greater understanding of antimicrobial resistance gene diversity and dynamics. In turn, this can allow for gene surveillance, individualised treatment of bacterial infections and more sustainable use of antimicrobials. However, resistome profiling can be complicated by high similarity between reference genes, as well as the sheer volume of sequencing data and the complexity of analysis workflows. We have developed an efficient and accurate method for resistome profiling that addresses these complications and improves upon currently available tools. Our method combines a variation graph representation of gene sets with an LSH Forest indexing scheme to allow for fast classification of metagenomic sequence reads using similarity-search queries. Subsequent hierarchical local alignment of classified reads against graph traversals enables accurate reconstruction of full-length gene sequences using a scoring scheme. We provide our implementation, GROOT, and show it to be both faster and more accurate than a current reference-dependent tool for resistome profiling. GROOT runs on a laptop and can process a typical 2 gigabyte metagenome in 2 minutes using a single CPU. Our method is not restricted to resistome profiling and has the potential to improve current metagenomic workflows. GROOT is written in Go and is available at https://github.com/will-rowe/groot (MIT license).
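The read-classification step can be pictured with a toy MinHash sketch (illustrative only: GROOT's actual index is an LSH Forest over variation-graph traversals, and the sequences and parameters below are made up). A read is hashed into a short signature; the fraction of matching signature entries estimates Jaccard similarity to each reference, and the read is assigned to the closest one.

```python
import hashlib
import random

def kmers(seq, k=11):
    """Set of overlapping k-mers of a sequence."""
    return {seq[i:i+k] for i in range(len(seq) - k + 1)}

def h64(seed, kmer):
    # Deterministic 64-bit hash of (seed, kmer) via BLAKE2b.
    d = hashlib.blake2b(f"{seed}:{kmer}".encode(), digest_size=8).digest()
    return int.from_bytes(d, "big")

def minhash(kmer_set, n_seeds=128):
    """MinHash signature: the minimum hash per seed."""
    return [min(h64(s, km) for km in kmer_set) for s in range(n_seeds)]

def similarity(sig_a, sig_b):
    # Fraction of matching minima estimates Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Two synthetic "reference genes" and a read drawn from the first.
random.seed(1)
gene_a = "".join(random.choice("ACGT") for _ in range(300))
gene_b = "".join(random.choice("ACGT") for _ in range(300))
ref_sigs = {"geneA": minhash(kmers(gene_a)), "geneB": minhash(kmers(gene_b))}

read = gene_a[50:150]                   # a 100 bp read from geneA
read_sig = minhash(kmers(read))
best = max(ref_sigs, key=lambda g: similarity(read_sig, ref_sigs[g]))
```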
Fast and accurate grid representations for atom-based docking with partner flexibility.
de Vries, Sjoerd J; Zacharias, Martin
2017-06-30
Macromolecular docking methods can broadly be divided into geometric and atom-based methods. Geometric methods use fast algorithms that operate on simplified, grid-like molecular representations, while atom-based methods are more realistic and flexible, but far less efficient. Here, a hybrid approach of grid-based and atom-based docking is presented, combining precalculated grid potentials with neighbor lists for fast and accurate calculation of atom-based intermolecular energies and forces. The grid representation is compatible with simultaneous multibody docking and can tolerate considerable protein flexibility. When implemented in our docking method ATTRACT, grid-based docking was found to be ∼35x faster. With the OPLSX forcefield instead of the ATTRACT coarse-grained forcefield, the average speed improvement was >100x. Grid-based representations may allow atom-based docking methods to explore large conformational spaces with many degrees of freedom, such as multiple macromolecules including flexibility. This increases the domain of biological problems to which docking methods can be applied. © 2017 Wiley Periodicals, Inc.
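The core speed trick, looking up a precalculated potential on a grid instead of summing pairwise atom interactions, can be sketched with plain trilinear interpolation (an illustration, not ATTRACT's implementation). A convenient correctness check: trilinear interpolation reproduces any field that is linear in x, y, z exactly.

```python
import numpy as np

def trilinear(grid, spacing, origin, p):
    """Interpolate a precomputed potential grid at point p.
    Assumes p lies strictly inside the grid."""
    f = (np.asarray(p) - origin) / spacing
    i = np.floor(f).astype(int)
    d = f - i                           # fractional position in the cell
    v = 0.0
    for dx in (0, 1):                   # sum over the 8 cell corners
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((d[0] if dx else 1 - d[0]) *
                     (d[1] if dy else 1 - d[1]) *
                     (d[2] if dz else 1 - d[2]))
                v += w * grid[i[0]+dx, i[1]+dy, i[2]+dz]
    return v

# Precompute a toy potential V(x, y, z) = x + 2y + 3z on an 8^3 grid.
n, h = 8, 0.5
ax = np.arange(n) * h
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
grid = 1.0*X + 2.0*Y + 3.0*Z

p = (1.23, 0.77, 2.10)
val = trilinear(grid, h, 0.0, p)
exact = 1.0*p[0] + 2.0*p[1] + 3.0*p[2]
```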
An accurate and precise representation of drug ingredients.
Hanna, Josh; Bian, Jiang; Hogan, William R
2016-01-01
In previous work, we built the Drug Ontology (DrOn) to support comparative effectiveness research use cases. Here, we have updated our representation of ingredients to include both active ingredients (and their strengths) and excipients. Our update had three primary lines of work: 1) analysing and extracting excipients, 2) analysing and extracting strength information for active ingredients, and 3) representing the binding of active ingredients to cytochrome P450 isoenzymes as substrates and inhibitors of those enzymes. To properly differentiate between excipients and active ingredients, we conducted an ontological analysis of the roles that various ingredients, including excipients, have in drug products. We used the value specification model of the Ontology for Biomedical Investigations to represent strengths of active ingredients and then analyzed RxNorm to extract excipient and strength information and modeled them according to the results of our analysis. We also analyzed and defined dispositions of molecules used in aggregate as active ingredients to bind cytochrome P450 isoenzymes. Our analysis of excipients led to 17 new classes representing the various roles that excipients can bear. We then extracted excipients from RxNorm and added them to DrOn for branded drugs. We found excipients for 5,743 branded drugs, covering ~27% of the 21,191 branded drugs in DrOn. Our analysis of active ingredients resulted in another new class, active ingredient role. We also extracted strengths for all types of tablets, capsules, and caplets, resulting in strengths for 5,782 drug forms, covering ~41% of the 14,035 total drug forms and accounting for ~97 % of the 5,970 tablets, capsules, and caplets in DrOn. We represented binding-as-substrate and binding-as-inhibitor dispositions to two cytochrome P450 (CYP) isoenzymes (CYP2C19 and CYP2D6) and linked these dispositions to 65 compounds. It is now possible to query DrOn automatically for all drug products that contain active
Efficient Representation of Timed UML 2 Interactions
Knapp, Alexander; Störrle, Harald
2014-01-01
UML 2 interactions describe system behavior over time in a declarative way. The standard approach to defining their formal semantics enumerates traces of events; other representation formats, like Büchi automata or prime event structures, have been suggested, too. We describe another, more succinct ...
Efficient free-form surface representation with application in orthodontics
Yamany, Sameh M.; El-Bialy, Ahmed M.
1999-03-01
Orthodontics is the branch of dentistry concerned with the study of growth of the craniofacial complex. The detection and correction of malocclusion and other dental abnormalities is one of the most important and critical phases of orthodontic diagnosis. This paper introduces a system that can assist in automatic orthodontics diagnosis. The system can be used to classify skeletal and dental malocclusion from a limited number of measurements. This system is not intended to deal with several cases but is aimed at cases more likely to be encountered in epidemiological studies. Prior to the measurement of the orthodontics parameters, the position of the teeth in the jaw model must be detected. A new free-form surface representation is adopted for the efficient and accurate segmentation and separation of teeth from a scanned jaw model. The new representation encodes the curvature and surface normal information into a 2D image. Image segmentation tools are then used to extract structures of high/low curvature. By iteratively removing these structures, individual teeth surfaces are obtained.
Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations
Sicat, Ronell B.
2015-11-25
and voxel footprints in input images and volumes. We show that the continuous pdfs encoded in the sparse pdf map representation enable accurate multi-resolution non-linear image operations on gigapixel images. Similarly, we show that sparse pdf volumes enable more consistent multi-resolution volume rendering compared to standard approaches, on both artificial and real world large-scale volumes. The supplementary videos demonstrate our results. In the standard approach, users heavily rely on panning and zooming interactions to navigate the data within the limits of their display devices. However, panning across the whole spatial domain and zooming across all resolution levels of large-scale images to search for interesting regions is not practical. Assisted exploration techniques allow users to quickly narrow down millions to billions of possible regions to a more manageable number for further inspection. However, existing approaches are not fully user-driven because they typically already prescribe what being of interest means. To address this, we introduce the patch sets representation for large-scale images. Patches inside a patch set are grouped and encoded according to similarity via a permutohedral lattice (p-lattice) in a user-defined feature space. Fast set operations on p-lattices facilitate patch set queries that enable users to describe what is interesting. In addition, we introduce an exploration framework—GigaPatchExplorer—for patch set-based image exploration. We show that patch sets in our framework are useful for a variety of user-driven exploration tasks in gigapixel images and whole collections thereof.
Representation learning with deep extreme learning machines for efficient image set classification
Uzair, Muhammad; Shafait, Faisal; Ghanem, Bernard; Mian, Ajmal
2016-12-09
Efficient and accurate representation of a collection of images, that belong to the same class, is a major research challenge for practical image set classification. Existing methods either make prior assumptions about the data structure, or perform heavy computations to learn structure from the data itself. In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data. We learn the nonlinear structure of image sets with deep extreme learning machines that are very efficient and generalize well even on a limited number of training samples. Extensive experiments on a broad range of public datasets for image set classification show that the proposed algorithm consistently outperforms state-of-the-art image set classification methods both in terms of speed and accuracy.
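The basic extreme learning machine recipe, a fixed random hidden layer with a closed-form least-squares readout, can be sketched on toy data (illustrative only: the paper uses deep, multi-layer ELMs on image-set features, and all sizes below are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: two Gaussian blobs standing in for feature vectors.
X0 = rng.normal(loc=-2.0, scale=1.0, size=(100, 5))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(100, 5))
X = np.vstack([X0, X1])
y = np.array([0]*100 + [1]*100)

# ELM: the hidden layer is random and never trained.
H_DIM = 50
W = rng.normal(size=(5, H_DIM))
b = rng.normal(size=H_DIM)
H = np.tanh(X @ W + b)

# Output weights by regularized least squares (no backpropagation).
T = np.eye(2)[y]                                   # one-hot targets
beta = np.linalg.solve(H.T @ H + 1e-3*np.eye(H_DIM), H.T @ T)

pred = (H @ beta).argmax(axis=1)
train_acc = (pred == y).mean()
```

The efficiency claim in the abstract comes from exactly this structure: training reduces to one linear solve.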
Efficient processing of fluorescence images using directional multiscale representations.
Labate, D; Laezza, F; Negi, P; Ozcan, B; Papadakis, M
2014-01-01
Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, due to the complexity of the data, quantification and analysis of morphological features are for the vast majority handled manually, which significantly slows data processing and often limits the information gained to a descriptive level. Thus, there is an urgent need for developing highly efficient automated analysis and processing tools for fluorescent images. In this paper, we present the application of a method based on the shearlet representation for confocal image analysis of neurons. The shearlet representation is a newly emerged method designed to combine multiscale data analysis with superior directional sensitivity, making this approach particularly effective for the representation of objects defined over a wide range of scales and with highly anisotropic features. Here, we apply the shearlet representation to problems of soma detection of neurons in culture and extraction of geometrical features of neuronal processes in brain tissue, and propose it as a new framework for large-scale fluorescent image analysis of biomedical data.
Accurate and efficient spin integration for particle accelerators
Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; Barber, Desmond P.
2015-01-01
Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
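The quaternion bookkeeping that makes spin rotations cheap to compose can be sketched as follows (a generic illustration, not GPUSPINTRACK code). Composing two rotations is a single quaternion product, so a sequence of lattice elements collapses into one rotation before it is applied to the spin vector.

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qrot(q, v):
    """Rotate vector v by unit quaternion q: v' = q v q*."""
    w, x, y, z = q
    conj = (w, -x, -y, -z)
    _, rx, ry, rz = qmul(qmul(q, (0.0,) + tuple(v)), conj)
    return (rx, ry, rz)

def axis_angle(axis, theta):
    """Unit quaternion for a rotation of theta about the given axis."""
    n = math.sqrt(sum(c*c for c in axis))
    s = math.sin(theta / 2)
    return (math.cos(theta / 2),) + tuple(s * c / n for c in axis)

qz = axis_angle((0, 0, 1), math.pi/2)   # 90 degrees about z
qx = axis_angle((1, 0, 0), math.pi/2)   # 90 degrees about x

v = (1.0, 0.0, 0.0)
seq = qrot(qx, qrot(qz, v))             # rotate by z, then by x
combined = qrot(qmul(qx, qz), v)        # one composed quaternion, same result
```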
Accurate and efficient calculation of response times for groundwater flow
Carr, Elliot J.; Simpson, Matthew J.
2018-03-01
We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L²/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
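The L²/D scaling can be checked directly with a small transient simulation (an illustration of the scaling argument only, not the paper's moment-based method, which avoids transient solves precisely because they are expensive). Doubling the domain length at fixed diffusivity should roughly quadruple the response time.

```python
import numpy as np

def response_time(L, D, n=51, tol=0.01):
    """Time for 1D transient flow u_t = D u_xx with u(0)=1, u(L)=0,
    u(x,0)=0, to come within tol (sup norm) of the linear steady state.
    Explicit finite differences; illustrative, not efficient."""
    dx = L / (n - 1)
    dt = 0.4 * dx**2 / D                # explicit-scheme stability limit
    u = np.zeros(n)
    u[0] = 1.0                          # boundary condition
    steady = np.linspace(1.0, 0.0, n)   # steady state is a linear ramp
    t = 0.0
    while np.max(np.abs(u - steady)) > tol:
        u[1:-1] += dt * D * (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
        t += dt
    return t

t1 = response_time(L=1.0, D=1.0)
t2 = response_time(L=2.0, D=1.0)
ratio = t2 / t1                         # expect ~ (2/1)^2 = 4
```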
Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations
Sicat, Ronell Barrera
2015-01-01
approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i
Accurate and efficient quadrature for volterra integral equations
Knirk, D.L.
1976-01-01
Four quadrature schemes were tested and compared in considerable detail to determine their usefulness in the noniterative integral equation method for single-channel quantum-mechanical calculations. They are two forms of linear approximation (trapezoidal rule) and two forms of quadratic approximation (Simpson's rule). Their implementation in this method is shown, a formal discussion of error propagation is given, and tests are performed to determine actual operating characteristics on various bound and scattering problems in different potentials. The quadratic schemes are generally superior to the linear ones in terms of accuracy and efficiency. The previous implementation of Simpson's rule is shown to possess an inherent instability which requires testing on each problem for which it is used to assure its reliability. The alternative quadratic approximation does not suffer this deficiency, but still enjoys the advantages of higher order. In addition, the new scheme obeys very well an h⁴ Richardson extrapolation, whereas the old one does so rather poorly.
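The h² versus h⁴ convergence, and the gain from an h⁴ Richardson extrapolation, is easy to demonstrate on a smooth integrand (a generic quadrature illustration, not the paper's integral-equation setting). Halving h should cut the trapezoidal error by about 4 and the Simpson error by about 16; combining the two Simpson values cancels the h⁴ term.

```python
import math

def trap(f, a, b, n):
    """Composite trapezoidal rule with n panels (error O(h^2))."""
    h = (b - a) / n
    return h * (f(a)/2 + sum(f(a + i*h) for i in range(1, n)) + f(b)/2)

def simpson(f, a, b, n):
    """Composite Simpson's rule with n panels, n even (error O(h^4))."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i*h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i*h) for i in range(2, n, 2))
    return s * h / 3

f, a, b = math.exp, 0.0, 1.0
exact = math.e - 1.0

et1 = abs(trap(f, a, b, 8)  - exact)
et2 = abs(trap(f, a, b, 16) - exact)       # error drops ~4x
es1 = abs(simpson(f, a, b, 8)  - exact)
es2 = abs(simpson(f, a, b, 16) - exact)    # error drops ~16x

# Richardson extrapolation of the two Simpson values cancels the h^4 term.
rich = (16 * simpson(f, a, b, 16) - simpson(f, a, b, 8)) / 15
er = abs(rich - exact)
```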
Integer Representations towards Efficient Counting in the Bit Probe Model
Brodal, Gerth Stølting; Greve, Mark; Pandey, Vineet
2011-01-01
We consider the problem of representing numbers in close to optimal space and supporting increment, decrement, addition and subtraction operations efficiently. We study the problem in the bit probe model and analyse the number of bits read and written to perform the operations, both in the worst-case and in the average-case. A counter is space-optimal if it represents any number in the range [0,...,2^n − 1] using exactly n bits. We provide a space-optimal counter which supports increment and decrement operations by reading at most n − 1 bits and writing at most 3 bits in the worst-case. To the best of our knowledge, this is the first such representation which supports these operations by always reading strictly less than n bits. For redundant counters, where we only need to represent numbers in the range [0,...,L] for some integer L, we define the efficiency ...
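For contrast, a plain binary counter in the bit probe model reads all n bits on its worst-case increment, since the carry can ripple through every position; the paper's contribution is a counter that never reads more than n − 1 bits. A sketch of this baseline and its probe accounting (not the paper's construction):

```python
def increment(bits):
    """Increment an n-bit little-endian binary counter in place,
    returning (bits read, bits written) for this operation."""
    reads = writes = 0
    for i in range(len(bits)):
        reads += 1                      # probe bit i
        if bits[i] == 0:
            bits[i] = 1
            writes += 1
            return reads, writes        # carry stops here
        bits[i] = 0                     # carry ripples: 1 -> 0
        writes += 1
    return reads, writes                # all ones: wrap to zero

n = 8
bits = [0] * n
worst_reads = 0
for _ in range(2 ** n):                 # count through a full cycle
    r, _w = increment(bits)
    worst_reads = max(worst_reads, r)
```

The worst case is the increment from all ones, which probes every one of the n bits; the paper's space-optimal counter beats this bound.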
2011-01-01
Background Over the past several centuries, chemistry has permeated virtually every facet of human lifestyle, enriching fields as diverse as medicine, agriculture, manufacturing, warfare, and electronics, among numerous others. Unfortunately, application-specific, incompatible chemical information formats and representation strategies have emerged as a result of such diverse adoption of chemistry. Although a number of efforts have been dedicated to unifying the computational representation of chemical information, disparities between the various chemical databases still persist and stand in the way of cross-domain, interdisciplinary investigations. Through a common syntax and formal semantics, Semantic Web technology offers the ability to accurately represent, integrate, reason about and query across diverse chemical information. Results Here we specify and implement the Chemical Entity Semantic Specification (CHESS) for the representation of polyatomic chemical entities, their substructures, bonds, atoms, and reactions using Semantic Web technologies. CHESS provides means to capture aspects of their corresponding chemical descriptors, connectivity, functional composition, and geometric structure while specifying mechanisms for data provenance. We demonstrate that using our readily extensible specification, it is possible to efficiently integrate multiple disparate chemical data sources, while retaining appropriate correspondence of chemical descriptors, with very little additional effort. We demonstrate the impact of some of our representational decisions on the performance of chemically-aware knowledgebase searching and rudimentary reaction candidate selection. Finally, we provide access to the tools necessary to carry out chemical entity encoding in CHESS, along with a sample knowledgebase. Conclusions By harnessing the power of Semantic Web technologies with CHESS, it is possible to provide a means of facile cross-domain chemical knowledge integration with full
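The triple-based representation at the heart of such Semantic Web approaches can be sketched in miniature (the identifiers below are invented for illustration and are not CHESS or ChEBI terms; real systems use RDF stores and SPARQL rather than Python lists and loops).

```python
# Hypothetical subject-predicate-object triples about two molecules.
triples = [
    ("ethanol", "hasPart", "hydroxyl_group"),
    ("ethanol", "hasPart", "ethyl_group"),
    ("ethanol", "participatesIn", "oxidation_to_acetaldehyde"),
    ("acetic_acid", "hasPart", "carboxyl_group"),
]

def query(s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard,
    loosely mimicking a SPARQL basic graph pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which parts does ethanol have?"
parts_of_ethanol = [o for _, _, o in query(s="ethanol", p="hasPart")]
```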
Bauer, Sebastian; Mathias, Gerald; Tavan, Paul
2014-03-14
We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ε(r) is close to one everywhere inside the protein. The Gaussian widths σ(i) of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σ(i). A summarizing discussion highlights the achievements of the new theory and of its approximate solution particularly by
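The Born solution that the method generalizes gives the solvation free energy of a single charge q in a sphere of radius a transferred into a dielectric of permittivity ε: ΔG = −(q²/8πε₀a)(1 − 1/ε). A quick numerical check (the 1.7 Å radius for a monovalent cation is an illustrative assumption, the usual fudge factor of Born models):

```python
import math

# Physical constants (SI)
E_CHARGE = 1.602176634e-19          # elementary charge, C
COULOMB_K = 8.9875517923e9          # 1/(4*pi*eps0), N m^2 / C^2
N_AVOGADRO = 6.02214076e23

def born_energy(q, radius, eps):
    """Born solvation free energy (J) of charge q in a sphere of the
    given radius, moved from vacuum into a continuum of permittivity eps."""
    return -(COULOMB_K * q**2 / (2 * radius)) * (1 - 1/eps)

# Illustrative numbers: a monovalent cation, assumed Born radius 1.7 A,
# in water (eps ~ 78.4).
dG = born_energy(E_CHARGE, 1.7e-10, 78.4)     # J per ion
dG_kJ_mol = dG * N_AVOGADRO / 1000.0          # ~ -400 kJ/mol
```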
Młynarski, Wiktor
2014-01-01
To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
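One of the spatial cues such learned features can pick up is the interaural level difference (ILD). As a minimal illustration of how a two-channel signal carries position information (this is a hand-built cue, not ICA, and the delay/attenuation "head model" below is invented rather than a measured HRTF):

```python
import math

def ild_db(left, right):
    """Interaural level difference in dB from the two channel energies."""
    e_l = sum(x * x for x in left)
    e_r = sum(x * x for x in right)
    return 10 * math.log10(e_l / e_r)

# Simulated source on the listener's left: the right channel arrives
# later and attenuated (crude stand-in for head shadowing).
n, delay, atten = 512, 12, 0.5
src = [math.sin(2 * math.pi * 440 * i / 16000) for i in range(n)]
left = src
right = [0.0] * delay + [atten * s for s in src[:n - delay]]

ild = ild_db(left, right)
side = "left" if ild > 0 else "right"
```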
Wiktor eMlynarski
2014-03-01
Full Text Available To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficientcoding hypothesis explains formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform - Independent Component Analysis (ICA trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. This results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment.
Molto-Caracena, T.; Vendrell Vidal, E.; Goncalves, J.G.M.; Peerani, P.; )
2015-01-01
Virtual Reality (VR) technologies have much potential for training applications. Success relies on the capacity to provide a real-time immersive effect to a trainee. For a training application to be an effective and meaningful tool, 3D realistic scenarios are not enough. Indeed, it is paramount to have sufficiently accurate models of the behaviour of the instruments to be used by a trainee. This enables the required level of user interactivity. Specifically, when dealing with the simulation of radioactive sources, a VR model-based application must compute the dose rate with equivalent accuracy and in about the same time as a real instrument. A conflicting requirement is the need to provide a smooth visual rendering enabling spatial interactivity and interaction. This paper presents a VR based prototype which accurately computes the dose rate of radioactive and nuclear sources that can be selected from a wide library. Dose measurements reflect local conditions, i.e., the presence of (a) shielding materials of any shape and type and (b) sources of any shape and dimension. Due to a novel way of representing radiation sources, the system is fast enough to grant the necessary user interactivity. The paper discusses the application of this new method and its advantages in terms of set-up time, cost and logistics. (author)
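The core requirement above, computing a dose rate as fast as a real instrument would read it, can be sketched with the textbook point-kernel model. This is a generic illustration, not the paper's novel source representation; the helper name and the Cs-137-like dose constant are assumptions.

```python
import math

def dose_rate_uSv_h(activity_GBq, gamma_constant, distance_m,
                    mu_per_cm=0.0, shield_cm=0.0):
    """Point-kernel estimate: inverse-square law plus exponential attenuation.

    gamma_constant is the specific gamma-ray dose constant in uSv*m^2/(h*GBq);
    buildup in the shield is neglected in this simplified sketch.
    """
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    attenuation = math.exp(-mu_per_cm * shield_cm)
    return gamma_constant * activity_GBq / distance_m ** 2 * attenuation

# Unshielded Cs-137-like source (dose constant taken as 92 uSv*m^2/(h*GBq)):
d1 = dose_rate_uSv_h(1.0, 92.0, 1.0)   # 92.0 uSv/h at 1 m
d2 = dose_rate_uSv_h(1.0, 92.0, 2.0)   # 23.0 uSv/h at 2 m (inverse square)
print(d1, d2)
```

Because each evaluation is a handful of arithmetic operations, a kernel like this can be called every rendering frame, which is exactly the interactivity constraint the prototype must satisfy for arbitrarily shaped sources and shields.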
Elkbuli, Adel; Godelman, Steven; Miller, Ashley; Boneva, Dessy; Bernal, Eileen; Hai, Shaikh; McKenney, Mark
2018-05-01
Clinical documentation can be underappreciated. Trauma Centers (TCs) are now routinely evaluated for quality performance. TCs with poor documentation may not accurately reflect actual injury burden or comorbidities, which can impact the accuracy of mortality measures. Markers exist to adjust crude death rates for injury severity: observed over expected deaths (O/E) adjusts for injury; Case Mix Index (CMI) reflects disease burden; and Severity of Illness (SOI) measures organ dysfunction. We aim to evaluate the impact of implementing a Clinical Documentation Improvement Program (CDIP) on reported outcomes. We reviewed 2 years of prospectively collected data for trauma patients during the implementation of CDIP. A two-group prospective observational study design was used to evaluate the pre-implementation and post-implementation phases of improved clinical documentation. T-test and Chi-squared tests were used, with significance defined as p < 0.05. The pre-implementation period had 49 deaths out of 1419 patients (3.45%), while the post-implementation period had 38 deaths out of 1454 (2.61%), a non-significant difference. There was, however, a significant difference between O/E ratios: the O/E was 1.36 in the pre-phase and 0.70 in the post-phase (p < 0.001). The two groups also differed on CMI, with a pre-group mean of 2.48 and a post-group mean of 2.87 (p < 0.001), indicating higher injury burden in the post-group. SOI started at 2.12 and significantly increased to 2.91, signifying more organ system dysfunction (p < 0.018). Improved clinical documentation results in improved accuracy of measures of mortality, injury severity, and comorbidities, and a more accurate reflection in O/E mortality ratios, CMI, and SOI. Copyright © 2018 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
Time efficient signed Vedic multiplier using redundant binary representation
Ranjan Kumar Barik
2017-03-01
This study presents a high-speed signed Vedic multiplier (SVM) architecture using redundant binary (RB) representation in the Urdhva Tiryagbhyam (UT) sutra. This is the first effort towards extending Vedic algorithms to signed numbers. The proposed multiplier architecture solves the carry propagation issue in the UT sutra, as carry-free addition is possible in RB representation. The proposed design is coded in VHDL and synthesised in Xilinx ISE 14.4 for various FPGA devices. The proposed SVM architecture has better speed performance compared with various state-of-the-art conventional as well as Vedic architectures.
Issack, Bilkiss B; Roy, Pierre-Nicholas
2005-08-22
An approach for the inclusion of geometric constraints in semiclassical initial value representation calculations is introduced. An important aspect of the approach is that Cartesian coordinates are used throughout. We devised an algorithm for the constrained sampling of initial conditions through the use of a multivariate Gaussian distribution based on a projected Hessian. We also propose an approach for the constrained evaluation of the so-called Herman-Kluk prefactor in its exact log-derivative form. Sample calculations are performed for free and constrained rare-gas trimers. The results show that the proposed approach provides an accurate evaluation of the reduction in zero-point energy. Exact basis set calculations are used to assess the accuracy of the semiclassical results. Since Cartesian coordinates are used, the approach is general and applicable to a variety of molecular and atomic systems.
An efficient and accurate 3D displacements tracking strategy for digital volume correlation
Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles
2014-07-01
Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved through three improvements. First, to eliminate the need to update the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of deformation to each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexities of the proposed fast DVC and existing typical DVC algorithms are first analyzed quantitatively according to the necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.
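The third improvement, caching interpolation coefficients so sub-voxel lookups become cheap polynomial evaluations, can be sketched in 1D. This is an assumed simplification: the paper uses tricubic interpolation over volumes, while the sketch below uses a Catmull-Rom cubic on a 1D signal to show the precompute-once, evaluate-many pattern.

```python
import numpy as np

# Catmull-Rom cubic kernel matrix: maps 4 neighboring samples to polynomial
# coefficients, so evaluation at a fractional position is a few multiplies.
CR = 0.5 * np.array([[0.0, 2.0, 0.0, 0.0],
                     [-1.0, 0.0, 1.0, 0.0],
                     [2.0, -5.0, 4.0, -1.0],
                     [-1.0, 3.0, -3.0, 1.0]])

def build_coeff_table(signal):
    """Precompute per-interval cubic coefficients once (the 'lookup table')."""
    n = len(signal)
    table = np.empty((n - 3, 4))
    for i in range(1, n - 2):
        window = signal[i - 1:i + 3]      # the 4 samples around interval [i, i+1)
        table[i - 1] = CR @ window
    return table

def interp(table, x):
    """Evaluate at real-valued position x using only cached coefficients."""
    i = int(np.floor(x))
    t = x - i
    c = table[i - 1]                      # coefficients for interval [i, i+1)
    return c[0] + t * (c[1] + t * (c[2] + t * c[3]))

signal = np.arange(10, dtype=float) ** 2  # smooth test signal: f(x) = x^2
table = build_coeff_table(signal)
print(interp(table, 3.5))                 # 12.25, i.e. f(3.5) reproduced exactly
```

In an iterative registration loop, `interp` is called thousands of times per subvolume; paying the `CR @ window` cost once per interval instead of once per query is the entire speedup.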
Defect correction and multigrid for an efficient and accurate computation of airfoil flows
Koren, B.
1988-01-01
Results are presented for an efficient solution method for second-order accurate discretizations of the 2D steady Euler equations. The solution method is based on iterative defect correction. Several schemes are considered for the computation of the second-order defect. In each defect correction
An efficient and accurate method for calculating nonlinear diffraction beam fields
Jeong, Hyun Jo; Cho, Sung Jong; Nam, Ki Woong; Lee, Jang Hyun [Division of Mechanical and Automotive Engineering, Wonkwang University, Iksan (Korea, Republic of)
2016-04-15
This study develops an efficient and accurate method for calculating nonlinear diffraction beam fields propagating in fluids or solids. The Westervelt equation and quasilinear theory, from which the integral solutions for the fundamental and second harmonics can be obtained, are first considered. A computationally efficient method is then developed using a multi-Gaussian beam (MGB) model that easily separates the diffraction effects from the plane wave solution. The MGB models provide accurate beam fields when compared with the integral solutions for a number of transmitter-receiver geometries. These models can also serve as fast, powerful modeling tools for many nonlinear acoustics applications, especially in making diffraction corrections for the nonlinearity parameter determination, because of their computational efficiency and accuracy.
Little, Daniel
2006-01-01
...). The reason this is so is due to hierarchies that we take for granted. By hierarchies I mean that there is a layer of representation of us as individuals, as military professional, as members of a military unit and as citizens of an entire nation...
2006-09-01
[Presentation residue: slides on the MIT Beer Game supply-chain simulation (orders take two weeks to arrive) by Professor Sterman, MIT Sloan School of Management; sources: http://beergame.mit.edu/ and http://web.mit.edu/jsterman/www/ SDG /beergame.html; closing slide asks "What is the Significance of Representation".]
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that
Nakhleh, Luay
2014-03-12
I proposed to develop computationally efficient tools for accurate detection and reconstruction of microbes' complex evolutionary mechanisms, thus enabling rapid and accurate annotation, analysis and understanding of their genomes. To achieve this goal, I proposed to address three aspects. (1) Mathematical modeling. A major challenge facing the accurate detection of HGT is that of distinguishing between these two events on the one hand and other events that have similar "effects." I proposed to develop a novel mathematical approach for distinguishing among these events. Further, I proposed to develop a set of novel optimization criteria for the evolutionary analysis of microbial genomes in the presence of these complex evolutionary events. (2) Algorithm design. In this aspect of the project, I proposed to develop an array of efficient and accurate algorithms for analyzing microbial genomes based on the formulated optimization criteria. Further, I proposed to test the viability of the criteria and the accuracy of the algorithms in an experimental setting using both synthetic as well as biological data. (3) Software development. I proposed the final outcome to be a suite of software tools which implements the mathematical models as well as the algorithms developed.
Heng-Yi Su
2016-11-01
This paper proposes an efficient approach for the computation of the voltage stability margin (VSM) in a large-scale power grid. The objective is to accurately and rapidly determine the load power margin which corresponds to voltage collapse phenomena. The proposed approach is based on the impedance-match-based technique and the model-based technique. It combines the Thevenin equivalent (TE) network method with the cubic spline extrapolation technique and the continuation technique to achieve fast and accurate VSM computation for a bulk power grid. Moreover, the generator Q limits are taken into account for practical applications. Extensive case studies carried out on Institute of Electrical and Electronics Engineers (IEEE) benchmark systems and the Taiwan Power Company (Taipower, Taipei, Taiwan) system are used to demonstrate the effectiveness of the proposed approach.
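The impedance-match principle underlying the Thevenin-equivalent method can be shown in a few lines: for a source behind an impedance Zth feeding a load of fixed power-factor angle, deliverable power peaks, i.e. the stability margin is exhausted, exactly when the load impedance magnitude equals |Zth|. The per-unit values below are assumptions for illustration, not data from the paper's test systems.

```python
import numpy as np

# Impedance-match view of voltage collapse (assumed per-unit values): a
# Thevenin source E behind Zth feeds a load with fixed power-factor angle.
E = 1.0
Zth = 0.1 + 0.3j
theta = np.deg2rad(25.0)                     # load power-factor angle

l = np.linspace(0.05, 1.5, 2901)             # swept load impedance magnitudes
Zl = l * (np.cos(theta) + 1j * np.sin(theta))
I = E / (Zth + Zl)
P = np.abs(I) ** 2 * Zl.real                 # power delivered to the load

l_star = l[np.argmax(P)]                     # nose of the PV curve
print(l_star, abs(Zth))                      # the two magnitudes coincide
```

A VSM scheme built on this idea estimates Zth from local measurements and reports how far the current operating point sits from the |Zl| = |Zth| match condition.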
BLESS 2: accurate, memory-efficient and fast error correction method.
Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming
2016-08-01
The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
An Efficient Hybrid DSMC/MD Algorithm for Accurate Modeling of Micro Gas Flows
Liang, Tengfei
2013-01-01
Aiming at simulating micro gas flows with accurate boundary conditions, an efficient hybrid algorithm is developed by combining the molecular dynamics (MD) method with the direct simulation Monte Carlo (DSMC) method. The efficiency comes from the fact that the MD method is applied only within the gas-wall interaction layer, characterized by the cut-off distance of the gas-solid interaction potential, to resolve accurately the gas-wall interaction process, while the DSMC method is employed in the remaining portion of the flow field to efficiently simulate rarefied gas transport outside the gas-wall interaction layer. A unique feature of the present scheme is that the coupling between the two methods is realized by matching the molecular velocity distribution function at the DSMC/MD interface, hence there is no need for one-to-one mapping between a MD gas molecule and a DSMC simulation particle. Further improvement in efficiency is achieved by taking advantage of gas rarefaction inside the gas-wall interaction layer and by employing the "smart-wall model" proposed by Barisik et al. The developed hybrid algorithm is validated on two classical benchmarks, namely the 1-D Fourier thermal problem and the Couette shear flow problem. Both the accuracy and efficiency of the hybrid algorithm are discussed. As an application, the hybrid algorithm is employed to simulate the thermal transpiration coefficient in the free-molecule regime for a system with an atomically smooth surface. The result is utilized to validate the coefficients calculated from the pure DSMC simulation with Maxwell and Cercignani-Lampis gas-wall interaction models. © 2014 Global-Science Press.
Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters
Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.
2004-12-01
Our effort is devoted to developing data mining technology for improving efficiency and accuracy of the geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We will describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted on the Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We will also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them on appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed to catch global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components with the objective of minimizing the overall mean square errors over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly. The preliminary results on various
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O (100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6
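The non-parametric half of the hybrid strategy above, Gaussian kernel density estimation from a small ensemble, is simple enough to sketch in 1D. The intermittent two-regime "truth", the ensemble size, and the bandwidth below are assumptions for illustration; the paper's algorithms combine such a KDE with closed-form conditional Gaussians in the high-dimensional subspace.

```python
import numpy as np

def gaussian_kde_1d(samples, x, bandwidth):
    """Non-parametric Gaussian kernel density estimate, the low-dimensional
    ingredient of the hybrid strategy sketched here."""
    z = (x[:, None] - samples[None, :]) / bandwidth
    norm = len(samples) * bandwidth * np.sqrt(2.0 * np.pi)
    return np.exp(-0.5 * z ** 2).sum(axis=1) / norm

rng = np.random.default_rng(1)
# Fat-tailed toy "truth": an intermittent mixture of a quiet regime and a
# rare extreme regime, sampled by only O(100) ensemble members.
n = 100
quiet = rng.random(n) < 0.9
ensemble = np.where(quiet, rng.normal(0.0, 1.0, n), rng.normal(0.0, 5.0, n))

x = np.linspace(-15.0, 15.0, 601)
pdf = gaussian_kde_1d(ensemble, x, bandwidth=0.8)
mass = pdf.sum() * (x[1] - x[0])   # numerical integral of the estimate
print(mass)                        # close to 1
```

Because each ensemble member contributes a full Gaussian kernel rather than a point mass, a modest ensemble already yields a smooth, normalized estimate that retains the fat tails of the intermittent regime.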
Bogdanov, Andrey; Kavun, Elif Bilge; Tischhauser, Elmar
2012-01-01
An accurate estimation of the success probability and data complexity of linear cryptanalysis is a fundamental question in symmetric cryptography. In this paper, we propose an efficient reconfigurable hardware architecture to compute the success probability and data complexity of Matsui's Algorithm...... block lengths ensures that any empirical observations are not due to differences in statistical behavior for artificially small block lengths. Rather surprisingly, we observed in previous experiments a significant deviation between the theory and practice for Matsui's Algorithm 2 for larger block sizes...
Efficient representation of DNA data for pattern recognition using failure factor oracles
Cleophas, Loek; Kourie, Derrick G.; Watson, Bruce W.
2013-01-01
In indexing of and pattern matching on DNA sequences, representing all factors of a sequence is important. One efficient, compact representation is the factor oracle (FO). At the same time, any classical deterministic finite automata (DFA) can be transformed to a so-called failure one (FDFA), which
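The factor oracle mentioned above has a compact online construction (due to Allauzen, Crochemore and Raffinot). The sketch below is that generic construction, not the authors' failure-DFA variant: an automaton with at most 2n-1 transitions that accepts at least every factor of the word.

```python
def build_factor_oracle(word):
    """Factor oracle: state i+1 is reached after word[:i+1]; supply links are
    the failure-style pointers followed during construction."""
    trans = [dict() for _ in range(len(word) + 1)]   # state -> {symbol: state}
    supply = [-1] * (len(word) + 1)
    for i, c in enumerate(word):
        trans[i][c] = i + 1
        k = supply[i]
        while k > -1 and c not in trans[k]:
            trans[k][c] = i + 1                      # shortcut transition
            k = supply[k]
        supply[i + 1] = trans[k][c] if k > -1 else 0
    return trans

def accepts(trans, s):
    state = 0
    for c in s:
        if c not in trans[state]:
            return False
        state = trans[state][c]
    return True

oracle = build_factor_oracle("abbbaab")
print(all(accepts(oracle, "abbbaab"[i:j])            # every factor is accepted
          for i in range(7) for j in range(i, 8)))
```

The oracle is weak (it may also accept some non-factors), but its linear size and online construction are what make it attractive for indexing long DNA sequences.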
HWNet v2: An Efficient Word Image Representation for Handwritten Documents
Krishnan, Praveen; Jawahar, C. V.
2018-01-01
We present a framework for learning efficient holistic representation for handwritten word images. The proposed method uses a deep convolutional neural network with traditional classification loss. The major strengths of our work lie in: (i) the efficient usage of synthetic data to pre-train a deep network, (ii) an adapted version of ResNet-34 architecture with region of interest pooling (referred as HWNet v2) which learns discriminative features with variable sized word images, and (iii) rea...
An accurate and efficient method for large-scale SSR genotyping and applications.
Li, Lun; Fang, Zhiwei; Zhou, Junfei; Chen, Hong; Hu, Zhangfeng; Gao, Lifen; Chen, Lihong; Ren, Sheng; Ma, Hongyu; Lu, Long; Zhang, Weixiong; Peng, Hai
2017-06-02
Accurate and efficient genotyping of simple sequence repeats (SSRs) constitutes the basis of SSRs as an effective genetic marker with various applications. However, the existing methods for SSR genotyping suffer from low sensitivity, low accuracy, low efficiency and high cost. In order to fully exploit the potential of SSRs as genetic marker, we developed a novel method for SSR genotyping, named as AmpSeq-SSR, which combines multiplexing polymerase chain reaction (PCR), targeted deep sequencing and comprehensive analysis. AmpSeq-SSR is able to genotype potentially more than a million SSRs at once using the current sequencing techniques. In the current study, we simultaneously genotyped 3105 SSRs in eight rice varieties, which were further validated experimentally. The results showed that the accuracies of AmpSeq-SSR were nearly 100 and 94% with a single base resolution for homozygous and heterozygous samples, respectively. To demonstrate the power of AmpSeq-SSR, we adopted it in two applications. The first was to construct discriminative fingerprints of the rice varieties using 3105 SSRs, which offer much greater discriminative power than the 48 SSRs commonly used for rice. The second was to map Xa21, a gene that confers persistent resistance to rice bacterial blight. We demonstrated that genome-scale fingerprints of an organism can be efficiently constructed and candidate genes, such as Xa21 in rice, can be accurately and efficiently mapped using an innovative strategy consisting of multiplexing PCR, targeted sequencing and computational analysis. While the work we present focused on rice, AmpSeq-SSR can be readily extended to animals and micro-organisms. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L
2016-01-01
Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) along reaction paths, using simple test cases with relevance to enzymes. © 2016 Elsevier Inc. All rights reserved.
Sun, Jianwei; Remsing, Richard C; Zhang, Yubo; Sun, Zhaoru; Ruzsinszky, Adrienn; Peng, Haowei; Yang, Zenghui; Paul, Arpita; Waghmare, Umesh; Wu, Xifan; Klein, Michael L; Perdew, John P
2016-09-01
One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science.
Miyahara, H.; Mori, C.; Fleming, R.F.; Dewaraja, Y.K.
1997-01-01
When quantitative measurements of γ-rays using High-Purity Ge (HPGe) detectors are made for a variety of applications, accurate knowledge of the γ-ray detection efficiency is required. The emission rates of γ-rays from sources can be determined quickly in the case that the absolute peak efficiency is calibrated. On the other hand, the relative peak efficiencies can be used for determination of intensity ratios for plural samples and for comparison to a standard source. Thus, both absolute and relative detection efficiencies are important in the use of γ-ray detectors. The objective of this work is to determine the relative gamma-ray peak detection efficiency for an HPGe detector with an uncertainty approaching 0.1%. We used nuclides which emit at least two gamma-rays with energies from 700 to 2400 keV for which the relative emission probabilities are known with uncertainties much smaller than 0.1%. The relative peak detection efficiencies were calculated from measurements of the nuclides 46Sc, 48Sc, 60Co and 94Nb, each emitting two γ-rays with emission probabilities of almost unity. It is important that the various corrections for the emission probabilities, the cascade summing effect, and the self-absorption are small. A third-order polynomial function in the logarithms of both energy and efficiency was fitted to the data, and the peak efficiency predicted at a given energy from the covariance matrix showed an uncertainty of less than 0.5% except near 700 keV. As an application, the emission probabilities of the 1037.5 and 1212.9 keV γ-rays of 48Sc were determined using the function of the highly precise relative peak efficiency. These were 0.9777 ± 0.00079 and 0.02345 ± 0.00017 for the 1037.5 and 1212.9 keV γ-rays, respectively. The sum of these probabilities is close to unity within the uncertainty, which indicates that the results are reliable and that the accuracy has been improved considerably.
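The calibration step above, a third-order polynomial fit in log(energy) versus log(efficiency), can be sketched directly. The data points below are a hypothetical smooth power-law trend, not the paper's measured efficiencies; they only illustrate the fit-and-predict machinery used to form relative-efficiency ratios.

```python
import numpy as np

# Hypothetical relative peak-efficiency data for an HPGe detector: efficiency
# falls roughly as a power law with energy, so a low-order polynomial in
# log(energy) vs log(efficiency) fits it well.
energy_keV = np.array([700.0, 850.0, 1037.5, 1212.9, 1332.5, 1700.0, 2100.0, 2400.0])
efficiency = 1.2e-3 * (energy_keV / 1000.0) ** -0.85   # assumed smooth trend

coeffs = np.polyfit(np.log(energy_keV), np.log(efficiency), deg=3)

def predict(e_keV):
    """Interpolated relative peak efficiency at an arbitrary energy."""
    return np.exp(np.polyval(coeffs, np.log(e_keV)))

ratio = predict(1037.5) / predict(1212.9)
print(ratio)   # relative efficiency ratio used to extract emission probabilities
```

Given measured peak areas for the two 48Sc lines, dividing out this efficiency ratio is exactly how the relative emission probabilities are obtained.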
A network of spiking neurons for computing sparse representations in an energy-efficient way.
Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B
2012-11-01
Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for Gaussian white noise.
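The underlying optimization problem, min over a of 0.5·||x - Da||² + λ||a||₁, can be sketched with plain iterative soft-thresholding (ISTA). This is a centralized baseline of the kind HDA is compared against, not the spiking distributed algorithm itself; the dictionary size and sparsity level are assumptions.

```python
import numpy as np

def ista(D, x, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for sparse coding: a simple (non-spiking)
    solver for min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)            # overcomplete, unit-norm dictionary
a_true = np.zeros(128)
a_true[[5, 40, 99]] = [1.5, -2.0, 1.0]    # 3-sparse ground-truth code
x = D @ a_true

a_hat = ista(D, x)
print(np.sum(np.abs(a_hat) > 0.1))        # a sparse code is recovered
```

In HDA the gradient-like and threshold-like steps are distributed across nodes and communicated as quantized events, which is what makes the dynamics equivalent to integrate-and-fire neurons.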
Zhu Xiangyang
2017-01-01
With the development of cloud computing, outsourcing services to clouds has become a popular business model. However, because data storage and computing are completely outsourced to the cloud service provider, sensitive data of data owners is exposed, which could lead to serious privacy disclosure. In addition, unexpected events, such as software bugs and hardware failures, could cause incomplete or incorrect results to be returned from clouds. In this paper, we propose an efficient and accurate verifiable privacy-preserving multikeyword text search over encrypted cloud data based on hierarchical agglomerative clustering, named MUSE. To improve the efficiency of text search, we propose a novel index structure, the HAC-tree, which is based on a hierarchical agglomerative clustering method and tends to gather high-relevance documents into clusters. Based on the HAC-tree, a noncandidate-pruning depth-first search algorithm is proposed, which can filter out unqualified subtrees and thus accelerate the search process. The secure inner-product algorithm is used to encrypt the HAC-tree index and the query vector. Meanwhile, a completeness verification algorithm is given to verify the search results. Experimental results demonstrate that the proposed method outperforms the existing works DMRS and MRSE-HCI in efficiency and accuracy, respectively.
An Accurate and Efficient Design Tool for Large Contoured Beam Reflectarrays
Zhou, Min; Sørensen, Stig B.; Jørgensen, Erik
2012-01-01
An accurate and efficient tool for the design of contoured beam reflectarrays is presented. It is based on the Spectral Domain Method of Moments, the Local Periodicity approach, and a minimax optimization algorithm. Contrary to conventional phase-only optimization techniques, the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements. The design tool can be used to optimize reflectarrays based on a regular grid as well as an irregular grid. Both co- and cross-polar radiation can be optimized for multiple frequencies, polarizations, and feed illuminations. Two offset contoured beam reflectarrays that radiate a high-gain beam over a European coverage have been designed, manufactured, and measured at the DTU-ESA Spherical Near-Field Antenna Test Facility. An excellent agreement is obtained between the simulated and measured patterns. To show the design...
Ma, Duancheng; Friák, Martin; Pezold, Johann von; Raabe, Dierk; Neugebauer, Jörg
2015-01-01
We propose an approach for the computationally efficient and quantitatively accurate prediction of solid-solution strengthening. It combines the 2-D Peierls–Nabarro model and a recently developed solid-solution strengthening model. Solid-solution strengthening is examined with Al–Mg and Al–Li as representative alloy systems, demonstrating a good agreement between theory and experiments within the temperature range in which the dislocation motion is overdamped. Through a parametric study, two guideline maps of the misfit parameters against (i) the critical resolved shear stress, τ 0 , at 0 K and (ii) the energy barrier, ΔE b , against dislocation motion in a solid solution with randomly distributed solute atoms are created. With these two guideline maps, τ 0 at finite temperatures is predicted for other Al binary systems, and compared with available experiments, achieving good agreement
Hursin, Mathieu; Xiao Shanjie; Jevremovic, Tatjana
2006-01-01
This paper summarizes the theoretical and numerical aspects of the AGENT code methodology as applied to accurate, detailed three-dimensional (3D) multigroup steady-state modeling of neutron interactions in complex heterogeneous reactor domains. For the first time we show the fine-mesh neutron scalar flux distribution in the Purdue research reactor (built over forty years ago). The AGENT methodology is based on the unique combination of three theories: the method of characteristics (MOC), used to simulate neutron transport in a two-dimensional (2D) whole-core heterogeneous calculation; the theory of R-functions, used as a mathematical tool to describe the true geometry and fuse it with the MOC equations; and a one-dimensional (1D) higher-order diffusion correction of the 2D transport model to account for a full 3D heterogeneous whole-core representation. The synergism between the radial 2D transport and the 1D axial transport (which takes into account the axial neutron interactions and leakage), called the 2D/1D method (used in the DeCART and CHAPLET codes), provides a 3D computational solution. The unique synergism between the AGENT geometrical algorithm, capable of modeling any current or future reactor core geometry, and the 3D neutron transport methodology is described in detail. The accuracy and efficiency of 3D AGENT are demonstrated by showing eigenvalues and point-wise flux and reaction-rate distributions in representative reactor geometries. The AGENT code, comprising this synergism, represents a building block of a computational system called the virtual reactor, whose main purpose is to perform 'virtual' experiments and demonstrations of various experiments, mainly at university research reactors.
Efficient and accurate nearest neighbor and closest pair search in high-dimensional space
Tao, Yufei
2010-07-01
Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms an LSB-forest that has strong quality guarantees, but dramatically improves on the efficiency of previous LSH implementations with the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most existing solutions fail to do. We show that, using an LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial...
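The LSB-tree builds on locality-sensitive hashing; a minimal sketch of the underlying E2LSH-style compound hash (not the LSB-tree's Z-order/B-tree machinery) might look like this, with all sizes and parameters hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 1000 points in 50-d space.
d, n, w, k = 50, 1000, 4.0, 8
data = rng.standard_normal((n, d))

# One hash table of k concatenated projections h(p) = floor((a.p + b)/w).
A = rng.standard_normal((k, d))
b = rng.uniform(0, w, size=k)

def key(p):
    return tuple(np.floor((A @ p + b) / w).astype(int))

table = {}
for i, p in enumerate(data):
    table.setdefault(key(p), []).append(i)

# Query with an exact duplicate of a stored point: exact search is run only
# inside the colliding bucket, which is what makes the query sublinear.
q = data[0]
cand = table.get(key(q), [])
best = min(cand, key=lambda i: np.linalg.norm(data[i] - q)) if cand else None
print(best)   # 0
```

A real LSB-tree maps such compound hash values onto a single B-tree key via a space-filling curve, so that nearby buckets are also nearby on disk.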
A robust, efficient and accurate β- pdf integration algorithm in nonpremixed turbulent combustion
Liu, H.; Lien, F.S.; Chui, E.
2005-01-01
Among the many presumed-shape pdf approaches, the presumed β-function pdf is widely used in nonpremixed turbulent combustion models in the literature. However, singularity difficulties at Z = 0 and 1, Z being the mixture fraction, may be encountered in the numerical integration of the β-function pdf, and there are few publications addressing this issue to date. The present study proposes an efficient, robust and accurate algorithm to overcome these numerical difficulties. The present treatment of the β-pdf integration is first used in the Burke-Schumann solution in conjunction with the k-ε turbulence model in the case of CH4/H2 bluff-body jets and flames. Afterward it is extended to a more complex model, the laminar flamelet model, for the same flow. Numerical results obtained using the proposed β-pdf integration method are compared to experimental values of the velocity field, temperature and constituent mass fractions to illustrate the efficiency and accuracy of the present method.
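One standard way to sidestep the endpoint singularities of the β-pdf at Z = 0 and Z = 1 is to fold them into a Gauss-Jacobi quadrature weight; the sketch below takes that route (it illustrates the numerical issue, not the specific algorithm proposed in the paper):

```python
import numpy as np
from scipy.special import roots_jacobi, beta as beta_fn

def beta_pdf_mean(f, a, b, n=32):
    """Mean of f(Z) under a beta(a, b) pdf on [0, 1], computed with
    Gauss-Jacobi quadrature so the endpoint singularities at Z = 0 and
    Z = 1 (when a < 1 or b < 1) are absorbed into the quadrature weight."""
    # roots_jacobi integrates against (1-x)^alpha (1+x)^beta on [-1, 1];
    # the substitution Z = (1+x)/2 maps this onto the beta weight on [0, 1].
    x, w = roots_jacobi(n, b - 1.0, a - 1.0)
    Z = 0.5 * (x + 1.0)
    integral = 0.5 ** (a + b - 1.0) * np.sum(w * f(Z))
    return integral / beta_fn(a, b)

# Sanity check: E[Z] = a/(a+b), even for singular shapes (a, b < 1).
print(beta_pdf_mean(lambda Z: Z, 0.3, 0.5))   # ~0.375
```

Because no quadrature node ever lands exactly on Z = 0 or Z = 1, the integrand stays finite even for strongly singular shape parameters.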
Optimized efficiency in InP nanowire solar cells with accurate 1D analysis
Chen, Yang; Kivisaari, Pyry; Pistol, Mats-Erik; Anttu, Nicklas
2018-01-01
Semiconductor nanowire arrays are a promising candidate for next-generation solar cells due to enhanced absorption and reduced material consumption. However, to optimize their performance, time-consuming three-dimensional (3D) opto-electronic modeling is usually performed. Here, we develop an accurate one-dimensional (1D) modeling method for the analysis. The 1D modeling is about 400 times faster than 3D modeling and allows direct application of concepts from planar pn-junctions to the analysis of nanowire solar cells. We show that the superposition principle can break down in InP nanowires due to strong surface recombination in the depletion region, giving rise to an I-V behavior similar to that with low shunt resistance. Importantly, we find that the open-circuit voltage of nanowire solar cells is typically limited by contact leakage. Therefore, to increase the efficiency, we have investigated the effect of high-bandgap GaP carrier-selective contact segments at the top and bottom of the InP nanowire, and we find that GaP contact segments improve the solar cell efficiency. Next, we discuss the merits of p-i-n and p-n junction concepts in nanowire solar cells. With GaP carrier-selective top and bottom contact segments in the InP nanowire array, we find that a p-n junction design is superior to a p-i-n junction design. We predict a best efficiency of 25% for a surface recombination velocity of 4500 cm s^-1, corresponding to a non-radiative lifetime of 1 ns in p-n junction cells. The developed 1D model can be used for general modeling of axial p-n and p-i-n junctions in semiconductor nanowires, including LED applications, and we expect faster progress in device modeling using our method.
Accurate determination of the charge transfer efficiency of photoanodes for solar water splitting.
Klotz, Dino; Grave, Daniel A; Rothschild, Avner
2017-08-09
The oxygen evolution reaction (OER) at the surface of semiconductor photoanodes is critical for photoelectrochemical water splitting. This reaction involves photo-generated holes that oxidize water via charge transfer at the photoanode/electrolyte interface. However, a certain fraction of the holes that reach the surface recombine with electrons from the conduction band, giving rise to the surface recombination loss. The charge transfer efficiency η_t, defined as the ratio between the flux of holes that contribute to the water oxidation reaction and the total flux of holes that reach the surface, is an important parameter that helps to distinguish between bulk and surface recombination losses. However, accurate determination of η_t by conventional voltammetry measurements is complicated because only the total current is measured and it is difficult to discern between different contributions to the current. Chopped light measurement (CLM) and hole scavenger measurement (HSM) techniques are widely employed to determine η_t, but they often lead to errors resulting from instrumental as well as fundamental limitations. Intensity modulated photocurrent spectroscopy (IMPS) is better suited for accurate determination of η_t because it provides direct information on both the total photocurrent and the surface recombination current. However, careful analysis of IMPS measurements at different light intensities is required to account for nonlinear effects. This work compares the η_t values obtained by these methods using heteroepitaxial thin-film hematite photoanodes as a case study. We show that a wide spread of η_t values is obtained by different analysis methods, and even within the same method different values may be obtained depending on instrumental and experimental conditions such as the light source and light intensity. Statistical analysis of the results obtained for our model hematite photoanode shows good correlation between different methods for...
Real-space quadrature: A convenient, efficient representation for multipole expansions
Rogers, David M.
2015-01-01
Multipoles are central to the theory and modeling of polarizable and nonpolarizable molecular electrostatics. This has made a representation in terms of point charges a highly sought-after goal, since rotation of multipoles is a bottleneck in molecular dynamics implementations. All known point charge representations are orders of magnitude less efficient than spherical harmonics, due either to using too many fixed charge locations or to nonlinear fitting of fewer charge locations. We present the first complete solution to this problem: completely replacing spherical harmonic basis functions by a dramatically simpler set of weights associated with fixed, discrete points on a sphere. This representation is shown to be space optimal. It reduces the spherical harmonic decomposition of Poisson's operator to pairwise summations over the point set. As a corollary, we also show exact quadrature-based formulas for contraction over trace-free supersymmetric 3D tensors. Moreover, multiplication of spherical harmonic basis functions translates to a direct product in this representation.
Andrej Ficko
2015-03-01
Underuse of nonindustrial private forests in developed countries has been interpreted mostly as a consequence of the prevailing noncommodity objectives of their owners. Recent empirical studies have indicated a correlation between the harvesting behavior of forest owners and a specific conceptualization of appropriate forest management described as "nonintervention" or "hands-off" management. We aimed to fill the large gap in knowledge about social representations of forest management in Europe, and ours is, to our knowledge, the most rigorous elicitation of forest owner representations in Europe to date. We conducted 3099 telephone interviews with randomly selected forest owners in Slovenia, asking them whether they thought they managed their forest efficiently, what the possible reasons for underuse were, and what they understood by forest management. Building on social representations theory and applying a series of structural equation models, we tested the existence of three latent constructs of forest management and estimated whether and how much these constructs correlated with the perception of resource efficiency. Forest owners conceptualized forest management as a mixture of maintenance, ecosystem-centered, and economics-centered management. None of the representations had a strong association with the perception of resource efficiency, nor could any be considered a factor preventing forest owners from cutting more. The underuse of wood resources was mostly due to biophysical constraints in the environment, not a deep-seated philosophical objection to harvesting. The difference between our findings and other empirical studies is primarily explained by historical differences in forestland ownership in different parts of Europe and the United States, the rising number of nonresidential owners, alternative lifestyles, and environmental protectionism, but also by our high methodological rigor in testing the relationships between the constructs.
Li, Pei-Nan; Li, Hong; Wu, Mo-Li; Wang, Shou-Yu; Kong, Qing-You; Zhang, Zhen; Sun, Yuan; Liu, Jia; Lv, De-Cheng
2012-01-01
Wound measurement is an objective and direct way to trace the course of wound healing and to evaluate therapeutic efficacy. Nevertheless, the accuracy and efficiency of current measurement methods need to be improved. Taking advantage of the reliability of transparency tracing and the accuracy of computer-aided digital imaging, a transparency-based digital imaging approach was established, by which data from 340 wound tracings were collected from 6 experimental groups (8 rats/group) at 8 experimental time points (days 1, 3, 5, 7, 10, 12, 14 and 16) and orderly archived onto a transparency model sheet. This sheet was scanned and its image saved in JPG format. Since a set of standard area units from 1 mm2 to 1 cm2 was integrated into the sheet, the traced areas in the JPG image were measured directly using the "Magnetic lasso tool" in the Adobe Photoshop program. The pixel values (PVs) of individual outlined regions were obtained and recorded at an average speed of 27 seconds per region. All PV data were saved in an Excel file and their corresponding areas calculated simultaneously by the formula Y (PV of the outlined region) / X (PV of the standard area unit) × Z (area of the standard unit). It took a researcher less than 3 hours to finish the area calculation of 340 regions. In contrast, over 3 hours were expended by three skillful researchers to accomplish the same work with the traditional transparency-based method. Moreover, unlike the results obtained traditionally, little variation was found among the data calculated by different persons or with standard area units of different sizes and shapes. Given its accuracy, reproducibility and efficiency, this transparency-based digital imaging approach would be of significant value in basic wound healing research and clinical practice. PMID:22666449
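The area formula stated above is a simple proportion; as a sketch (the function name and sample pixel values are hypothetical):

```python
def wound_area_mm2(region_pv, standard_pv, standard_area_mm2):
    """Area = Y (PV of outlined region) / X (PV of standard unit) * Z (unit area),
    the formula given in the abstract."""
    return region_pv / standard_pv * standard_area_mm2

# Hypothetical pixel counts measured with the "Magnetic lasso tool":
print(wound_area_mm2(45600, 1200, 1.0))  # 38.0 mm^2
```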
Reijneveld Symen A
2011-08-01
Background: Questionnaires used by health services to identify children with psychosocial problems are often rather short. The psychometric properties of such short questionnaires are mostly less than needed for an accurate distinction between children with and without problems. We aimed to assess whether a short Computerized Adaptive Test (CAT) can overcome the weaknesses of short written questionnaires when identifying children with psychosocial problems. Method: We used a Dutch national data set obtained from parents of children invited for a routine health examination by Preventive Child Healthcare, with 205 items on behavioral and emotional problems (n = 2,041; response rate 84%). In a random subsample we determined which items met the requirements of an Item Response Theory (IRT) model to a sufficient degree. Using those items, the item parameters necessary for a CAT were calculated and a cut-off point was defined. In the remaining subsample we determined the validity and efficiency of a Computerized Adaptive Test using simulation techniques, with current treatment status and a clinical score on the Total Problem Scale (TPS) of the Child Behavior Checklist as criteria. Results: Of the 205 items available, 190 sufficiently met the criteria of the underlying IRT model. For 90% of the children a score above or below the cut-off point could be determined with 95% accuracy. The mean number of items needed to achieve this was 12. Sensitivity and specificity with the TPS as a criterion were 0.89 and 0.91, respectively. Conclusion: An IRT-based CAT is a very promising option for the identification of psychosocial problems in children, as it can lead to efficient yet high-quality identification. The results of our simulation study need to be replicated in a real-life administration of this CAT.
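The item-selection step at the heart of such a CAT can be sketched as follows, here under a 2PL IRT model with hypothetical item parameters (the study itself fitted its own IRT model to the 190 retained items):

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item (a: discrimination, b: difficulty)
    at ability level theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))  # 2PL response probability
    return a * a * p * (1.0 - p)

# Hypothetical item bank of (a, b) pairs.
bank = [(1.2, -1.0), (1.5, 0.0), (0.8, 0.5), (1.5, 1.8)]
theta_hat = 0.1                                   # current ability estimate

# A CAT administers the not-yet-used item most informative at theta_hat:
next_item = max(bank, key=lambda ab: item_information(theta_hat, *ab))
print(next_item)   # (1.5, 0.0): high discrimination, difficulty near theta_hat
```

Repeating this select-administer-update loop is what lets the CAT reach a confident above/below cut-off decision with only about 12 items on average.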
Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian
2018-05-08
An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations ( Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016 , 144 , 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017 , 13 , 1647 - 1655 ) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
An Accurate and Efficient User Authentication Mechanism on Smart Glasses Based on Iris Recognition
Yung-Hui Li
2017-01-01
In modern society, mobile devices (such as smartphones and wearable devices) have become indispensable to almost everyone, and people store personal data on these devices. Therefore, how to implement a user authentication mechanism for private data protection on mobile devices is a very important issue. In this paper, an intelligent iris recognition mechanism is designed to solve the problem of user authentication on wearable smart glasses. Our contributions include both hardware and software. On the hardware side, we design a set of internal infrared camera modules, including a well-designed infrared light source and lens module, which is able to take clear iris images at a distance of 2-5 cm. On the software side, we propose an innovative iris segmentation algorithm that is both efficient and accurate enough to be used on a smart glasses device. Another improvement over traditional iris recognition is that we propose an intelligent Hamming distance (HD) threshold adaptation method which dynamically fine-tunes the HD threshold used for verification according to the empirical data collected. Our final system can perform iris recognition at 66 frames per second on a smart glasses platform with 100% accuracy. As far as we know, this system is the world's first application of iris recognition on smart glasses.
Efficient Online Aggregates in Dense-Region-Based Data Cube Representations
Haddadin, Kais; Lauer, Tobias
In-memory OLAP systems require a space-efficient representation of sparse data cubes in order to accommodate large data sets. On the other hand, most efficient online aggregation techniques, such as prefix sums, are built on dense array-based representations. These are often not applicable to real-world data due to the size of the arrays which usually cannot be compressed well, as most sparsity is removed during pre-processing. A possible solution is to identify dense regions in a sparse cube and only represent those using arrays, while storing sparse data separately, e.g. in a spatial index structure. Previous dense-region-based approaches have concentrated mainly on the effectiveness of the dense-region detection (i.e. on the space-efficiency of the result). However, especially in higher-dimensional cubes, data is usually more cluttered, resulting in a potentially large number of small dense regions, which negatively affects query performance on such a structure. In this paper, our focus is not only on space-efficiency but also on time-efficiency, both for the initial dense-region extraction and for queries carried out in the resulting hybrid data structure. We describe two methods to trade available memory for increased aggregate query performance. In addition, optimizations in our approach significantly reduce the time to build the initial data structure compared to former systems. Also, we present a straightforward adaptation of our approach to support multi-core or multi-processor architectures, which can further enhance query performance. Experiments with different real-world data sets show how various parameter settings can be used to adjust the efficiency and effectiveness of our algorithms.
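The prefix-sum technique mentioned above answers any range-sum aggregate in O(1) after linear preprocessing; a minimal 2-D sketch on a dense array:

```python
import numpy as np

# A tiny dense 2-D "cube" and its inclusive prefix-sum array.
cube = np.arange(1, 13).reshape(3, 4)
P = cube.cumsum(axis=0).cumsum(axis=1)

def range_sum(r1, c1, r2, c2):
    """Sum of cube[r1:r2+1, c1:c2+1] by inclusion-exclusion on P."""
    total = P[r2, c2]
    if r1 > 0: total -= P[r1 - 1, c2]
    if c1 > 0: total -= P[r2, c1 - 1]
    if r1 > 0 and c1 > 0: total += P[r1 - 1, c1 - 1]
    return int(total)

print(range_sum(1, 1, 2, 3))   # 54
```

This constant-time aggregation is what dense-region-based representations try to preserve; the hybrid structure in the paper applies it only inside the detected dense regions while sparse cells are kept in a separate index.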
Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris
2011-01-01
A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the sources of rotorcraft noise and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines, including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high-fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low-speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated against a variety of experimental data sets, such as UH-60A data, DNW test data and HART II test data.
Zhu, Jun; Chen, Lijun; Ma, Lantao; Li, Dejian; Jiang, Wei; Pan, Lihong; Shen, Huiting; Jia, Hongmin; Hsiang, Chingyun; Cheng, Guojie; Ling, Li; Chen, Shijie; Wang, Jun; Liao, Wenkui; Zhang, Gary
2014-04-01
Defect review is a time-consuming job, and human error makes results inconsistent. Defects located in don't-care areas, such as dark areas, do not hurt yield and need not be reviewed. However, critical-area defects, such as those in clear areas, can impact yield dramatically and require more attention during review. As integrated-circuit dimensions decrease, thousands of mask defects or more are routinely detected during an inspection, and traditional manual or simple classification approaches cannot meet efficiency and accuracy requirements. This paper focuses on an automatic defect management and classification solution using the image output of Lasertec inspection equipment and Anchor pattern-centric image processing technology. The system can handle large numbers of defects with quick and accurate classification. Our experiments include Die-to-Die and Single-Die modes; the classification accuracy reaches 87.4% and 93.3%, respectively. No critical or printable defects are missed in our test cases. The misclassification rates are 0.25% in Die-to-Die mode and 0.24% in Single-Die mode; such rates are encouraging and acceptable for application on a production line. Results can be output and loaded back into the inspection machine for further review. This step helps users validate uncertain defects with clear, magnified images when the captured images cannot provide enough information for a judgment. The system effectively reduces expensive inline defect review time. As a fully automated inline defect management solution, it is compatible with the current inspection approach and can be integrated with optical simulation, including a scoring function, to guide wafer-level defect inspection.
An efficient and accurate 3D displacements tracking strategy for digital volume correlation
Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles
2014-01-01
...An inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly and to avoid...
García-Vela, A.
2000-05-01
A definition of a quantum-type phase-space distribution is proposed in order to represent the initial state of the system in a classical dynamics simulation. The central idea is to define an initial quantum phase-space state of the system as the direct product of the coordinate and momentum representations of the quantum initial state. The phase-space distribution is then obtained as the square modulus of this phase-space state. The resulting phase-space distribution closely resembles the quantum nature of the system's initial state. The initial conditions are sampled with the distribution, using a grid technique in phase space. With this type of sampling, the distribution of initial conditions reproduces more faithfully the shape of the original phase-space distribution. The method is applied to generate initial conditions describing the three-dimensional state of the Ar-HCl cluster prepared by ultraviolet excitation. The photodissociation dynamics is simulated by classical trajectories, and the results are compared with those of a wave packet calculation. The classical and quantum descriptions are found to be in good agreement for those dynamical events less subject to quantum effects. The classical result fails to reproduce the quantum mechanical one for the more strongly quantum features of the dynamics. The properties and applicability of the proposed phase-space distribution and sampling technique are discussed.
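As a hedged illustration of the proposed construction, the sketch below forms the direct-product phase-space distribution |psi(x)|^2 |psi~(p)|^2 for a 1-D Gaussian wave packet on a grid (the paper's actual application is the 3-D Ar-HCl cluster; the grid and wave packet here are hypothetical):

```python
import numpy as np

# 1-D coordinate grid and a normalized Gaussian wave packet.
x = np.linspace(-5.0, 5.0, 256)
dx = x[1] - x[0]
psi_x = (1.0 / np.pi) ** 0.25 * np.exp(-x**2 / 2.0)

# Momentum representation via FFT (the constant phase from the grid origin
# drops out when taking the square modulus).
psi_p = np.fft.fftshift(np.fft.fft(psi_x)) * dx / np.sqrt(2.0 * np.pi)
p = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2.0 * np.pi
dp = p[1] - p[0]

# Direct-product phase-space distribution, normalized on the grid; initial
# conditions (x_i, p_j) for trajectories would be sampled from P.
P = np.outer(np.abs(psi_x)**2, np.abs(psi_p)**2)
P /= P.sum() * dx * dp
```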
Zavitsas, Andreas A
2012-08-23
Viscosities of aqueous solutions of many highly soluble hydrophilic solutes with hydroxyl and amino groups are examined with a focus on improving the concentration range over which Einstein's relationship between solution viscosity and solute volume, V, is applicable accurately. V is the hydrodynamic effective volume of the solute, including any water strongly bound to it and acting as a single entity with it. The widespread practice is to relate the relative viscosity of solute to solvent, η/η(0), to V/V(tot), where V(tot) is the total volume of the solution. For solutions that are not infinitely dilute, it is shown that the volume ratio must be expressed as V/V(0), where V(0) = V(tot) - V. V(0) is the volume of water not bound to the solute, the "free" water solvent. At infinite dilution, V/V(0) = V/V(tot). For the solutions examined, the proportionality constant between the relative viscosity and volume ratio is shown to be 2.9, rather than the 2.5 commonly used. To understand the phenomena relating to viscosity, the hydrodynamic effective volume of water is important. It is estimated to be between 54 and 85 cm(3). With the above interpretations of Einstein's equation, which are consistent with his stated reasoning, the relation between the viscosity and volume ratio remains accurate to much higher concentrations than those attainable with any of the other relations examined that express the volume ratio as V/V(tot).
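The modified Einstein relation described above, with the free-solvent volume V0 = Vtot - V and a proportionality constant of 2.9, reduces to a one-line computation (the function name and sample volumes are hypothetical):

```python
def relative_viscosity(V_solute, V_total, k=2.9):
    """Einstein-type relation using the free-solvent volume V0 = V_tot - V:
    eta/eta0 = 1 + k * V / V0, with k = 2.9 per the abstract (vs. the
    commonly used 2.5 against V/V_tot)."""
    V0 = V_total - V_solute
    return 1.0 + k * V_solute / V0

print(relative_viscosity(10.0, 110.0))  # 1 + 2.9*10/100, i.e. about 1.29
```

At infinite dilution V/V0 approaches V/Vtot, so the two formulations coincide; the difference matters only at the higher concentrations the abstract targets.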
Margot Gerritsen
2008-10-31
Gas-injection processes are widely and increasingly used for enhanced oil recovery (EOR). In the United States, for example, EOR production by gas injection accounts for approximately 45% of total EOR production and has tripled since 1986. An understanding of the multiphase, multicomponent flow taking place in any displacement process is essential for the successful design of gas-injection projects. Due to complex reservoir geometry, reservoir fluid properties and phase behavior, the design of accurate and efficient numerical simulations for the multiphase, multicomponent flow governing these processes is nontrivial. In this work, we developed, implemented and tested a streamline-based solver for gas-injection processes that is computationally very attractive: compared to traditional Eulerian solvers in use by industry, it computes solutions orders of magnitude faster with comparable accuracy, provided that cross-flow effects do not dominate. We contributed to the development of compositional streamline solvers in three significant ways: improvement of the overall framework, allowing improved streamline coverage and partial streamline tracing, among others; parallelization of the streamline code, which significantly improves wall-clock time; and development of new compositional solvers that can be implemented along streamlines as well as in existing Eulerian codes used by industry. We introduced several novel ideas in the streamline framework. First, we developed an adaptive streamline coverage algorithm. Adding streamlines locally can reduce computational costs by concentrating computational effort where needed, and reduce mapping errors. Adapting streamline coverage effectively controls mass-balance errors that mostly result from the mapping from streamlines to the pressure grid. We also introduced the concept of partial streamlines: streamlines that do not necessarily start and/or end at wells. This allows more efficient coverage and avoids...
Gabbard, Carl
2015-04-01
Recent research findings indicate that with older adulthood there are functional decrements in spatial cognition and, more specifically, in the ability to mentally represent and effectively plan motor actions. A typical finding is a significant over- or underestimation of one's actual physical abilities with movement planning, a deficit that has implications for movement efficiency and physical safety. A practical, daily-life example is estimation of reachability--a situation that for the elderly may be linked with fall incidence. One strategy used to mentally represent action is motor imagery--an ability that also declines with advancing age. This brief review highlights research findings on mental representation and motor imagery in the elderly and addresses the implications for improving movement efficiency and lowering the risk of movement-related injury. © The Author(s) 2013.
Le, Long N; Jones, Douglas L
2018-03-01
Audio classification techniques often depend on the availability of a large labeled training dataset for successful performance. However, in many application domains of audio classification (e.g., wildlife monitoring), obtaining labeled data is still a costly and laborious process. Motivated by this observation, a technique is proposed to efficiently learn a clean template from a few labeled, but likely corrupted (by noise and interference), data samples. This learning can be done efficiently via tensorial dynamic time warping on articulation-index-based time-frequency representations of the audio data. The learned template can then be used in audio classification following the standard template-based approach. Experimental results show that the proposed approach outperforms both (1) a recurrent neural network approach and (2) the state of the art in template-based approaches on a wildlife detection application with few training samples.
Efficient and Accurate Computational Framework for Injector Design and Analysis, Phase I
National Aeronautics and Space Administration — CFD codes used to simulate upper stage expander cycle engines are not adequately mature to support design efforts. Rapid and accurate simulations require more...
Liu, Meilin; Bagci, Hakan
2011-01-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results
2012-12-01
Backcalculation of pavement moduli has been an intensively researched subject for more than four decades. Despite the existence of many backcalculation programs employing different backcalculation procedures and algorithms, accurate inverse of the la...
Novel software for quantitative evaluation and graphical representation of masticatory efficiency.
Halazonetis, D J; Schimmel, M; Antonarakis, G S; Christou, P
2013-05-01
Blending of chewing gums of different colours is used in the clinical setting, as a simple and reliable means for the assessment of chewing efficiency. However, the available software is difficult to use in an everyday clinical setting, and there is no possibility of automated classification of the patient's chewing ability in a graph, to facilitate visualisation of the results and to evaluate potential chewing difficulties. The aims of this study were to test the validity of ViewGum - a novel image analysis software for the evaluation of boli derived from a two-colour mixing ability test - and to establish a baseline graph for the representation of the masticatory efficiency in a healthy population. Image analysis demonstrated significant hue variation decrease as the number of chewing cycles increased, indicating a higher degree of colour mixture. Standard deviation of hue (SDHue) was significantly different between all chewing cycles. Regression of the log-transformed values of the medians of SDHue on the number of chewing cycles showed a high statistically significant correlation (r² = 0.94, P test methods by the simplicity of its application. The newly developed ViewGum software provides speed, ease of use and immediate extraction of clinically useful conclusions to the already established method of chewing efficiency evaluation and is a valid adjunct for the evaluation of masticatory efficiency with two-colour chewing gum. © 2013 Blackwell Publishing Ltd.
Vogels, Antonius G. C.; Jacobusse, Gert W.; Reijneveld, Symen A.
2011-01-01
Background: Questionnaires used by health services to identify children with psychosocial problems are often rather short. The psychometric properties of such short questionnaires are usually poorer than needed for an accurate distinction between children with and without problems. We aimed to assess
Młynarski, Wiktor
2015-01-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
On an efficient and accurate method to integrate restricted three-body orbits
Murison, Marc A.
1989-01-01
This work is a quantitative analysis of the advantages of the Bulirsch-Stoer (1966) method, demonstrating that this method is certainly worth considering when working with small N dynamical systems. The results, qualitatively suspected by many users, are quantitatively confirmed as follows: (1) the Bulirsch-Stoer extrapolation method is very fast and moderately accurate; (2) regularization of the equations of motion stabilizes the error behavior of the method and is, of course, essential during close approaches; and (3) when applicable, a manifold-correction algorithm reduces numerical errors to the limits of machine accuracy. In addition, for the specific case of the restricted three-body problem, even a small eccentricity for the orbit of the primaries drastically affects the accuracy of integrations, whether regularized or not; the circular restricted problem integrates much more accurately.
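For context, the Bulirsch-Stoer method advances an ODE over a macro step with the modified midpoint rule at several substep counts and Richardson-extrapolates the results to zero step size. The sketch below is a minimal scalar version under stated simplifications (fixed substep sequence, no adaptive step-size or order control, unlike production integrators); all names are illustrative:

```python
def modified_midpoint(f, t, y, H, n):
    """Advance y' = f(t, y) over one macro step H using n midpoint substeps."""
    h = H / n
    y0, y1 = y, y + h * f(t, y)
    for i in range(1, n):
        y0, y1 = y1, y0 + 2.0 * h * f(t + i * h, y1)
    # Gragg's smoothing formula for the end value
    return 0.5 * (y1 + y0 + h * f(t + H, y1))

def bulirsch_stoer_step(f, t, y, H, seq=(2, 4, 6, 8)):
    """Richardson-extrapolate midpoint results to h -> 0 (error series in h^2)."""
    T = []
    for k, n in enumerate(seq):
        T.append(modified_midpoint(f, t, y, H, n))
        for j in range(k - 1, -1, -1):
            factor = (seq[k] / seq[j]) ** 2
            T[j] = T[j + 1] + (T[j + 1] - T[j]) / (factor - 1.0)
    return T[0]
```

For y' = y, y(0) = 1 integrated over a single macro step H = 1, four extrapolation levels already reproduce e to roughly six decimal places, illustrating the "very fast and moderately accurate" behavior quantified above.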
Liu, Meilin
2011-07-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results show that this new time integration scheme uses considerably larger time steps than the fourth-order Runge-Kutta method when combined with a DG-FEM using higher-order spatial discretization/basis functions for high accuracy. © 2011 IEEE.
Efficient and accurate laser shaping with liquid crystal spatial light modulators
Maxson, Jared M.; Bartnik, Adam C.; Bazarov, Ivan V. [Cornell Laboratory for Accelerator-Based Sciences and Education, Cornell University, Ithaca, New York 14853 (United States)
2014-10-27
A phase-only spatial light modulator (SLM) is capable of precise transverse laser shaping by either functioning as a variable phase grating or by serving as a variable mask via polarization rotation. As a phase grating, the highest-accuracy algorithms, based on computer generated holograms (CGHs), have been shown to yield extended laser shapes with <10% rms error, but little is known about the experimental efficiency of the method in general. In this work, we compare the experimental tradeoff between error and efficiency for both the best known CGH method and polarization rotation-based intensity masking when generating hard-edged flat-top beams. We find that the masking method performs comparably with CGHs, both having rms error < 10% with efficiency > 15%. Informed by best practices for high efficiency from an SLM phase grating, we introduce an adaptive refractive algorithm which has high efficiency (92%) but also higher error (16%), for nearly cylindrically symmetric cases.
Liu, Meilin
2012-08-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)^m, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)^m schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.
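In a classical PE(CE)^m scheme, an explicit multistep formula predicts (P), then an evaluate-correct cycle (CE) is applied m times. The paper's contribution is replacing the classical coefficients with numerically constructed ones; the sketch below shows only the classical baseline it is compared against, here a two-step Adams-Bashforth predictor with a trapezoidal (Adams-Moulton) corrector, with all names illustrative:

```python
def pece_m_step(f, t, y, h, f_prev, m=2):
    """One classical PE(CE)^m step for y' = f(t, y).

    P : 2-step Adams-Bashforth predictor (needs f at the previous step)
    CE: trapezoidal corrector, evaluated and applied m times
    Returns the new state and f(t, y) for use as f_prev on the next step.
    """
    fp = f(t, y)                                   # E on the current state
    y_new = y + h * (1.5 * fp - 0.5 * f_prev)      # P: Adams-Bashforth 2
    for _ in range(m):                             # (CE)^m
        y_new = y + 0.5 * h * (fp + f(t + h, y_new))
    return y_new, fp
```

Each extra corrector pass tightens the fixed-point iteration toward the implicit trapezoidal solution; the numerically constructed coefficients described above play the same structural role but admit larger stable time steps.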
Liu, Meilin; Sirenko, Kostyantyn; Bagci, Hakan
2012-01-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)^m, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)^m schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.
Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo
2018-05-01
The first-order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it is inaccurate in calculating the failure probability for highly nonlinear performance functions. The second-order reliability method is therefore required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
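The symmetric rank-one (SR1) update mentioned above builds a Hessian approximation from successive step and gradient-difference pairs, avoiding explicit second derivatives. A minimal sketch of the update itself in pure Python, with the customary skip-safeguard when the denominator nearly vanishes (the surrounding RBDO loop is not shown, and all names are illustrative):

```python
def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one update: B+ = B + (r r^T) / (r^T s), with r = y - B s.

    B : current n x n Hessian approximation (list of lists)
    s : step between iterates, y : corresponding gradient difference
    """
    n = len(s)
    r = [y[i] - sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    denom = sum(r[i] * s[i] for i in range(n))
    # Standard safeguard: skip the update when r^T s is too small to be reliable.
    if abs(denom) < tol * max(abs(x) for x in r + s + [1.0]):
        return B
    return [[B[i][j] + r[i] * r[j] / denom for j in range(n)] for i in range(n)]
```

On an exactly quadratic function, SR1 has the secant property: after n updates along independent directions the approximation reproduces the true Hessian exactly, which is why it pairs well with the repeated probabilistic-constraint evaluations described above.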
Yungeun Kim
2012-01-01
Full Text Available Indoor localization systems typically locate users in their own local coordinates, while outdoor localization systems use global coordinates. To achieve seamless localization from outdoors to indoors, a handover technique that accurately provides a starting position to the indoor localization system is needed. However, existing schemes either assume that a starting position is known a priori or naïvely take the last location obtained from GPS as the handover point. In this paper, we propose an accurate handover scheme that monitors the signal-to-noise ratio (SNR) of the effective GPS satellites, which are selected according to their altitude. We also propose an energy-efficient handover mechanism that reduces the GPS sampling interval gradually. Accuracy and energy efficiency are experimentally validated with GPS logs obtained in real life.
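The abstract does not spell out the interval-adaptation policy, so the following is a hypothetical sketch of the general idea: sample GPS infrequently while the SNR of high-elevation satellites is strong (the user is clearly outdoors), and shorten the interval as SNR drops so the handover point is captured accurately. All thresholds, bounds, and the scaling factor are assumptions:

```python
def next_sampling_interval(interval, snr_db, snr_threshold=30.0,
                           min_interval=1.0, max_interval=60.0, factor=2.0):
    """Adapt the GPS sampling interval (seconds) to satellite SNR.

    Weak SNR suggests an outdoor-to-indoor transition is near, so we sample
    faster to pin down the handover point; strong SNR lets us back off and
    save energy. Clamped to [min_interval, max_interval].
    """
    if snr_db < snr_threshold:
        return max(min_interval, interval / factor)   # transition likely: speed up
    return min(max_interval, interval * factor)       # clearly outdoors: back off
```

A multiplicative schedule like this reduces the interval gradually rather than all at once, mirroring the energy-efficiency goal stated above.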
Brandenburg, Jan Gerit; Caldeweyher, Eike; Grimme, Stefan
2016-06-21
We extend the recently introduced PBEh-3c global hybrid density functional [S. Grimme et al., J. Chem. Phys., 2015, 143, 054107] by a screened Fock exchange variant based on the Henderson-Janesko-Scuseria exchange hole model. While the excellent performance of the global hybrid is maintained for small covalently bound molecules, its performance for computed condensed phase mass densities is further improved. Most importantly, a speed up of 30 to 50% can be achieved and especially for small orbital energy gap cases, the method is numerically much more robust. The latter point is important for many applications, e.g., for metal-organic frameworks, organic semiconductors, or protein structures. This enables an accurate density functional based electronic structure calculation of a full DNA helix structure on a single core desktop computer which is presented as an example in addition to comprehensive benchmark results.
A Highly Accurate and Efficient Analytical Approach to Bridge Deck Free Vibration Analysis
D.J. Gorman
2000-01-01
Full Text Available The superposition method is employed to obtain an accurate analytical-type solution for the free vibration frequencies and mode shapes of multi-span bridge decks. Free-edge conditions are imposed on the long edges running in the direction of the deck. Inter-span support is of the simple (knife-edge) type. The analysis is valid regardless of the number of spans or their individual lengths. Exact agreement is found when computed results are compared with known eigenvalues for bridge decks with all spans of equal length. Mode shapes and eigenvalues are presented for typical bridge decks of three and four spans. In each case torsional and non-torsional modes are studied.
Kramer, T; Heller, E J; Parrott, R E
2008-01-01
Time-dependent quantum mechanics provides an intuitive picture of particle propagation in external fields. Semiclassical methods link the classical trajectories of particles with their quantum mechanical propagation. Many analytical results and a variety of numerical methods have been developed to solve the time-dependent Schroedinger equation. The time-dependent methods work for nearly arbitrarily shaped potentials, including sources and sinks via complex-valued potentials. Many quantities are measured at fixed energy, which is seemingly not well suited for a time-dependent formulation. Very few methods exist to obtain the energy-dependent Green function for complicated potentials without resorting to ensemble averages or using certain lead-in arrangements. Here, we demonstrate in detail a time-dependent approach, which can accurately and effectively construct the energy-dependent Green function for very general potentials. The applications of the method are numerous, including chemical, mesoscopic, and atomic physics
Xiangyang, Zhu; Hua, Dai; Xun, Yi; Geng, Yang; Xiao, Li
2017-01-01
With the development of cloud computing, outsourcing services to clouds has become a popular business model. However, because data storage and computing are completely outsourced to the cloud service provider, sensitive data of data owners are exposed, which could lead to serious privacy disclosure. In addition, some unexpected events, such as software bugs and hardware failures, could cause incomplete or incorrect results to be returned from clouds. In this paper, we propose an efficient ...
An accurate and computationally efficient small-scale nonlinear FEA of flexible risers
Rahmati, MT; Bahai, H; Alfano, G
2016-01-01
This paper presents a highly efficient small-scale, detailed finite-element modelling method for flexible risers, which can be effectively implemented in a fully nested (FE2) multiscale analysis based on computational homogenisation. By exploiting cyclic symmetry and applying periodic boundary conditions, only a small fraction of a flexible pipe is used for a detailed nonlinear finite-element analysis at the small scale. In this model, using three-dimensional elements, all layer components are...
MobileFaceNets: Efficient CNNs for Accurate Real-time Face Verification on Mobile Devices
Chen, Sheng; Liu, Yang; Gao, Xiang; Han, Zhen
2018-01-01
In this paper, we propose a class of extremely efficient CNN models, MobileFaceNets, which use fewer than 1 million parameters and are specifically tailored for high-accuracy real-time face verification on mobile and embedded devices. We first make a simple analysis of the weaknesses of common mobile networks for face verification. These weaknesses are well overcome by our specifically designed MobileFaceNets. Under the same experimental conditions, our MobileFaceNets achieve significantly sup...
TOPLHA: an accurate and efficient numerical tool for analysis and design of LH antennas
Milanesio, D.; Lancellotti, V.; Meneghini, O.; Maggiora, R.; Vecchi, G.; Bilato, R.
2007-01-01
Auxiliary ICRF heating systems in tokamaks often involve large complex antennas, made up of several conducting straps hosted in distinct cavities that open towards the plasma. The same holds especially true in the LH regime, wherein the antennas consist of arrays of many phased waveguides. Upon observing that the various cavities or waveguides couple to each other only through the EM fields existing over the plasma-facing apertures, we self-consistently formulated the EM problem as a convenient set of multiple coupled integral equations. Subsequent application of the Method of Moments yields a highly sparse algebraic system; formal inversion of the system matrix is therefore not very memory demanding, even though the number of unknowns may be quite large (typically 10^5 or so). The overall strategy has been implemented in an enhanced version of TOPICA (Torino Polytechnic Ion Cyclotron Antenna) and in a newly developed code named TOPLHA (Torino Polytechnic Lower Hybrid Antenna). Both are simulation and prediction tools for plasma-facing antennas that incorporate commercial-grade 3D graphic interfaces along with an accurate description of the plasma. In this work we present the new proposed formulation along with examples of application to real-life large LH antenna systems.
Johnston, Hans; Liu Jianguo
2004-01-01
We present numerical schemes for the incompressible Navier-Stokes equations based on a primitive variable formulation in which the incompressibility constraint has been replaced by a pressure Poisson equation. The pressure is treated explicitly in time, completely decoupling the computation of the momentum and kinematic equations. The result is a class of extremely efficient Navier-Stokes solvers. Full time accuracy is achieved for all flow variables. The key to the schemes is a Neumann boundary condition for the pressure Poisson equation which enforces the incompressibility condition for the velocity field. Irrespective of explicit or implicit time discretization of the viscous term in the momentum equation, the explicit time discretization of the pressure term does not affect the time step constraint. Indeed, we prove unconditional stability of the new formulation for the Stokes equation with explicit treatment of the pressure term and first or second order implicit treatment of the viscous term. Systematic numerical experiments for the full Navier-Stokes equations indicate that a second order implicit time discretization of the viscous term, with the pressure and convective terms treated explicitly, is stable under the standard CFL condition. Additionally, various numerical examples are presented, including both implicit and explicit time discretizations, using spectral and finite difference spatial discretizations, demonstrating the accuracy, flexibility and efficiency of this class of schemes. In particular, a Galerkin formulation is presented requiring only C^0 elements to implement.
He, Jieyue; Li, Chaojun; Ye, Baoliu; Zhong, Wei
2012-06-25
Most computational algorithms mainly focus on detecting highly connected subgraphs in PPI networks as protein complexes but ignore their inherent organization. Furthermore, many of these algorithms are computationally expensive. However, recent analysis indicates that experimentally detected protein complexes generally contain core/attachment structures. In this paper, a Greedy Search Method based on Core-Attachment structure (GSM-CA) is proposed. The GSM-CA method detects densely connected regions in large protein-protein interaction networks based on the edge weight and two criteria for determining core nodes and attachment nodes. The GSM-CA method improves prediction accuracy compared to other similar module detection approaches; however, it is computationally expensive. Many module detection approaches are based on traditional hierarchical methods, which are also computationally inefficient because the hierarchical tree structure produced by these approaches cannot provide adequate information to identify whether a network belongs to a module structure or not. In order to speed up the computational process, the Greedy Search Method based on Fast Clustering (GSM-FC) is proposed in this work. The edge-weight-based GSM-FC method uses a greedy procedure to traverse all edges just once to separate the network into a suitable set of modules. The proposed methods are applied to the protein interaction network of S. cerevisiae. Experimental results indicate that many significant functional modules are detected, most of which match known complexes. Results also demonstrate that the GSM-FC algorithm is faster and more accurate than other competing algorithms. Based on the new edge weight definition, the proposed algorithm takes advantage of the greedy search procedure to separate the network into a suitable set of modules. Experimental analysis shows that the identified modules are statistically significant. The algorithm can reduce the
On the implementation of an accurate and efficient solver for convection-diffusion equations
Wu, Chin-Tien
In this dissertation, we examine several different aspects of computing the numerical solution of the convection-diffusion equation. The solution of this equation often exhibits sharp gradients due to Dirichlet outflow boundaries or discontinuities in boundary conditions. Because of the singularly perturbed nature of the equation, numerical solutions often have severe oscillations when grid sizes are not small enough to resolve sharp gradients. To overcome such difficulties, the streamline diffusion discretization method can be used to obtain an accurate approximate solution in regions where the solution is smooth. To increase accuracy of the solution in the regions containing layers, adaptive mesh refinement and mesh movement based on a posteriori error estimations can be employed. An error-adapted mesh refinement strategy based on a posteriori error estimations is also proposed to resolve layers. For solving the sparse linear systems that arise from discretization, geometric multigrid (MG) and algebraic multigrid (AMG) are compared. In addition, both methods are also used as preconditioners for Krylov subspace methods. We derive some convergence results for MG with line Gauss-Seidel smoothers and bilinear interpolation. Finally, while considering adaptive mesh refinement as an integral part of the solution process, it is natural to set a stopping tolerance for the iterative linear solvers on each mesh stage so that the difference between the approximate solution obtained from iterative methods and the finite element solution is bounded by an a posteriori error bound. Here, we present two stopping criteria. The first is based on a residual-type a posteriori error estimator developed by Verfurth. The second is based on an a posteriori error estimator, using local solutions, developed by Kay and Silvester. Our numerical results show the refined mesh obtained from the iterative solution which satisfies the second criterion is similar to the refined mesh obtained from
Fast and accurate determination of the detergent efficiency by optical fiber sensors
Patitsa, Maria; Pfeiffer, Helge; Wevers, Martine
2011-06-01
An optical fiber sensor was developed to monitor the cleaning efficiency of surfactants. Prior to the measurements, the sensing part of the probe is covered with a uniform standardized soil layer (a lipid multilayer), and a gold mirror is deposited at the end of the optical fiber. The lipid multilayer was deposited on the fiber using the Langmuir-Blodgett technique, and the progress of deposition was followed online by ultraviolet spectroscopy. The invention provides a miniaturized Surface Plasmon Resonance dip-sensor for automated on-line testing that can replace the costly and time-consuming existing methods, representing a breakthrough in detergent testing by combining optical sensing, surface chemistry and automated data acquisition. The sensor is to be used to evaluate the detergency of different cleaning products and also to indicate how formulation, concentration, lipid nature and temperature affect the cleaning behavior of a surfactant.
Rcorrector: efficient and accurate error correction for Illumina RNA-seq reads.
Song, Li; Florea, Liliana
2015-01-01
Next-generation sequencing of cellular RNA (RNA-seq) is rapidly becoming the cornerstone of transcriptomic analysis. However, sequencing errors in the already short RNA-seq reads complicate bioinformatics analyses, in particular alignment and assembly. Error correction methods have been highly effective for whole-genome sequencing (WGS) reads, but are unsuitable for RNA-seq reads, owing to the variation in gene expression levels and alternative splicing. We developed a k-mer based method, Rcorrector, to correct random sequencing errors in Illumina RNA-seq reads. Rcorrector uses a De Bruijn graph to compactly represent all trusted k-mers in the input reads. Unlike WGS read correctors, which use a global threshold to determine trusted k-mers, Rcorrector computes a local threshold at every position in a read. Rcorrector has an accuracy higher than or comparable to existing methods, including the only other method (SEECER) designed for RNA-seq reads, and is more time and memory efficient. With a 5 GB memory footprint for 100 million reads, it can be run on virtually any desktop or server. The software is available free of charge under the GNU General Public License from https://github.com/mourisl/Rcorrector/.
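Rcorrector itself searches paths in a De Bruijn graph with sophisticated scoring; the toy sketch below illustrates only the local-threshold idea described above, deciding whether each k-mer is trusted relative to the read's own nearby k-mer coverage rather than one global cutoff, and substituting a base when it is not. All parameter choices and helper names are hypothetical:

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count all k-mers across the input reads."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def correct_read(read, counts, k, local_frac=0.3):
    """Toy local-threshold correction: a k-mer is 'trusted' if its count
    clears a fraction of the maximum k-mer count in its neighbourhood of
    the read (a stand-in for expression-level-aware thresholds)."""
    read = list(read)
    for i in range(len(read) - k + 1):
        window = ''.join(read[i:i + k])
        local = [counts[''.join(read[j:j + k])]
                 for j in range(max(0, i - k), min(len(read) - k + 1, i + k))]
        threshold = max(1, int(local_frac * max(local)))
        if counts[window] >= threshold:
            continue
        # Untrusted: try substituting the last base of this k-mer.
        best = max('ACGT', key=lambda b: counts[window[:-1] + b])
        if counts[window[:-1] + best] >= threshold:
            read[i + k - 1] = best
    return ''.join(read)
```

The per-position threshold is what lets the same machinery handle highly expressed and rare transcripts, the property that makes WGS-style global cutoffs unsuitable for RNA-seq.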
Tresley, Jonathan; Jose, Jean
2015-04-01
Osteoarthritis of the knee can be a debilitating and extremely painful condition. In patients who desire to postpone knee arthroplasty or in those who are not surgical candidates, percutaneous knee injection therapies have the potential to reduce pain and swelling, maintain joint mobility, and minimize disability. Published studies cite poor accuracy of intra-articular knee joint injections without imaging guidance. We present a sonographically guided posteromedial approach to intra-articular knee joint injections with 100% accuracy and no complications in a consecutive series of 67 patients undergoing subsequent computed tomographic or magnetic resonance arthrography. Although many other standard approaches are available, a posteromedial intra-articular technique is particularly useful in patients with a large body habitus and theoretically allows for simultaneous aspiration of Baker cysts with a single sterile preparation and without changing the patient's position. The posteromedial technique described in this paper is not compared or deemed superior to other standard approaches but, rather, is presented as a potentially safe and efficient alternative. © 2015 by the American Institute of Ultrasound in Medicine.
Loewenstein, Yaniv; Portugaly, Elon; Fromer, Menachem; Linial, Michal
2008-07-01
UPGMA (average linking) is probably the most popular algorithm for hierarchical data clustering, especially in computational biology. However, UPGMA requires the entire dissimilarity matrix in memory. Due to this prohibitive requirement, UPGMA is not scalable to very large datasets. We present a novel class of memory-constrained UPGMA (MC-UPGMA) algorithms. Given any practical memory size constraint, this framework guarantees the correct clustering solution without explicitly requiring all dissimilarities in memory. The algorithms are general and applicable to any dataset. We present a data-dependent characterization of hardness and clustering efficiency. The presented concepts are applicable to any agglomerative clustering formulation. We apply our algorithm to the entire collection of protein sequences, to automatically build a comprehensive evolutionary-driven hierarchy of proteins from sequence alone. The newly created tree captures protein families better than state-of-the-art large-scale methods such as CluSTr, ProtoNet4 or single-linkage clustering. We demonstrate that leveraging the entire mass embodied in all sequence similarities makes it possible to significantly improve on current protein family clusterings, which are unable to directly tackle the sheer mass of this data. Furthermore, we argue that non-metric constraints are an inherent complexity of the sequence space and should not be overlooked. The robustness of UPGMA allows significant improvement, especially for multidomain proteins and for large or divergent families. A comprehensive tree built from all UniProt sequence similarities, together with navigation and classification tools, will be made available as part of the ProtoNet service. A C++ implementation of the algorithm is available on request.
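For reference, plain in-memory UPGMA is simple to state: repeatedly merge the closest pair of clusters and recompute distances to the merged cluster as size-weighted averages. The memory-constrained machinery is the paper's contribution and is not shown; this sketch assumes the full dissimilarity dict fits in memory (it also mutates that dict), and all names are illustrative:

```python
def upgma(dist):
    """Naive UPGMA over a complete dissimilarity dict {(i, j): d} with i < j.

    Returns a nested tuple tree (left, right, merge_height); leaves are the
    original integer labels.
    """
    sizes = {i: 1 for pair in dist for i in pair}
    tree = {i: i for i in sizes}
    next_id = max(sizes) + 1
    while len(sizes) > 1:
        (a, b), d = min(dist.items(), key=lambda kv: kv[1])
        merged, next_id = next_id, next_id + 1
        for c in list(sizes):
            if c in (a, b):
                continue
            # Average linkage: size-weighted mean of the two old distances.
            da = dist.pop((min(a, c), max(a, c)))
            db = dist.pop((min(b, c), max(b, c)))
            dist[(min(merged, c), max(merged, c))] = (
                (sizes[a] * da + sizes[b] * db) / (sizes[a] + sizes[b]))
        dist.pop((a, b))
        sizes[merged] = sizes.pop(a) + sizes.pop(b)
        tree[merged] = (tree.pop(a), tree.pop(b), d)
    return tree[max(tree)]
```

The entire-matrix requirement is visible here: every merge touches distances to all remaining clusters, which is exactly what MC-UPGMA reorganizes to respect a memory budget.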
Efficient 2-D DCT Computation from an Image Representation Point of View
Papakostas, G.A.; Koulouriotis, D.E.; Karakasis, E.G.
2009-01-01
A novel methodology that ensures the computation of 2-D DCT coefficients in gray-scale images as well as in binary ones, with high computation rates, was presented in the previous sections. Through a new image representation scheme, called ISR (Image Slice Representation) the 2-D DCT coefficients can be computed in significantly reduced time, with the same accuracy.
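The ISR scheme itself is not reproduced here; as a reference point, the standard 2-D DCT-II that it accelerates is separable and can be computed as two passes of the 1-D transform with orthonormal scaling. A minimal sketch (square blocks, direct O(n^2) 1-D transform, hypothetical names):

```python
import math

def dct2(block):
    """2-D DCT-II of a square block via two separable 1-D passes."""
    n = len(block)

    def dct1(v):
        out = []
        for k in range(n):
            s = sum(v[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                    for i in range(n))
            # Orthonormal scaling: sqrt(1/n) for the DC term, sqrt(2/n) otherwise.
            scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
            out.append(scale * s)
        return out

    rows = [dct1(r) for r in block]               # transform each row
    cols = [dct1(list(c)) for c in zip(*rows)]    # then each column
    return [list(r) for r in zip(*cols)]          # transpose back
```

For a constant block, all energy lands in the single DC coefficient, a handy sanity check for any faster scheme such as the slice-based one described above.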
Blasques, José Pedro Albergaria Amaral; Bitsche, Robert
2015-01-01
This paper proposes a novel, efficient, and accurate framework for fracture analysis of beam structures with longitudinal cracks. The three-dimensional local stress field is determined using a high-fidelity beam model incorporating a finite element based cross section analysis tool. The Virtual Crack Closure Technique is used for computation of strain energy release rates. The devised framework was employed for analysis of cracks in beams with different cross section geometries. The results show that the accuracy of the proposed method is comparable to that of conventional three-dimensional solid finite element models while using only a fraction of the computation time.
Huang, P-C; Hsu, C-H [Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan (China); Hsiao, I-T [Department Medical Imaging and Radiological Sciences, Chang Gung University, Tao-Yuan, Taiwan (China); Lin, K M [Medical Engineering Research Division, National Health Research Institutes, Zhunan Town, Miaoli County, Taiwan (China)], E-mail: cghsu@mx.nthu.edu.tw
2009-06-15
Accurate modeling of the photon acquisition process in pinhole SPECT is essential for optimizing resolution. In this work, the authors develop an accurate system model in which the pinhole's finite aperture and depth-dependent geometric sensitivity are explicitly included. To achieve high-resolution pinhole SPECT, the voxel size is usually set in the sub-millimeter range, so the total number of image voxels increases accordingly. It is inevitable that a system matrix that models a variety of physical factors will become extremely sophisticated. An efficient implementation for such an accurate system model is proposed in this research. We first use geometric symmetries to reduce redundant entries in the matrix. Due to the sparseness of the matrix, only non-zero terms are stored. A novel center-to-radius recording rule is also developed to effectively describe the relation between a voxel and its related detectors at every projection angle. The proposed system matrix is also suitable for multi-threaded computing. Finally, the accuracy and effectiveness of the proposed system model are evaluated on a workstation equipped with two Quad-Core Intel Xeon processors.
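The sparse-storage idea (keeping only the non-zero system-matrix entries) can be illustrated with the standard CSR layout; this is a generic sketch, not the authors' center-to-radius recording rule:

```python
import numpy as np

def to_csr(dense):
    """Compress a sparse system matrix: keep only non-zero entries.
    Returns (values, col_idx, row_ptr) in the standard CSR layout."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

def csr_matvec(values, col_idx, row_ptr, x):
    # Forward projection y = A @ x touching only the stored entries.
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        s, e = row_ptr[i], row_ptr[i + 1]
        y[i] = values[s:e] @ x[col_idx[s:e]]
    return y
```

Each matrix row can be processed independently, which is what makes such a layout amenable to multi-threaded projection.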
Sharma, Harshita; Zerbe, Norman; Heim, Daniel; Wienert, Stephan; Lohmann, Sebastian; Hellwich, Olaf; Hufnagl, Peter
2016-03-01
This paper describes a novel graph-based method for efficient representation and subsequent classification in histological whole slide images of gastric cancer. Her2/neu immunohistochemically stained and haematoxylin and eosin stained histological sections of gastric carcinoma are digitized. Immunohistochemical staining is used in practice by pathologists to determine the extent of malignancy; however, it is laborious to visually discriminate the corresponding malignancy levels in the more commonly used haematoxylin and eosin stain, and this study attempts to solve this problem using a computer-based method. Cell nuclei are first isolated at high magnification using an automatic cell nuclei segmentation strategy, followed by construction of cell nuclei attributed relational graphs of the tissue regions. These graphs represent tissue architecture comprehensively, as they contain information about cell nuclei morphology as vertex attributes, along with knowledge of neighborhood in the form of edge linking and edge attributes. Global graph characteristics are derived and ensemble learning is used to discriminate between three types of malignancy levels, namely, non-tumor, Her2/neu positive tumor and Her2/neu negative tumor. Performance is compared with state-of-the-art methods, including four texture feature groups (Haralick, Gabor, Local Binary Patterns and Varma-Zisserman features), color and intensity features, and Voronoi diagram and Delaunay triangulation. Texture, color and intensity information is also combined with graph-based knowledge, followed by correlation analysis. Quantitative assessment is performed using two cross-validation strategies. On investigating the experimental results, it can be concluded that the proposed method provides a promising way for computer-based analysis of histopathological images of gastric cancer.
Meyer, Toni; Körner, Christian; Vandewal, Koen; Leo, Karl
2018-04-01
In two-terminal tandem solar cells, the current density-voltage (jV) characteristic of the individual subcells is typically not directly measurable, but is often required for a rigorous device characterization. In this work, we reconstruct the jV-characteristic of organic solar cells from measurements of the external quantum efficiency under applied bias voltages and illumination. We show that it is necessary to perform a bias irradiance variation at each voltage and subsequently conduct a mathematical correction of the differential to the absolute external quantum efficiency to obtain an accurate jV-characteristic. Furthermore, we show that measuring the external quantum efficiency as a function of voltage for a single bias irradiance of 0.36 AM1.5g equivalent sun provides a good approximation of the photocurrent density over voltage curve. The method is tested on a selection of efficient, common single-junctions. The obtained conclusions can easily be transferred to multi-junction devices with serially connected subcells.
Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi
2018-06-01
Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a rational resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.
Ziegler, Benjamin; Rauhut, Guntram
2016-03-01
The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
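The kind of saving available from a direct-product structure can be illustrated with the Kronecker identities pinv(A ⊗ B) = pinv(A) ⊗ pinv(B) and (A ⊗ B) vec(C) = vec(B C Aᵀ) (column-major vec): the least-squares fit never needs the full Kronecker matrix. A minimal NumPy sketch in our own notation, not the paper's code:

```python
import numpy as np

def kron_lstsq(a, b, y):
    """Least-squares solve (A kron B) c = y without forming A kron B.

    Only the small factor matrices are ever touched: the pseudo-inverse
    of the Kronecker product factorizes, and the matrix-vector product
    is rewritten via the vec identity (column-major reshapes)."""
    ma, na = a.shape
    mb, nb = b.shape
    ymat = y.reshape((mb, ma), order="F")            # vec^{-1}(y)
    c = np.linalg.pinv(b) @ ymat @ np.linalg.pinv(a).T
    return c.reshape(-1, order="F")                  # vec(C)
```

For a direct-product PES fit this replaces one huge dense solve by products of matrices whose sizes scale with the one-dimensional grids.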
Spectral representations of neutron-star equations of state
Lindblom, Lee
2010-01-01
Methods are developed for constructing spectral representations of cold (barotropic) neutron-star equations of state. These representations are faithful in the sense that every physical equation of state has a representation of this type and, conversely, every such representation satisfies the minimal thermodynamic stability criteria required of any physical equation of state. These spectral representations are also efficient, in the sense that only a few spectral coefficients are generally required to represent neutron-star equations of state quite accurately. This accuracy and efficiency are illustrated by constructing spectral fits to a large collection of 'realistic' neutron-star equations of state.
Signal Sampling for Efficient Sparse Representation of Resting State FMRI Data
Ge, Bao; Makkie, Milad; Wang, Jin; Zhao, Shijie; Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Shu; Zhang, Wei; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
As the size of brain imaging data such as fMRI grows explosively, it provides us with unprecedented and abundant information about the brain. How to reduce the size of fMRI data without losing much information becomes a more and more pressing issue. Recent studies in the literature tried to deal with it by dictionary learning and sparse representation methods; however, their computational complexity is still high, which hampers the wider application of sparse representation methods to large-scale fMRI datasets. To effectively address this problem, this work proposes to represent resting state fMRI (rs-fMRI) signals of a whole brain via a statistical sampling based sparse representation. First, we sampled the whole brain's signals via different sampling methods; then the sampled signals were aggregated into an input data matrix to learn a dictionary; finally, this dictionary was used to sparsely represent the whole brain's signals and identify the resting state networks. Comparative experiments demonstrate that the proposed signal sampling framework can achieve a ten-fold speed-up in reconstructing concurrent brain networks without losing much information. The experiments on the 1000 Functional Connectomes Project further demonstrate its effectiveness and superiority. PMID:26646924
Mark D McDonnell
2013-05-01
The release of neurotransmitter vesicles after arrival of a pre-synaptic action potential at cortical synapses is known to be a stochastic process, as is the availability of vesicles for release. These processes are known to also depend on the recent history of action-potential arrivals, and this can be described in terms of time-varying probabilities of vesicle release. Mathematical models of such synaptic dynamics frequently are based only on the mean number of vesicles released by each pre-synaptic action potential, since if it is assumed there are sufficiently many vesicle sites, then variance is small. However, it has been shown recently that variance across sites can be significant for neuron and network dynamics, and this suggests the potential importance of studying short-term plasticity using simulations that do generate trial-to-trial variability. Therefore, in this paper we study several well-known conceptual models for stochastic availability and release. We state explicitly the random variables that these models describe and propose efficient algorithms for accurately implementing stochastic simulations of these random variables in software or hardware. Our results are complemented by mathematical analysis and statement of pseudo-code algorithms.
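A minimal trial-level version of the classic binomial release-site model described above might read as follows (an illustrative sketch; parameter names and defaults are ours, not the paper's algorithms):

```python
import numpy as np

def simulate_release(spike_times, n_sites=10, p_release=0.5,
                     tau_recovery=0.5, rng=None):
    """Stochastic simulation of the binomial vesicle model.

    Each of `n_sites` release sites holds at most one vesicle.  On each
    pre-synaptic spike, every occupied site releases independently with
    probability `p_release`; an empty site refills before the next spike
    with probability 1 - exp(-dt / tau_recovery).  Returns the number of
    vesicles released per spike, which fluctuates from trial to trial."""
    rng = rng or np.random.default_rng()
    occupied = np.ones(n_sites, dtype=bool)
    last_t, released_counts = None, []
    for t in spike_times:
        if last_t is not None:
            p_refill = 1.0 - np.exp(-(t - last_t) / tau_recovery)
            occupied |= rng.random(n_sites) < p_refill
        release = occupied & (rng.random(n_sites) < p_release)
        released_counts.append(int(release.sum()))
        occupied &= ~release
        last_t = t
    return released_counts
```

Averaging over many such trials recovers the mean-based description, while individual trials expose the across-site variance the paper emphasizes.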
Plata, Jose J.; Nath, Pinku; Usanmaz, Demet; Carrete, Jesús; Toher, Cormac; de Jong, Maarten; Asta, Mark; Fornari, Marco; Nardelli, Marco Buongiorno; Curtarolo, Stefano
2017-10-01
One of the most accurate approaches for calculating the lattice thermal conductivity, κ, is solving the Boltzmann transport equation starting from third-order anharmonic force constants. In addition to the underlying approximations of ab-initio parameterization, two main challenges are associated with this path: high computational costs and lack of automation in the frameworks using this methodology, which affect the discovery rate of novel materials with ad-hoc properties. Here, the Automatic Anharmonic Phonon Library (AAPL) is presented. It efficiently computes interatomic force constants by making effective use of crystal symmetry analysis, it solves the Boltzmann transport equation to obtain κ, and it allows a fully integrated operation with minimum user intervention, a rational addition to the current high-throughput accelerated materials development framework AFLOW. An "experiment vs. theory" study of the approach is shown, comparing accuracy and speed with respect to other available packages, and for materials characterized by strong electron localization and correlation. By combining AAPL with the pseudo-hybrid functional ACBN0, it is possible to improve accuracy without increasing computational requirements.
Chronopoulos, Andreas E.; Apostolatos, Theocharis A.
2001-01-01
The network of interferometric detectors that is under construction at various locations on Earth is expected to start searching for gravitational waves in a few years. The number of search templates that needs to be cross-correlated with the noisy output of the detectors is a major issue, since computing power capabilities are restricted. By choosing higher and higher post-Newtonian order expansions for the family of search templates, we make sure that our filters are more accurate copies of the real waves that hit our detectors. However, this is not the only criterion for choosing a family of search templates. To make the process of detection as efficient as possible, one needs a family of templates with a relatively small number of members that manages to pick up any detectable signal with only a tiny reduction in signal-to-noise ratio. Evidently, one family is better than another if it accomplishes its goal with a smaller number of templates. Following the geometric language of Owen, we have studied the performance of the post-1.5-Newtonian family of templates on detecting post-2-Newtonian signals for binaries. Several technical issues arise from the fact that the two types of waveforms cannot be made to coincide by a suitable choice of parameters. In general, the parameter space of the signals is not identical with the parameter space of the templates, although in our case they are of the same dimension, and one has to take into account all such peculiarities before drawing any conclusion. An interesting result we have obtained is that the post-1.5-Newtonian family of templates happens to be more economical for detecting post-2-Newtonian signals than the perfectly accurate post-2-Newtonian family of templates itself. The number of templates is reduced by 20-30%, depending on the acceptable level of reduction in signal-to-noise ratio due to discretization of the family of templates. This makes the post-1.5-Newtonian family of templates more favorable.
Study of Solution Representation Language Influence on Efficiency of Integer Sequences Prediction
A. S. Potapov
2015-01-01
Methods based on genetic programming for solving the problem of integer sequence extrapolation are studied in this paper. In order to test the hypothesis that the expressiveness of the program representation language influences prediction effectiveness, a genetic programming method based on several restricted languages for recurrent sequences has been developed. On a sample of single sequences, the implemented method using the more complete language showed results significantly better than those of a current method from the literature based on artificial neural networks. Analysis of the experimental comparison of the realized method across different languages showed that extending the language makes it harder to find regularities that are already predictable in a simpler language, although it makes new classes of sequences accessible for prediction. This effect can be reduced, but not eliminated completely, by extending the language with constructions that make solutions more compact. The research carried out leads to the conclusion that the choice of an adequate language for solution representation alone is not enough to fully solve the problem of integer sequence prediction (and, all the more, the universal prediction problem). However, practically applicable methods can be obtained through the use of genetic programming.
Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kramer, Klaas Jan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
Adoption of efficient end-use technologies is one of the key measures for reducing greenhouse gas (GHG) emissions. How to effectively analyze and manage the costs associated with GHG reductions becomes extremely important for the industry and policy makers around the world. Energy-climate (EC) models are often used for analyzing the costs of reducing GHG emissions for various emission-reduction measures, because an accurate estimation of these costs is critical for identifying and choosing optimal emission reduction measures, and for developing related policy options to accelerate market adoption and technology implementation. However, the accuracy of assessing GHG-emission reduction costs by taking into account the adoption of energy efficiency technologies will depend on how well these end-use technologies are represented in integrated assessment models (IAM) and other energy-climate models. In this report, we first conduct a brief review of different representations of end-use technologies (mitigation measures) in various energy-climate models, followed by the problem statement and a description of the basic concepts of quantifying the cost of conserved energy, including the integration of no-regrets options.
Jamil Ahmad
Medical image collections contain a wealth of information which can assist radiologists and medical experts in diagnosis and disease detection for making well-informed decisions. However, this objective can only be realized if efficient access is provided to semantically relevant cases from the ever-growing medical image repositories. In this paper, we present an efficient method for representing medical images by incorporating visual saliency and deep features obtained from a fine-tuned convolutional neural network (CNN) pre-trained on natural images. A saliency detector is employed to automatically identify regions of interest like tumors, fractures, and calcified spots in images prior to feature extraction. Neuronal activation features, termed neural codes, from different CNN layers are comprehensively studied to identify the most appropriate features for representing radiographs. This study revealed that neural codes from the last fully connected layer of the fine-tuned CNN are the most suitable for representing medical images. The neural codes extracted from the entire image and the salient part of the image are fused to obtain the saliency-injected neural codes (SiNC) descriptor, which is used for indexing and retrieval. Finally, locality sensitive hashing techniques are applied to the SiNC descriptor to acquire short binary codes allowing efficient retrieval in large-scale image collections. Comprehensive experimental evaluations on the radiology images dataset reveal that the proposed framework achieves high retrieval accuracy and efficiency for scalable image retrieval applications and compares favorably with existing approaches.
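The final hashing step can be illustrated with the standard random-hyperplane LSH construction (a generic sketch, not necessarily the exact scheme used in the paper):

```python
import numpy as np

def lsh_codes(features, n_bits=64, seed=0):
    """Random-hyperplane LSH: map real-valued descriptors (e.g. a
    SiNC-style feature vector) to short binary codes whose Hamming
    distance approximates cosine similarity.  Each bit records which
    side of a random hyperplane the descriptor falls on."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((features.shape[1], n_bits))
    return (features @ planes > 0).astype(np.uint8)

def hamming(a, b):
    # Distance between two binary codes = number of differing bits.
    return int(np.count_nonzero(a != b))
```

Retrieval then ranks database images by Hamming distance of their codes, which is far cheaper than comparing the full descriptors.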
Sudarshan, M.; Joseph, J.; Singh, R.
1992-01-01
The validity of various analytical functions and semi-empirical formulae proposed for representing the full energy peak efficiency (FEPE) curves of Ge(Li) and HPGe detectors has been tested for the FEPE of 7.6 cm x 7.6 cm and 5 cm x 5 cm NaI(Tl) detectors in the gamma energy range from 59.5 to 1408.03 keV. The functions proposed by East, and by McNelles and Campbell, provide by far the best representations of the present data. The semi-empirical formula of Mowatt describes the present data very well. The present investigation shows that some of the analytical functions and semi-empirical formulae, which represent the FEPE of the Ge(Li) and HPGe detectors very well, can be quite fruitfully used for NaI(Tl) detectors. (Author)
Sarkeshi, M.; Mahmoudi, R.; Roermund, van A.H.M.
2009-01-01
Limit-cycle based, self-oscillating amplifiers are promising candidates for linear amplification of complex signals with high peak-to-average ratio, while maintaining high power efficiency. Limit-cycle transmitters employ switch class-D power amplifiers in order to achieve high efficiency. In this...
Sathaye, J.; Xu, T.; Galitsky, C.
2010-08-15
Adoption of efficient end-use technologies is one of the key measures for reducing greenhouse gas (GHG) emissions. How to effectively analyze and manage the costs associated with GHG reductions becomes extremely important for the industry and policy makers around the world. Energy-climate (EC) models are often used for analyzing the costs of reducing GHG emissions for various emission-reduction measures, because an accurate estimation of these costs is critical for identifying and choosing optimal emission reduction measures, and for developing related policy options to accelerate market adoption and technology implementation. However, the accuracy of assessing GHG-emission reduction costs by taking into account the adoption of energy efficiency technologies will depend on how well these end-use technologies are represented in integrated assessment models (IAM) and other energy-climate models.
Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan
2014-08-14
In this study, we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules: 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The heats of formation calculated herein using modest levels of theory are in very good agreement with those obtained using the more expensive W1-F12 and W2-F12 methods on amino acids and with G3 results on barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b), in concurrence with previous CBH studies, proves that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.
Rabaza, Ovidio; Gómez-Lorente, Daniel; Pérez-Ocón, Francisco; Peña-García, Antonio
2016-01-01
In this study, new relationships between the energy efficiency of street lighting systems, street width, and luminaire height were derived from the analysis of a large sample of outputs, generated with a software application widely used for lighting design. The result was a quadratic polynomial that perfectly fit the relationships obtained and whose coefficients characterize each type of luminaire. This greatly simplifies the design of lighting facilities because it only uses one equation, but at the same time, takes all necessary variables into account. The procedure maximized the energy efficiency of the street lighting systems, as far as conditions allowed, and greatly facilitated the calculation of the parameters of a basic lighting installation, according to CIE (International Commission on Illumination) recommendations. - Highlights: • New parameter relationships for efficient public lighting design were obtained. • A second-order polynomial simplifies the design of the lighting facilities using only one equation. • The procedure guarantees the maximization of energy efficiency of street lighting systems. • The results have been successfully tested with a well-known and reliable free software.
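The fitted relationship can be sketched as an ordinary least-squares fit of a full quadratic in street width w and luminaire height h (illustrative only; variable names and sample values are ours, not the paper's data):

```python
import numpy as np

def fit_quadratic(w, h, eff):
    """Fit eff ~ a0 + a1*w + a2*h + a3*w^2 + a4*w*h + a5*h^2.
    The six coefficients characterize a luminaire type, so a single
    equation replaces repeated lighting-design simulations."""
    X = np.column_stack([np.ones_like(w), w, h, w**2, w * h, h**2])
    coeffs, *_ = np.linalg.lstsq(X, eff, rcond=None)
    return coeffs
```

Given simulated efficiency values over a grid of widths and heights, the fit recovers the polynomial exactly when the data are themselves quadratic.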
Guescini, Michele; Sisti, Davide; Rocchi, Marco B L; Panebianco, Renato; Tibollo, Pasquale; Stocchi, Vilberto
2013-01-01
Quantitative real-time PCR represents a highly sensitive and powerful technology for the quantification of DNA. Although real-time PCR is well accepted as the gold standard in nucleic acid quantification, there is a largely unexplored area of experimental conditions that limit the application of the Ct method. As an alternative, our research team has recently proposed the Cy0 method, which can compensate for small amplification variations among the samples being compared. However, when there is a marked decrease in amplification efficiency, the Cy0 is impaired, hence determining reaction efficiency is essential to achieve a reliable quantification. The proposed improvement in Cy0 is based on the use of the kinetic parameters calculated in the curve inflection point to compensate for efficiency variations. Three experimental models were used: inhibition of primer extension, non-optimal primer annealing and a very small biological sample. In all these models, the improved Cy0 method increased quantification accuracy up to about 500% without affecting precision. Furthermore, the stability of this procedure was enhanced integrating it with the SOD method. In short, the improved Cy0 method represents a simple yet powerful approach for reliable DNA quantification even in the presence of marked efficiency variations.
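The geometric definition of Cy0, the abscissa where the tangent at the amplification curve's inflection point crosses the background line, can be sketched numerically (an illustrative sketch assuming a smooth, densely sampled curve; not the authors' implementation):

```python
import numpy as np

def cy0(cycles, fluorescence):
    """Cy0 = cycle where the tangent at the curve's inflection point
    crosses zero fluorescence.  The inflection point is located
    numerically as the cycle of maximum slope; the tangent's
    x-intercept is c_i - F(c_i) / F'(c_i)."""
    dF = np.gradient(fluorescence, cycles)
    i = int(np.argmax(dF))                 # inflection: steepest ascent
    slope, f_i, c_i = dF[i], fluorescence[i], cycles[i]
    return c_i - f_i / slope
```

For an ideal logistic curve F(x) = Fmax / (1 + exp(-(x - c)/b)), this construction gives the closed form Cy0 = c - 2b, which makes a convenient sanity check.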
Burkhard, George F.; Hoke, Eric T.; McGehee, Michael D.
2010-01-01
Accurately measuring internal quantum efficiency requires knowledge of absorption in the active layer of a solar cell. The experimentally accessible total absorption includes significant contributions from the electrodes and other nonactive layers. We suggest a straightforward method for calculating the active layer contribution that minimizes error by subtracting optically-modeled electrode absorption from experimentally measured total absorption. © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
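The correction amounts to simple arithmetic on the measured and modeled quantities (a sketch; variable names are ours):

```python
def internal_quantum_efficiency(eqe, absorption_total, absorption_parasitic):
    """IQE = EQE / A_active, where the active-layer absorption is the
    experimentally measured total absorption minus the optically modeled
    parasitic absorption of electrodes and other non-active layers."""
    a_active = absorption_total - absorption_parasitic
    return eqe / a_active
```

For example, with EQE = 0.63, total absorption 0.8 and parasitic absorption 0.1, the active layer absorbs 0.7 of incident light and the IQE is 0.9.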
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
Dexter, Alex; Race, Alan M; Steven, Rory T; Barnes, Jennifer R; Hulme, Heather; Goodwin, Richard J A; Styles, Iain B; Bunch, Josephine
2017-11-07
Clustering is widely used in MSI to segment anatomical features and differentiate tissue types, but existing approaches are both CPU and memory-intensive, limiting their application to small, single data sets. We propose a new approach that uses a graph-based algorithm with a two-phase sampling method that overcomes this limitation. We demonstrate the algorithm on a range of sample types and show that it can segment anatomical features that are not identified using commonly employed algorithms in MSI, and we validate our results on synthetic MSI data. We show that the algorithm is robust to fluctuations in data quality by successfully clustering data with a designed-in variance using data acquired with varying laser fluence. Finally, we show that this method is capable of generating accurate segmentations of large MSI data sets acquired on the newest generation of MSI instruments and evaluate these results by comparison with histopathology.
Pineda, M.; Stamatakis, M.
2017-07-01
Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computation cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
De Backer, A.; Bos, K.H.W. van den [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Van den Broek, W. [AG Strukturforschung/Elektronenmikroskopie, Institut für Physik, Humboldt-Universität zu Berlin, Newtonstraße 15, 12489 Berlin (Germany); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Van Aert, S., E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)
2016-12-15
An efficient model-based estimation algorithm is introduced to quantify the atomic column positions and intensities from atomic resolution (scanning) transmission electron microscopy ((S)TEM) images. This algorithm uses the least squares estimator on image segments containing individual columns, fully accounting for overlap between neighbouring columns, enabling the analysis of a large field of view. For this algorithm, the accuracy and precision with which the atomic column positions and scattering cross-sections can be estimated from annular dark field (ADF) STEM images have been investigated. The highest attainable precision is reached even for low-dose images. Furthermore, the advantages of the model-based approach taking into account overlap between neighbouring columns are highlighted. This is done for the estimation of the distance between two neighbouring columns as a function of their separation, and for the estimation of the scattering cross-section, which is compared to the integrated intensity from a Voronoi cell. To provide end-users with this well-established quantification method, a user-friendly program, StatSTEM, is developed, which is freely available under a GNU public license. - Highlights: • An efficient model-based method for quantitative electron microscopy is introduced. • Images are modelled as a superposition of 2D Gaussian peaks. • Overlap between neighbouring columns is taken into account. • Structure parameters can be obtained with the highest precision and accuracy. • StatSTEM, a user-friendly program (GNU public license), is developed.
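A much-simplified stand-in for the per-segment estimate, an intensity-weighted centroid plus integrated intensity, can be sketched as follows (illustrative only; StatSTEM's actual least-squares fit of 2D Gaussian peaks accounts for overlap between neighbouring columns, which this does not):

```python
import numpy as np

def column_estimate(segment):
    """Estimate an atomic column's position (intensity-weighted centroid)
    and a proxy for its scattering cross-section (total integrated
    intensity) from a single image segment."""
    total = segment.sum()
    yy, xx = np.indices(segment.shape)
    cy = (yy * segment).sum() / total
    cx = (xx * segment).sum() / total
    return (cy, cx), total
```

For an isolated, symmetric peak the centroid coincides with the Gaussian fit; the model-based approach wins precisely when columns overlap.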
Rasmussen, Søren Birk; Perez-Ferreras, Susana; Banares, Miguel A.
2013-01-01
...of, for example, support oxides might take place, which in turn affects the pore size distribution and the porosity of the catalyst, leading to the observation of lower activity values due to decreased catalyst efficiency. This phenomenon can also apply to conventional activity measurements, in the cases where pelletizing and recrushing of samples are performed to obtain adequate particle size fractions for the catalytic bed. A case study of an operando investigation of a V2O3-WO3/TiO2-sepiolite catalyst is used as an example, and simple calculations of the influence of catalyst activity and internal pore diffusion properties are considered in this paper for the evaluation of catalyst performance in, for example, operando reactors. Thus, it is demonstrated that with a pelletizing pressure of...
Hwang Fu, Yu-Hsien; Huang, William Y C; Shen, Kuang; Groves, Jay T; Miller, Thomas; Shan, Shu-Ou
2017-07-28
The signal recognition particle (SRP) delivers ~30% of the proteome to the eukaryotic endoplasmic reticulum or the bacterial plasma membrane. The precise mechanism by which the bacterial SRP receptor, FtsY, interacts with and is regulated at the target membrane remains unclear. Here, quantitative analysis of FtsY-lipid interactions at single-molecule resolution revealed a two-step mechanism in which FtsY initially contacts the membrane via a Dynamic mode, followed by an SRP-induced conformational transition to a Stable mode that activates FtsY for downstream steps. Importantly, mutational analyses revealed extensive auto-inhibitory mechanisms that prevent free FtsY from engaging the membrane in the Stable mode; an engineered FtsY pre-organized into the Stable mode led to indiscriminate targeting in vitro and disrupted FtsY function in vivo. Our results show that the two-step lipid-binding mechanism uncouples the membrane association of FtsY from its conformational activation, thus optimizing the balance between the efficiency and fidelity of co-translational protein targeting.
Orenstein, Yaron; Wang, Yuhao; Berger, Bonnie
2016-06-15
Protein-RNA interactions, which play vital roles in many processes, are mediated through both RNA sequence and structure. CLIP-based methods, which measure protein-RNA binding in vivo, suffer from experimental noise and systematic biases, whereas in vitro experiments capture a clearer signal of protein-RNA binding. Among them, RNAcompete provides binding affinities of a specific protein to more than 240,000 unstructured RNA probes in one experiment. The computational challenge is to infer RNA structure- and sequence-based binding models from these data. The state-of-the-art in sequence models, DeepBind, does not model structural preferences. RNAcontext models both sequence and structure preferences, but is outperformed by GraphProt. Unfortunately, GraphProt cannot detect structural preferences from RNAcompete data due to the unstructured nature of the data, as noted by its developers, nor can it be tractably run on the full RNAcompete dataset. We develop RCK, an efficient, scalable algorithm that infers both sequence and structure preferences based on a new k-mer based model. Remarkably, even though RNAcompete data are designed to be unstructured, RCK can still learn structural preferences from them. RCK significantly outperforms both RNAcontext and DeepBind in in vitro binding prediction for 244 RNAcompete experiments. Moreover, RCK is also faster and uses less memory, which enables scalability. While currently on par with existing methods in in vivo binding prediction in a small-scale test, we demonstrate that RCK will increasingly benefit from experimentally measured RNA structure profiles as compared to computationally predicted ones. By running RCK on the entire RNAcompete dataset, we generate and provide as a resource a set of protein-RNA structure-based models on an unprecedented scale. Software and models are freely available at http://rck.csail.mit.edu/. Contact: bab@mit.edu. Supplementary data are available at Bioinformatics online.
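The sequence half of a k-mer based binding model can be illustrated in a few lines: count k-mers in each probe, fit weights against measured affinities, and score new sequences. This is a hypothetical toy, omitting RCK's structural contexts and its actual training procedure.

```python
import numpy as np
from itertools import product

K = 3
KMERS = {''.join(p): i for i, p in enumerate(product('ACGU', repeat=K))}

def features(seq):
    """k-mer count vector of an RNA sequence."""
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - K + 1):
        v[KMERS[seq[i:i + K]]] += 1
    return v

# Toy training set: probe "affinity" is driven by occurrences of GGA
rng = np.random.default_rng(0)
probes = [''.join(rng.choice(list('ACGU'), 20)) for _ in range(300)]
X = np.array([features(s) for s in probes])
y = np.array([s.count('GGA') for s in probes], dtype=float)

# Ridge-regression fit of per-k-mer weights
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)
score = lambda s: features(s) @ w
```

On this toy data the learned weight for GGA dominates, so GGA-rich sequences score highest.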
DeGregorio, Nicole; Iyengar, Srinivasan S
2018-01-09
We present two sampling measures to gauge critical regions of potential energy surfaces. These sampling measures employ (a) the instantaneous quantum wavepacket density, (b) an approximation to the potential surface, (c) its gradients, and (d) a Shannon information theory based expression that estimates the local entropy associated with the quantum wavepacket. These four criteria together enable a directed sampling of potential surfaces that appears to correctly describe the local oscillation frequencies, or the local Nyquist frequency, of a potential surface. The sampling functions are then utilized to derive a tessellation scheme that discretizes the multidimensional space to enable efficient sampling of potential surfaces. The sampled potential surface is then combined with four different interpolation procedures, namely, (a) local Hermite curve interpolation, (b) low-pass filtered Lagrange interpolation, (c) the monomial symmetrization approximation (MSA) developed by Bowman and co-workers, and (d) a modified Shepard algorithm. The sampling procedure and the fitting schemes are used to (a) compute potential surfaces in highly anharmonic hydrogen-bonded systems and (b) study hydrogen-transfer reactions in biogenic volatile organic compounds (isoprene), where the transferring hydrogen atom is found to demonstrate critical quantum nuclear effects. In the case of isoprene, the algorithm discussed here is used to derive multidimensional potential surfaces along a hydrogen-transfer reaction path to gauge the effect of quantum-nuclear degrees of freedom on the hydrogen-transfer process. Based on the decreased computational effort, facilitated by the optimal sampling of the potential surfaces through the use of the sampling functions discussed here, and the accuracy of the associated potential surfaces, we believe the method will find great utility in the study of quantum nuclear dynamics problems, of which application to hydrogen-transfer reactions and hydrogen
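Criterion (d) can be made concrete with a toy 1-D wavepacket: a sliding-window Shannon entropy of the normalized density. This is our illustrative reading of an entropy-based sampling criterion, not the authors' exact expression.

```python
import numpy as np

def local_entropy(density, window=5):
    """Sliding-window Shannon entropy of a 1-D wavepacket density.
    Flat regions of the density give near-uniform windows (high entropy);
    steep regions give skewed windows (low entropy)."""
    h = np.zeros(len(density))
    half = window // 2
    for i in range(len(density)):
        p = density[max(0, i - half):i + half + 1]
        p = p / p.sum()
        h[i] = -np.sum(p * np.log(p + 1e-30))
    return h

x = np.linspace(-5.0, 5.0, 201)
rho = np.exp(-x**2)        # Gaussian wavepacket density
rho /= rho.sum()
H = local_entropy(rho)     # index 100 is the wavepacket centre
```

A directed sampler could then place more potential-surface evaluations where H (or any of the other three criteria) flags structure.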
Neese, Frank; Wennmohs, Frank; Hansen, Andreas
2009-03-21
Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) were popular in the 1970s and 1980s and have yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second-order Møller-Plesset and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work, an efficient production level implementation of the closed shell CEPA and CPF methods is reported that can be applied to medium sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs) that was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With the conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal mol^(-1). Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as the parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500
Xu, T.T.; Sathaye, J.; Galitsky, C.
2010-09-30
Adoption of efficient end-use technologies is one of the key measures for reducing greenhouse gas (GHG) emissions. As energy programs and policies on carbon regulation take effect, effectively analyzing and managing the costs associated with GHG reductions becomes extremely important for industry and policy makers around the world. Energy-climate (EC) models are often used for analyzing the costs of reducing GHG emissions (e.g., carbon emissions) for various emission-reduction measures, because an accurate estimation of these costs is critical for identifying and choosing optimal emission reduction measures, and for developing related policy options to accelerate market adoption and technology implementation. However, the accuracy of assessing GHG-emission reduction costs that account for the adoption of energy efficiency technologies depends on how well these end-use technologies are represented in integrated assessment models (IAM) and other energy-climate models. In this report, we first conduct a brief overview of different representations of end-use technologies (mitigation measures) in various energy-climate models, followed by problem statements and a description of the basic concepts of quantifying the cost of conserved energy, including integrating non-regrets options. A non-regrets option is defined as a GHG reduction option that is cost effective without considering its additional benefits related to reducing GHG emissions. Based upon these, we develop information on costs of mitigation measures and technological change. These serve as the basis for collating the data on energy savings and costs for their future use in integrated assessment models. In addition to descriptions of the iron and steel making processes and the mitigation measures identified in this study, the report includes tabulated databases on costs of measure implementation, energy savings, carbon-emission reduction, and lifetimes. The cost curve data on mitigation
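The cost of conserved energy mentioned above has a standard textbook form: the upfront cost is annualized with a capital recovery factor and divided by the annual energy savings. A small sketch (all numbers illustrative, not from the report):

```python
def capital_recovery_factor(d, n):
    """Annualize an upfront cost over n years at discount rate d."""
    return d * (1 + d)**n / ((1 + d)**n - 1)

def cost_of_conserved_energy(capital_cost, annual_om_cost, annual_savings_gj,
                             d=0.10, lifetime=15):
    """Cost of conserved energy ($/GJ): annualized cost per unit energy saved.
    A measure is a 'non-regrets' option when its CCE is below the energy price,
    before counting any GHG-related co-benefits."""
    annualized = capital_cost * capital_recovery_factor(d, lifetime) + annual_om_cost
    return annualized / annual_savings_gj

# $1M capital, $20k/yr O&M, 50,000 GJ/yr saved
cce = cost_of_conserved_energy(1_000_000, 20_000, 50_000)
```

Sorting measures by CCE against a carbon or energy price is what produces the cost curves tabulated in the report.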
Moody, Daniela; Wohlberg, Brendt
2018-01-02
An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. The learned dictionaries may be derived using efficient convolutional sparse coding to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of images over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
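The unsupervised k-means step described above can be sketched as follows; for brevity this toy clusters raw feature vectors rather than sparse coefficients over a learned convolutional dictionary.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means. In CoSA the rows of X would be sparse coefficient
    vectors of image patches over a learned dictionary; here they are
    synthetic per-pixel feature vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers

# Two well-separated synthetic "land cover" classes in a 3-band feature space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (50, 3)), rng.normal(1.0, 0.1, (50, 3))])
labels, _ = kmeans(X, 2)
```

Each cluster index then acts as a land-cover category label, and re-clustering imagery from a later date supports change detection.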
Lee, Y. C.; Thompson, H. M.; Gaskell, P. H.
2009-12-01
, industrial and physical applications. However, despite recent modelling advances, the accurate numerical solution of the equations governing such problems is still at a relatively early stage. Indeed, recent studies employing a simplifying long-wave approximation have shown that highly efficient numerical methods are necessary to solve the resulting lubrication equations in order to achieve the level of grid resolution required to accurately capture the effects of micro- and nano-scale topographical features. Solution method: A portable parallel multigrid algorithm has been developed for the above purpose, for the particular case of flow over submerged topographical features. Within the multigrid framework adopted, a W-cycle is used to accelerate convergence in respect of the time dependent nature of the problem, with relaxation sweeps performed using a fixed number of pre- and post-Red-Black Gauss-Seidel Newton iterations. In addition, the algorithm incorporates automatic adaptive time-stepping to avoid the computational expense associated with repeated time-step failure. Running time: 1.31 minutes using 128 processors on BlueGene/P with a problem size of over 16.7 million mesh points.
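A minimal Red-Black Gauss-Seidel smoother of the kind used inside such a multigrid cycle can be sketched for the model Poisson problem; this is a didactic toy, not the lubrication-equation solver of the paper.

```python
import numpy as np

def red_black_gauss_seidel(u, f, h, sweeps=1):
    """Red-Black Gauss-Seidel relaxation for -laplace(u) = f on a uniform
    grid (Dirichlet values held in u's border). In a multigrid W-cycle this
    is the pre-/post-smoother applied on each level."""
    for _ in range(sweeps):
        for colour in (0, 1):                    # red points first, then black
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == colour:
                        u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] +
                                          u[i, j-1] + u[i, j+1] + h * h * f[i, j])
    return u

n = 33
h = 1.0 / (n - 1)
rng = np.random.default_rng(2)
u = rng.standard_normal((n, n))                  # rough, high-frequency error
u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0
e0 = np.abs(u).max()
u = red_black_gauss_seidel(u, np.zeros((n, n)), h, sweeps=20)
```

The sweeps damp the oscillatory error components quickly; the smooth remainder is what the coarse-grid correction of the multigrid cycle removes.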
Zou, Han; Jiang, Hao; Luo, Yiwen; Zhu, Jianjie; Lu, Xiaoxuan; Xie, Lihua
2016-02-22
The location and contextual status (indoor or outdoor) is fundamental and critical information for upper-layer applications, such as activity recognition and location-based services (LBS) for individuals. In addition, optimizations of building management systems (BMS), such as the pre-cooling or heating process of the air-conditioning system according to the human traffic entering or exiting a building, can utilize this information as well. Emerging mobile devices, which are equipped with various sensors, have become a feasible and flexible platform to perform indoor-outdoor (IO) detection. However, power-hungry sensors, such as GPS and WiFi, should be used with caution due to the constrained battery storage on mobile devices. We propose BlueDetect: an accurate, fast-response and energy-efficient scheme for IO detection and seamless LBS running on the mobile device based on the emerging low-power iBeacon technology. By leveraging the on-board Bluetooth module and our proposed algorithms, BlueDetect provides a precise IO detection service that can turn on/off on-board power-hungry sensors smartly and automatically, optimize their performance and reduce the power consumption of mobile devices simultaneously. Moreover, BlueDetect can realize seamless positioning and navigation services, especially in a semi-outdoor environment, which cannot be achieved easily by GPS or an indoor positioning system (IPS). We prototype BlueDetect on Android mobile devices and evaluate its performance comprehensively. The experimental results have validated the superiority of BlueDetect in terms of IO detection accuracy, localization accuracy and energy consumption.
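The abstract does not spell out the detection algorithm; as a hypothetical sketch, an IO decision can be as simple as thresholding the RSSI of iBeacons installed near entrances. The threshold and vote count below are our illustrative choices, not BlueDetect's.

```python
def detect_io(rssi_scans, threshold=-85, min_hits=2):
    """Toy indoor/outdoor detector: declare 'indoor' when at least `min_hits`
    known iBeacons are heard above an RSSI threshold (dBm) in one scan window.
    None means a beacon was not heard at all. Values are illustrative only."""
    hits = sum(1 for rssi in rssi_scans if rssi is not None and rssi > threshold)
    return 'indoor' if hits >= min_hits else 'outdoor'

detect_io([-60, -72, None, -90])    # -> 'indoor'  (two strong beacons)
detect_io([None, -95, None, None])  # -> 'outdoor' (nothing above threshold)
```

The 'outdoor' verdict is what would gate powering up GPS, while 'indoor' would hand off to beacon-based positioning.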
F. Taheri
2011-04-01
Background and Aims: Health, Safety and Environment (HSE) performance measurement of contractors and identification of the best ones can provide a picture of past changes in their HSE performance. Consequently, this may motivate them and provide an opportunity to improve their quality of services. The aim of this study is to rank the contractor companies of one of the Iranian steel manufacturing companies according to their safety behavior and to determine the best combination of contractor companies. Methods: A safety behavior sampling method was used to determine the status of unsafe acts. The fuzzy efficiency numbers of each input were ranked by the Chen & Klein method. To obtain a final ranking, AHP was applied. The rankings obtained by FIEP-AHP were compared to the ranking of DEA. Results: The most frequent unsafe behaviors were related to not using or misusing PPE, using broken tools, and inappropriate working conditions, respectively. A significant relationship between experience, education and age and safety behaviors was obtained (p<0.05). Results showed that companies number 2 and 6 had the best and worst ranks, respectively. Conclusion: Because FIEP increases the power of discrimination, especially when the number of DMUs is lower than the number of inputs and outputs, it can be suggested as an appropriate model for determining the best contractor companies.
Wulf-Andersen, Trine Østergaard
2012-01-01
, and dialogue, of situated participants. The article includes a lengthy example of a poetic representation of one participant’s story, and the author comments on the potential of ‘doing’ poetic representations as an example of writing in ways that challenge what sometimes goes unasked in participative social......
Sharp, Leah Z.; Egorova, Dassia; Domcke, Wolfgang
2010-01-01
Two-dimensional (2D) photon-echo spectra of a single subunit of the Fenna-Matthews-Olson (FMO) bacteriochlorophyll trimer of Chlorobium tepidum are simulated, employing the equation-of-motion phase-matching approach (EOM-PMA). We consider a slightly extended version of the previously proposed Frenkel exciton model, which explicitly accounts for exciton coherences in the secular approximation. The study is motivated by a recent experiment reporting long-lived coherent oscillations in 2D transients [Engel et al., Nature 446, 782 (2007)] and aims primarily at accurate simulations of the spectroscopic signals, with the focus on oscillations of 2D peak intensities with population time. The EOM-PMA accurately accounts for finite pulse durations as well as pulse-overlap effects and does not invoke approximations apart from the weak-field limit for a given material system. The population relaxation parameters of the exciton model are taken from the literature. The effects of various dephasing mechanisms on coherence lifetimes are thoroughly studied. It is found that the experimentally detected multiple frequencies in peak oscillations cannot be reproduced by the employed FMO model, which calls for the development of a more sophisticated exciton model of the FMO complex.
Pan, Liang; Xu, Kun; Li, Qibing; Li, Jiequan
2016-12-01
For computational fluid dynamics (CFD), the generalized Riemann problem (GRP) solver and the second-order gas-kinetic scheme (GKS) provide a time-accurate flux function starting from a discontinuous piecewise linear flow distribution around a cell interface. With the adoption of the time derivative of the flux function, a two-stage Lax-Wendroff-type (L-W for short) time stepping method has recently been proposed in the design of a fourth-order time accurate method for inviscid flow [21]. In this paper, based on the same time-stepping method and the second-order GKS flux function [42], a fourth-order gas-kinetic scheme is constructed for the Euler and Navier-Stokes (NS) equations. In comparison with the formal one-stage time-stepping third-order gas-kinetic solver [24], the current fourth-order method not only reduces the complexity of the flux function, but also improves the accuracy of the scheme. In terms of the computational cost, a two-dimensional third-order GKS flux function takes about six times the computational time of a second-order GKS flux function. However, a fifth-order WENO reconstruction may take more than ten times the computational cost of a second-order GKS flux function. Therefore, it is fully legitimate to develop a two-stage fourth-order time accurate method (two reconstructions) instead of the standard four-stage fourth-order Runge-Kutta method (four reconstructions). Most importantly, the robustness of the fourth-order GKS is as good as that of the second-order one. In current computational fluid dynamics (CFD) research, it is still a difficult problem to extend a higher-order Euler solver to the NS equations due to the change of governing equations from hyperbolic to parabolic type and the initial interface discontinuity. This problem is especially pronounced for hypersonic viscous and heat-conducting flow. The GKS is based on the kinetic equation with the hyperbolic transport and the relaxation source term. The time-dependent GKS flux function
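The two-stage fourth-order idea, where each stage supplies both the flux and its time derivative (exactly what a GRP or GKS flux function provides), can be checked on a scalar linear ODE. The update coefficients below follow the standard two-stage L-W form; for u' = L(u) they reproduce the fourth-order Taylor expansion.

```python
import numpy as np

lam = -1.0
L  = lambda u: lam * u            # the "flux" operator for u' = lam * u
Lt = lambda u: lam * lam * u      # its time derivative along solutions

def step_s2o4(u, dt):
    """Two-stage fourth-order (Lax-Wendroff type) step:
    stage value u1 at t + dt/2, then the full update using L and Lt
    evaluated at both stages."""
    u1 = u + 0.5 * dt * L(u) + dt**2 / 8.0 * Lt(u)
    return u + dt * L(u) + dt**2 / 6.0 * (Lt(u) + 2.0 * Lt(u1))

def integrate(dt):
    u, t = 1.0, 0.0
    while t < 1.0 - 1e-12:
        u = step_s2o4(u, dt)
        t += dt
    return u

err1 = abs(integrate(0.10) - np.exp(lam))
err2 = abs(integrate(0.05) - np.exp(lam))
order = np.log2(err1 / err2)      # observed convergence order, approx. 4
```

Halving the step size cuts the error by about 16x, confirming fourth-order accuracy with only two stage evaluations per step.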
Vaisman Iosif I
2010-10-01
Background: HIV-1 targets human cells expressing both the CD4 receptor, which binds the viral envelope glycoprotein gp120, as well as either the CCR5 (R5) or CXCR4 (X4) co-receptors, which interact primarily with the third hypervariable loop (V3 loop) of gp120. Determination of HIV-1 affinity for either the R5 or X4 co-receptor on host cells facilitates the inclusion of co-receptor antagonists as a part of patient treatment strategies. A dataset of 1193 distinct gp120 V3 loop peptide sequences (989 R5-utilizing, 204 X4-capable) is utilized to train predictive classifiers based on implementations of random forest, support vector machine, boosted decision tree, and neural network machine learning algorithms. An in silico mutagenesis procedure employing multibody statistical potentials, computational geometry, and threading of variant V3 sequences onto an experimental structure is used to generate a feature vector representation for each variant whose components measure environmental perturbations at corresponding structural positions. Results: Classifier performance is evaluated based on stratified 10-fold cross-validation, stratified dataset splits (2/3 training, 1/3 validation), and leave-one-out cross-validation. Best reported values of sensitivity (85%), specificity (100%), and precision (98%) for predicting X4-capable HIV-1 virus, overall accuracy (97%), Matthews correlation coefficient (89%), balanced error rate (0.08), and ROC area (0.97) all reach critical thresholds, suggesting that the models outperform six other state-of-the-art methods and come closer to competing with phenotype assays. Conclusions: The trained classifiers provide instantaneous and reliable predictions regarding HIV-1 co-receptor usage, requiring only translated V3 loop genotypes as input. Furthermore, the novelty of these computational mutagenesis based predictor attributes distinguishes the models as orthogonal and complementary to previous methods that utilize sequence
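The reported metrics all follow their standard confusion-matrix definitions; a small helper (our code, evaluated on toy counts rather than the paper's) makes them concrete.

```python
import math

def metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, precision, accuracy, Matthews correlation
    coefficient and balanced error rate from binary confusion counts,
    with the minority (X4) class as positive."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    acc = (tp + tn) / (tp + fp + tn + fn)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    ber = 0.5 * (fn / (tp + fn) + fp / (tn + fp))
    return sens, spec, prec, acc, mcc, ber

# Toy counts, not the paper's: 9 true X4 hits, 1 false alarm, 1 miss
sens, spec, prec, acc, mcc, ber = metrics(tp=9, fp=1, tn=89, fn=1)
```

On imbalanced data like this (204 X4 vs 989 R5), MCC and balanced error rate are the more informative summaries, which is presumably why the paper reports them alongside accuracy.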
Jangwon Suh
2016-12-01
The use of portable X-ray fluorescence (PXRF) and inductively coupled plasma atomic emission spectrometry (ICP-AES) increases the rapidity and accuracy of soil contamination mapping, respectively. In practice, it is often necessary to repeat the soil contamination assessment and mapping procedure several times during soil management within a limited budget. In this study, we have developed a rapid, inexpensive, and accurate soil contamination mapping method using PXRF data and geostatistical spatial interpolation. To obtain a large quantity of high-quality data for interpolation, in situ PXRF data analyzed at 40 points were transformed to converted PXRF data using the correlation between PXRF and ICP-AES data. The method was applied to an abandoned mine site in Korea to generate a soil contamination map for copper and was validated for investigation speed and prediction accuracy. As a result, regions that required soil remediation were identified. Our method significantly shortened the time required for mapping compared to the conventional mapping method and provided copper concentration estimates with high accuracy, similar to those measured by ICP-AES. Therefore, our method is an effective way of mapping soil contamination if we consistently construct a database based on the correlation between PXRF and ICP-AES data.
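The two steps, a linear calibration from PXRF readings to the ICP-AES scale followed by spatial interpolation, can be sketched as follows. Inverse-distance weighting stands in for the geostatistical interpolation actually used; all numbers are illustrative.

```python
import numpy as np

def calibrate(pxrf, icp):
    """Least-squares line converting in situ PXRF readings to the ICP-AES scale."""
    a, b = np.polyfit(pxrf, icp, 1)
    return lambda x: a * np.asarray(x) + b

def idw(xy, values, grid_xy, power=2):
    """Inverse-distance weighting: a simple stand-in for kriging-style
    geostatistical interpolation of converted concentrations."""
    d = np.linalg.norm(grid_xy[:, None, :] - xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w * values).sum(1) / w.sum(1)

# Toy calibration pairs following ICP = 2 * PXRF + 5 exactly
pxrf = np.array([10.0, 20.0, 30.0, 40.0])
icp = 2 * pxrf + 5
conv = calibrate(pxrf, icp)

# Four sample locations with converted PXRF copper values, one grid point
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
cu = conv([12.0, 18.0, 25.0, 33.0])
est = idw(xy, cu, np.array([[0.5, 0.5]]))
```

Repeating the interpolation over a dense grid yields the contamination map from which remediation regions are delineated.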
Kochmann, Julian; Wulfinghoff, Stephan; Ehle, Lisa; Mayer, Joachim; Svendsen, Bob; Reese, Stefanie
2017-09-01
Recently, two-scale FE-FFT-based methods (e.g., Spahn et al. in Comput Methods Appl Mech Eng 268:871-883, 2014; Kochmann et al. in Comput Methods Appl Mech Eng 305:89-110, 2016) have been proposed to predict the microscopic and overall mechanical behavior of heterogeneous materials. The purpose of this work is the extension to elasto-viscoplastic polycrystals, efficient and robust Fourier solvers and the prediction of micromechanical fields during macroscopic deformation processes. Assuming scale separation, the macroscopic problem is solved using the finite element method. The solution of the microscopic problem, which is embedded as a periodic unit cell (UC) in each macroscopic integration point, is found by employing fast Fourier transforms, fixed-point and Newton-Krylov methods. The overall material behavior is defined by the mean UC response. In order to ensure spatially converged micromechanical fields as well as feasible overall CPU times, an efficient but simple solution strategy for two-scale simulations is proposed. As an example, the constitutive behavior of 42CrMo4 steel is predicted during macroscopic three-point bending tests.
Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models
Stovgaard, Kasper; Andreetta, Christian; Ferkinghoff-Borg, Jesper
2010-01-01
, which is paramount for structure determination based on statistical inference. Results: We present a method for the efficient calculation of accurate SAXS curves based on the Debye formula and a set of scattering form factors for dummy atom representations of amino acids. Such a method avoids......DBN. This resulted in a significant improvement in the decoy recognition performance. In conclusion, the presented method shows great promise for use in statistical inference of protein structures from SAXS data....
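The Debye formula at the heart of the method is I(q) = Σ_i Σ_j f_i f_j sin(q·r_ij)/(q·r_ij); a direct sketch with constant dummy form factors follows (the paper's form factors are q-dependent and residue-specific, so this is only the skeleton).

```python
import numpy as np

def debye_saxs(coords, form_factors, q):
    """Debye formula over dummy-atom positions.
    np.sinc(x) = sin(pi x)/(pi x), so sinc(q r / pi) = sin(q r)/(q r)
    with the diagonal r = 0 handled correctly (sinc(0) = 1)."""
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    ff = form_factors[:, None] * form_factors[None, :]
    return np.array([(ff * np.sinc(qk * r / np.pi)).sum() for qk in q])

coords = np.array([[0.0, 0.0, 0.0],        # three toy "residues" on a line,
                   [3.8, 0.0, 0.0],        # roughly C-alpha spacing (angstrom)
                   [7.6, 0.0, 0.0]])
f = np.ones(3)                              # constant dummy form factors
q = np.linspace(0.001, 0.5, 50)             # scattering vector (1/angstrom)
I = debye_saxs(coords, f, q)
```

At q -> 0 the curve approaches (Σ f_i)² = 9 here; the pairwise loop is the O(N²) cost the paper's coarse-grained representation keeps small.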
Schiffler, Ralf
2014-01-01
This book is intended to serve as a textbook for a course in Representation Theory of Algebras at the beginning graduate level. The text has two parts. In Part I, the theory is studied in an elementary way using quivers and their representations. This is a very hands-on approach and requires only basic knowledge of linear algebra. The main tool for describing the representation theory of a finite-dimensional algebra is its Auslander-Reiten quiver, and the text introduces these quivers as early as possible. Part II then uses the language of algebras and modules to build on the material developed before. The equivalence of the two approaches is proved in the text. The last chapter gives a proof of Gabriel’s Theorem. The language of category theory is developed along the way as needed.
Photography not only represents space. Space is produced photographically. Since its inception in the 19th century, photography has brought to light a vast array of represented subjects. Always situated in some spatial order, photographic representations have been operatively underpinned by social...... to the enterprises of the medium. This is the subject of Representational Machines: How photography enlists the workings of institutional technologies in search of establishing new iconic and social spaces. Together, the contributions to this edited volume span historical epochs, social environments, technological...... possibilities, and genre distinctions. Presenting several distinct ways of producing space photographically, this book opens a new and important field of inquiry for photography research....
Karpilovsky, G
1994-01-01
This third volume can be roughly divided into two parts. The first part is devoted to the investigation of various properties of projective characters. Special attention is drawn to spin representations and their character tables and to various correspondences for projective characters. Among other topics, projective Schur index and projective representations of abelian groups are covered. The last topic is investigated by introducing a symplectic geometry on finite abelian groups. The second part is devoted to Clifford theory for graded algebras and its application to the corresponding theory
Rasmussen, Majken Kirkegaard; Petersen, Marianne Graves
2011-01-01
Stereotypic presumptions about gender affect the design process, both in relation to how users are understood and how products are designed. As a way to decrease the influence of stereotypic presumptions in the design process, we propose not to disregard the aspect of gender in the design process......, as the perspective brings valuable insights on different approaches to technology, but instead to view gender through a value lens. Contributing to this perspective, we have developed Value Representations as a design-oriented instrument for staging a reflective dialogue with users. Value Representations
LOCALLY REFINED SPLINES REPRESENTATION FOR GEOSPATIAL BIG DATA
T. Dokken
2015-08-01
When viewed from a distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently, a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation for topography and bathymetry. The recent introduction of Locally Refined B-splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU FP7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing the bathymetry of the smooth sea and ocean floor. A drastic reduction is demonstrated in the bulk of the data representation compared to the size of input point clouds. The representation is very well suited for exploiting the power of GPUs for visualization, as the spline format is transferred to the GPU and the triangulation needed for the visualization is generated on the GPU according to the viewing parameters. The LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks, as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection, as new sensor data can efficiently be compared to the compact LR B-spline representation.
Mullins, Michael
Contemporary communicational and informational processes contribute to the shaping of our physical environment by having a powerful influence on the process of design. Applications of virtual reality (VR) are transforming the way architecture is conceived and produced by introducing dynamic...... elements into the process of design. Through its immersive properties, virtual reality allows access to a spatial experience of a computer model very different to both screen based simulations as well as traditional forms of architectural representation. The dissertation focuses on processes of the current...... representation? How is virtual reality used in public participation and how do virtual environments affect participatory decision making? How does VR thus affect the physical world of built environment? Given the practical collaborative possibilities of immersive technology, how can they best be implemented...
Efficient and accurate calibration for radio interferometers
Kazemi, Sanaz
2013-01-01
Optical telescopes have detectors that are sensitive to the individual photons striking the detector surface. This makes it possible to measure the brightness of a light source in the sky. Optical telescopes are sensitive to photons with a wavelength between 400 nanometers (violet) and 700
Attention and Representational Momentum
Hayes, Amy; Freyd, Jennifer J
1995-01-01
Representational momentum, the tendency for memory to be distorted in the direction of an implied transformation, suggests that dynamics are an intrinsic part of perceptual representations. We examined the effect of attention on dynamic representation by testing for representational momentum under conditions of distraction. Forward memory shifts increase when attention is divided. Attention may be involved in halting but not in maintaining dynamic representations.
Vietnamese Document Representation and Classification
Nguyen, Giang-Son; Gao, Xiaoying; Andreae, Peter
Vietnamese is very different from English and little research has been done on Vietnamese document classification, or indeed, on any kind of Vietnamese language processing, and only a few small corpora are available for research. We created a large Vietnamese text corpus with about 18000 documents, and manually classified them based on different criteria such as topics and styles, giving several classification tasks of different difficulty levels. This paper introduces a new syllable-based document representation at the morphological level of the language for efficient classification. We tested the representation on our corpus with different classification tasks using six classification algorithms and two feature selection techniques. Our experiments show that the new representation is effective for Vietnamese categorization, and suggest that the best performance can be achieved using the syllable-pair document representation, an SVM with a polynomial kernel as the learning algorithm, and information gain with an external dictionary for feature selection.
Virtuani, A; Rigamonti, G; Friesen, G; Chianese, D; Beljean, P
2012-01-01
Performance testing of highly efficient, highly capacitive c-Si modules with pulsed solar simulators requires particular care. These devices in fact usually require a steady-state solar simulator or pulse durations longer than 100–200 ms in order to avoid measurement artifacts. The aim of this work was to validate an alternative method for the testing of highly capacitive c-Si modules using a 10 ms single-pulse solar simulator. Our approach attempts to reconstruct a quasi-steady-state I–V (current–voltage) curve of a highly capacitive device during one single 10 ms flash by applying customized voltage profiles, in place of a conventional V ramp, to the terminals of the device under test. The most promising results were obtained by using V profiles which we name 'dragon-back' (DB) profiles. When compared to the reference I–V measurement (obtained by using a multi-flash approach with approximately 20 flashes), the DB V-profile method provides excellent results, with differences in the estimation of P_max (as well as of I_sc, V_oc and FF) below ±0.5%. For the testing of highly capacitive devices the method is accurate, fast (two flashes, possibly one, required) and cost-effective, and it has proven its validity with several technologies, making it particularly interesting for in-line testing. (paper)
Canadian consumer issues in accurate and fair electricity metering
2000-07-01
The Public Interest Advocacy Centre (PIAC), located in Ottawa, participates in regulatory proceedings concerning electricity and natural gas to support public and consumer interests. PIAC provides legal representation, research and policy support, and public advocacy. A study aimed at determining the issues at stake for residential electricity consumers in the provision of fair and accurate electricity metering was commissioned by Measurement Canada in consultation with Industry Canada's Consumer Affairs. The metering of electricity must be carried out in a fair and efficient manner for all residential consumers. The Electricity and Gas Inspection Act was developed to ensure compliance with standards for measuring instrumentation. The accurate metering of electricity through Canada's electricity distribution systems represents the main focus of this study and report. The role played by Measurement Canada, the increased efficiencies of service delivery by Measurement Canada, and changing electricity market conditions are of special interest. The role of Measurement Canada was explained, as were the concerns of residential consumers. A comparison was then made between the interests of residential consumers and those of commercial and industrial electricity consumers in electricity metering. Selected American and Commonwealth jurisdictions were reviewed in light of their electricity metering practices. A section on compliance and conflict resolution was included, in addition to a section on the use of voluntary codes for compliance and conflict resolution.
Semantic representation of reported measurements in radiology.
Oberkampf, Heiner; Zillner, Sonja; Overton, James A; Bauer, Bernhard; Cavallaro, Alexander; Uder, Michael; Hammon, Matthias
2016-01-22
efficient comparison of measured findings from consecutive examinations. The implementation of RECIST guidelines with SPARQL enhances the quality of the selection and comparison of target lesions as well as the corresponding treatment response evaluation. The developed MCI enables an accurate integrated representation of reported measurements and medical knowledge. Thus, measurements can be automatically classified and integrated in different decision processes. The structured representation is suitable for improved integration of clinical findings during decision-making. The proposed ReportViewer provides a longitudinal overview of the measurements.
When Is Network Lasso Accurate?
Alexander Jung
2018-01-01
The “least absolute shrinkage and selection operator” (Lasso) method has recently been adapted for network-structured datasets. In particular, this network Lasso method makes it possible to learn graph signals from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure which ensure that the network Lasso is accurate. By leveraging concepts of compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee that the network Lasso, for a particular loss function, delivers an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by the network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
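The recovery problem in the abstract can be made concrete with a small sketch. This is not the paper's algorithm: it minimises the network Lasso objective (data fidelity on sampled nodes plus total variation over edges) by plain gradient descent, with the absolute value smoothed by a narrow Huber function so the objective is differentiable; the graph and measurements are invented toy values.

```python
import numpy as np

def network_lasso_sketch(edges, samples, n, lam=0.2, delta=0.01,
                         lr=0.005, iters=20000):
    """Minimise  sum_{i in S} (x_i - y_i)^2 + lam * sum_{(i,j) in E} |x_i - x_j|
    by gradient descent, with |.| smoothed by a Huber function of width delta."""
    x = np.zeros(n)
    for _ in range(iters):
        g = np.zeros(n)
        for i, y in samples.items():          # data-fidelity term on sampled nodes
            g[i] += 2.0 * (x[i] - y)
        for i, j in edges:                    # smoothed total-variation term
            d = x[i] - x[j]
            h = d / delta if abs(d) <= delta else np.sign(d)  # Huber gradient
            g[i] += lam * h
            g[j] -= lam * h
        x -= lr * g
    return x

# Two tightly connected clusters joined by one bridge edge; only one node
# per cluster carries a (noiseless) measurement.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
x = network_lasso_sketch(edges, samples={0: 1.0, 3: 0.0}, n=6)
```

The total-variation penalty forces each well-connected cluster to a nearly constant value, so the two measurements propagate to their whole clusters while the weak bridge lets the clusters differ.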
Factorizations and physical representations
Revzen, M; Khanna, F C; Mann, A; Zak, J
2006-01-01
A Hilbert space in M dimensions is shown explicitly to accommodate representations that reflect the decomposition of M into prime numbers. Representations that exhibit the factorization of M into two relatively prime numbers, namely the kq representation (Zak J 1970 Phys. Today 23 51) and related representations termed q1q2 representations (together with their conjugates), are analysed, as well as a representation that exhibits the complete factorization of M. In this latter representation each quantum number varies in a subspace that is associated with one of the prime numbers that make up M.
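The factorized representations rest on the Chinese remainder theorem: for M = m1·m2 with m1 and m2 relatively prime, a quantum number n in Z_M corresponds one-to-one to the pair of residues (n mod m1, n mod m2). A small sketch of that bijection (illustrative only; the function names are ours, and `pow(m, -1, mod)` requires Python 3.8+):

```python
from math import gcd

def crt_split(n, m1, m2):
    """Map n in {0..m1*m2-1} to its residue pair (n mod m1, n mod m2)."""
    assert gcd(m1, m2) == 1
    return n % m1, n % m2

def crt_join(r1, r2, m1, m2):
    """Inverse map via the Chinese remainder theorem."""
    # pow(m2, -1, m1) is the modular inverse of m2 modulo m1
    return (r1 * m2 * pow(m2, -1, m1) + r2 * m1 * pow(m1, -1, m2)) % (m1 * m2)

M1, M2 = 3, 5   # M = 15 with relatively prime factors 3 and 5
assert all(crt_join(*crt_split(n, M1, M2), M1, M2) == n for n in range(M1 * M2))
```

Each residue then indexes a subspace associated with one factor of M, which is exactly the structure the q1q2 representations exploit.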
Spectrally accurate contour dynamics
Van Buskirk, R.D.; Marcus, P.S.
1994-01-01
We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise-constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use.
Soldatov, A.; Seke, J.; Adam, G.; Polak, M.
2008-01-01
A closed analytic form, convenient for large-scale numerical calculations in QED, was derived for relativistic transition matrix elements between bound-bound, bound-unbound and unbound-unbound relativistic eigenstates of hydrogenic atoms, using the plane-wave expansion for the electromagnetic-field vector potential. By applying the obtained formulae, these transition matrix elements can be evaluated analytically and numerically. These exact matrix elements, which to our knowledge have not been calculated before, are of great importance in the analysis of various atom-field interaction processes where retardation effects cannot be ignored. The ultimate goal of the ongoing research is to develop a general universal calculation technique for Seke's approximation and renormalization method in QED, for which the plane-wave expansion of the vector potential is the preferable choice. Our primary interest, however, lies in the Lamb-shift calculation. Our nearest objective is to carry out straightforward relativistic calculations of the Lamb shift of the energy levels of hydrogen-like atoms and ions from first principles in the second and higher perturbative orders, using the corresponding expressions as they stand, i.e. without any additional approximations. Since none of these goals can be achieved without recourse to large-scale, laborious and time-consuming high-precision numerical calculations, having the transition matrix elements of all possible types in an analytic form convenient for efficient numerical evaluation is highly advantageous, indeed indispensable, for calculations of various QED effects in higher perturbative orders, in either the traditional or a novel approach. (author)
Efficient alignment-free DNA barcode analytics.
Kuksa, Pavel; Pavlovic, Vladimir
2009-11-10
In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectra) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. The new alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species, with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running-time improvements over the state-of-the-art methods. Our results show that the newly developed alignment-free methods for DNA barcoding can efficiently, and with high accuracy, identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
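The fixed-length spectrum representation mentioned above can be illustrated with a short sketch. This is a generic k-mer spectrum and dot-product similarity, not the authors' exact feature set or weighting:

```python
from collections import Counter
from itertools import product

def spectrum(seq, k=3, alphabet="ACGT"):
    """Fixed-length k-mer spectrum: counts of every length-k fragment,
    in a canonical ordering shared by all sequences."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts[m] for m in kmers]

def similarity(a, b, k=3):
    """Unnormalised spectrum-kernel similarity: dot product of spectra."""
    sa, sb = spectrum(a, k), spectrum(b, k)
    return sum(x * y for x, y in zip(sa, sb))
```

Because every sequence maps to the same fixed-length vector regardless of its length, no alignment is needed before comparing two barcodes.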
Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; Lilienfeld, O. Anatole von; Müller, Klaus-Robert; Tkatchenko, Alexandre
2015-01-01
Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in the chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching up to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the 'holy grail' of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (the so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
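As a rough illustration of the Bag of Bonds idea: pairwise Coulomb-like terms are grouped into one "bag" per unordered pair of element types, each bag is sorted and zero-padded to a fixed length, and the bags are concatenated into one vector per molecule. The exact descriptor details follow the cited paper; the input below is an invented toy case.

```python
import numpy as np
from itertools import combinations

def bag_of_bonds(symbols, coords, charges, bag_sizes):
    """Sketch of a 'Bag of Bonds' vector: Z_i*Z_j/r_ij terms grouped by
    element-pair type, sorted descending, zero-padded to fixed bag sizes."""
    bags = {key: [] for key in bag_sizes}
    for i, j in combinations(range(len(symbols)), 2):
        key = "".join(sorted((symbols[i], symbols[j])))
        r = np.linalg.norm(np.asarray(coords[i]) - np.asarray(coords[j]))
        bags[key].append(charges[symbols[i]] * charges[symbols[j]] / r)
    vec = []
    for key in sorted(bag_sizes):                 # fixed, shared bag order
        entries = sorted(bags[key], reverse=True)
        vec.extend(entries + [0.0] * (bag_sizes[key] - len(entries)))
    return np.array(vec)

# Hypothetical toy input: an H2 "molecule" with a 0.74 Angstrom bond length.
Z = {"H": 1.0}
v = bag_of_bonds(["H", "H"], [(0, 0, 0), (0.74, 0, 0)], Z, {"HH": 3})
```

The padding is what makes molecules of different sizes comparable: every molecule in a dataset maps to a vector of identical dimension, ready for kernel or neural-network regression.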
Rumelhart, David E.; Norman, Donald A.
This paper reviews work on the representation of knowledge from within psychology and artificial intelligence. The work covers the nature of representation, the distinction between the represented world and the representing world, and significant issues concerned with propositional, analogical, and superpositional representations. Specific topics…
Efficient Computations and Representations of Visible Surfaces.
1979-12-01
Efficient Representation for Online Suffix Tree Construction
Larsson, N. Jesper; Fuglsang, Kasper; Karlsson, Kenneth
2014-01-01
of branch lookup operations (known to be a bottleneck in construction time) with some additional techniques to reduce construction cost. We discuss various effects of our approach and compare it to previous techniques. An experimental evaluation shows that we are able to reduce construction time to around...
An accurate projection algorithm for array processor based SPECT systems
King, M.A.; Schwinger, R.B.; Cool, S.L.
1985-01-01
A data re-projection algorithm has been developed for use in single photon emission computed tomography (SPECT) on an array-processor-based computer system. The algorithm makes use of an accurate representation of pixel activity (a uniform-square-pixel model of the intensity distribution), and is performed rapidly due to the efficient handling of an array-based algorithm and the Fast Fourier Transform (FFT) on parallel processing hardware. The algorithm consists of using a pixel-driven nearest-neighbour projection operation onto an array of subdivided projection bins. This result is then convolved with the projected uniform-square-pixel distribution before being compressed to the original bin size. This distribution varies with projection angle and is explicitly calculated. The FFT combined with a frequency-space multiplication is used instead of a spatial convolution for more rapid execution. The new algorithm was tested against other commonly used projection algorithms by comparing the accuracy of projections of a simulated transverse section of the abdomen against analytically determined projections of that transverse section. The new algorithm was found to yield comparable or better standard error and yet result in easier and more efficient implementation on parallel hardware. Applications of the algorithm include iterative reconstruction and attenuation correction schemes and the evaluation of regions of interest in dynamic and gated SPECT.
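The three-stage pipeline (nearest-neighbour drop into subdivided bins, convolution with the projected square-pixel profile, compression back to the original bin size) can be sketched in a much-simplified form. The sketch below is hypothetical: the projection angle is fixed at 0°, where the projected uniform-square-pixel profile is simply a box, and a direct convolution stands in for the paper's FFT-based one.

```python
import numpy as np

def project_0deg(image, n_bins, subdiv=4):
    """Toy pixel-driven projection at angle 0 for a square image with unit
    pixels: drop each pixel's activity into the nearest subdivided bin,
    convolve with the box-shaped projected pixel profile, then compress
    the sub-bins back to the original bin size."""
    n = image.shape[0]
    fine = np.zeros(n_bins * subdiv)
    offset = (n_bins - n) / 2.0                  # centre the image in the bins
    for row in range(n):
        for col in range(n):
            x = offset + col + 0.5               # pixel-centre coordinate
            fine[int(x * subdiv)] += image[row, col]   # nearest sub-bin
    box = np.ones(subdiv) / subdiv               # projected square-pixel profile
    fine = np.convolve(fine, box, mode="same")
    return fine.reshape(n_bins, subdiv).sum(axis=1)

img = np.ones((4, 4))
proj = project_0deg(img, n_bins=8)
```

Keeping the image well inside the bin range means the convolution loses nothing at the edges, so total counts are conserved, which is the property iterative reconstruction schemes rely on.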
3D ear identification based on sparse representation.
Lin Zhang
Biometrics-based personal authentication is an effective way of automatically recognizing, with high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research community due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the subsequent stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the classification stage, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm.
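The sparse-representation classification stage can be sketched generically: solve an l1-regularised reconstruction of the query against a dictionary of training feature vectors, then assign the class whose columns best reconstruct the query. This is a standard SRC scheme with ISTA as the l1 solver; the dictionary below is an invented toy example, not ear data.

```python
import numpy as np

def ista(D, y, lam=0.01, iters=500):
    """Solve  min_x 0.5*||D x - y||^2 + lam*||x||_1  with ISTA."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        z = x - D.T @ (D @ x - y) / L        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

def src_classify(D, labels, y, lam=0.01):
    """Sparse-representation classification: pick the class whose training
    columns best reconstruct y from the sparse code."""
    x = ista(D, y, lam)
    residuals = {}
    for c in set(labels):
        xc = np.where(np.array(labels) == c, x, 0.0)  # keep class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)

# Toy dictionary: columns 0-1 belong to class 0, columns 2-3 to class 1.
D = np.array([[1.0, 0.8, 0.0, 0.0],
              [0.0, 0.6, 1.0, 0.6],
              [0.0, 0.0, 0.0, 0.8]])
pred = src_classify(D, [0, 0, 1, 1], np.array([1.0, 0.0, 0.0]))
```

Classification by class-wise reconstruction residual is what lets SRC handle the one-to-many identification case in a single optimisation rather than pairwise matching.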
Accurate quantum chemical calculations
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Representation of heading direction in far and near head space
Poljac, E.; Berg, A.V. van den
2003-01-01
Manipulation of objects around the head requires an accurate and stable internal representation of their locations in space, also during movements such as that of the eye or head. For far space, the representation of visual stimuli for goal-directed arm movements relies on retinal updating, if eye
Understanding representations in design
Bødker, Susanne
1998-01-01
Representing computer applications and their use is an important aspect of design. In various ways, designers need to externalize design proposals and present them to other designers, users, or managers. This article deals with understanding design representations and the work they do in design. The article is based on a series of theoretical concepts coming out of studies of scientific and other work practices and on practical experiences from design of computer applications. The article presents alternatives to the ideas that design representations are mappings of present or future work situations and computer applications. It suggests that representations are primarily containers of ideas and that representation is situated at the same time as representations are crossing boundaries between various design and use activities. As such, representations should be carriers of their own contexts regarding …
Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models
Stovgaard Kasper
2010-08-01
Background: Genome sequencing projects have expanded the gap between the amount of known protein sequences and structures. The limitations of current high-resolution structure determination methods make it unlikely that this gap will disappear in the near future. Small angle X-ray scattering (SAXS) is an established low-resolution method for routinely determining the structure of proteins in solution. The purpose of this study is to develop a method for the efficient calculation of accurate SAXS curves from coarse-grained protein models. Such a method can for example be used to construct a likelihood function, which is paramount for structure determination based on statistical inference. Results: We present a method for the efficient calculation of accurate SAXS curves based on the Debye formula and a set of scattering form factors for dummy-atom representations of amino acids. Such a method avoids the computationally costly iteration over all atoms. We estimated the form factors using generated data from a set of high-quality protein structures. No ad hoc scaling or correction factors are applied in the calculation of the curves. Two coarse-grained representations of protein structure were investigated; two scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion: We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof of concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, TorusDBN. This resulted in a significant improvement in decoy recognition performance. In conclusion, the presented method shows great promise for
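The Debye-formula computation at the heart of the method can be sketched directly: I(q) = Σᵢ Σⱼ fᵢ fⱼ sin(q·rᵢⱼ)/(q·rᵢⱼ) over all pairs of scattering bodies. For simplicity the sketch uses constant dummy-body form factors; the paper's estimated, amino-acid-specific (and q-dependent) form factors would take their place.

```python
import numpy as np

def debye_curve(coords, form_factors, q_values):
    """Scattering intensity I(q) from the Debye formula for a set of
    coarse-grained scattering bodies with constant form factors."""
    coords = np.asarray(coords, dtype=float)
    f = np.asarray(form_factors, dtype=float)
    # pairwise distance matrix r_ij (diagonal is zero)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    intensity = []
    for q in q_values:
        # np.sinc(x) = sin(pi x)/(pi x), so sinc(q d / pi) = sin(q d)/(q d),
        # with the r_ij = 0 diagonal handled correctly (sinc(0) = 1).
        s = np.sinc(q * d / np.pi)
        intensity.append(float(np.sum(np.outer(f, f) * s)))
    return intensity

# Two unit-amplitude bodies 3 length-units apart: I(q) = 2 + 2 sin(3q)/(3q).
I = debye_curve([(0, 0, 0), (0, 0, 3)], [1.0, 1.0], [0.0, 1.0])
```

Evaluating this over one body (or two) per residue instead of every atom is exactly where the paper's computational saving comes from, since the double sum scales quadratically in the number of scattering bodies.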
Willett, Wesley; Jansen, Yvonne; Dragicevic, Pierre
2017-01-01
We introduce embedded data representations, the use of visual and physical representations of data that are deeply integrated with the physical spaces, objects, and entities to which the data refers. Technologies like lightweight wireless displays, mixed reality hardware, and autonomous vehicles...
Group and representation theory
Vergados, J D
2017-01-01
This volume goes beyond the understanding of symmetries and exploits them in the study of the behavior of both classical and quantum physical systems. Thus it is important to study the symmetries described by continuous (Lie) groups of transformations. We then discuss how we get operators that form a Lie algebra. Of particular interest to physics is the representation of the elements of the algebra and the group in terms of matrices and, in particular, the irreducible representations. These representations can be identified with physical observables. This leads to the study of the classical Lie algebras, associated with unitary, unimodular, orthogonal and symplectic transformations. We also discuss some special algebras in some detail. The discussion proceeds along the lines of the Cartan-Weyl theory via the root vectors and root diagrams and, in particular, the Dynkin representation of the roots. Thus the representations are expressed in terms of weights, which are generated by the application of the elemen...
Introduction to representation theory
Etingof, Pavel; Hensel, Sebastian; Liu, Tiankai; Schwendner, Alex
2011-01-01
Very roughly speaking, representation theory studies symmetry in linear spaces. It is a beautiful mathematical subject which has many applications, ranging from number theory and combinatorics to geometry, probability theory, quantum mechanics, and quantum field theory. The goal of this book is to give a "holistic" introduction to representation theory, presenting it as a unified subject which studies representations of associative algebras and treating the representation theories of groups, Lie algebras, and quivers as special cases. Using this approach, the book covers a number of standard topics in the representation theories of these structures. Theoretical material in the book is supplemented by many problems and exercises which touch upon a lot of additional topics; the more difficult exercises are provided with hints. The book is designed as a textbook for advanced undergraduate and beginning graduate students. It should be accessible to students with a strong background in linear algebra and a basic k...
The abstract representations in speech processing.
Cutler, Anne
2008-11-01
Speech processing by human listeners derives meaning from acoustic input via intermediate steps involving abstract representations of what has been heard. Recent results from several lines of research are here brought together to shed light on the nature and role of these representations. In spoken-word recognition, representations of phonological form and of conceptual content are dissociable. This follows from the independence of patterns of priming for a word's form and its meaning. The nature of the phonological-form representations is determined not only by acoustic-phonetic input but also by other sources of information, including metalinguistic knowledge. This follows from evidence that listeners can store two forms as different without showing any evidence of being able to detect the difference in question when they listen to speech. The lexical representations are in turn separate from prelexical representations, which are also abstract in nature. This follows from evidence that perceptual learning about speaker-specific phoneme realization, induced on the basis of a few words, generalizes across the whole lexicon to inform the recognition of all words containing the same phoneme. The efficiency of human speech processing has its basis in the rapid execution of operations over abstract representations.
Mental models accurately predict emotion transitions.
Thornton, Mark A; Tamir, Diana I
2017-06-06
Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation (valence, social impact, rationality, and human mind) inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
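The kind of mental model studied here can be mimicked computationally: estimate P(next emotion | current emotion) from an experience-sampling sequence and predict the most likely successor. The labels below are invented toy data, not the studies' datasets.

```python
from collections import Counter, defaultdict

def transition_model(sequence):
    """Estimate transition probabilities P(next | current) from a
    sequence of emotion labels, by counting consecutive pairs."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(sequence, sequence[1:]):
        counts[cur][nxt] += 1
    return {cur: {nxt: n / sum(c.values()) for nxt, n in c.items()}
            for cur, c in counts.items()}

def predict_next(model, current):
    """Most likely next emotion under the learned transition model."""
    return max(model[current], key=model[current].get)

# Hypothetical toy data: calm tends to persist, stress tends to follow anger.
seq = ["calm", "calm", "calm", "angry", "stressed", "calm",
       "calm", "angry", "stressed", "stressed", "calm"]
model = transition_model(seq)
```

A perceiver whose internal transition estimates match these empirical frequencies would, in the papers' terms, hold an accurate mental model of emotion dynamics.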
The accurate particle tracer code
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun
2017-11-01
The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under ITER tokamak parameters, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and, at the same time, improve the confinement of the energetic runaway beam.
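APT's specific geometric algorithms are not detailed in this abstract, but the classic Boris pusher illustrates the class: its magnetic rotation step preserves the velocity norm exactly, which underlies excellent long-term behaviour for magnetised-particle orbits. A hedged sketch, not APT's implementation:

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt, steps):
    """Advance a charged particle (charge-to-mass ratio q_m) with the
    Boris scheme: half electric kick, exact-norm magnetic rotation,
    second half kick, then a position update."""
    for _ in range(steps):
        v_minus = v + 0.5 * q_m * dt * E          # first half electric kick
        t = 0.5 * q_m * dt * B                    # magnetic rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)   # norm-preserving rotation
        v = v_plus + 0.5 * q_m * dt * E           # second half electric kick
        x = x + dt * v
    return x, v

# Pure gyration in a uniform B field: the speed must be conserved and the
# orbit must stay bounded, no matter how many steps are taken.
x0, v0 = np.zeros(3), np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
x, v = boris_push(x0, v0, E=np.zeros(3), B=B, q_m=1.0, dt=0.1, steps=1000)
```

This bounded-energy behaviour over arbitrarily long runs is the kind of structure-preserving property the abstract refers to as critical for multi-scale problems.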
Zieliński, Tomasz G.
2017-11-01
The paper proposes and investigates computationally efficient microstructure representations for sound absorbing fibrous media. Three-dimensional volume elements involving non-trivial periodic arrangements of straight fibres are examined, as well as simple two-dimensional cells. It has been found that a simple 2D quasi-representative cell can provide predictions similar to those of a volume element, which is in general much more geometrically accurate for typical fibrous materials. The multiscale modelling made it possible to determine the effective speeds and damping of acoustic waves propagating in such media, which brings up a discussion on the correlation between the speed, penetration range and attenuation of sound waves. Original experiments on manufactured copper-wire samples are presented, and the microstructure-based calculations of acoustic absorption are compared with the corresponding experimental results. In fact, the comparison suggested microstructure modifications leading to representations with non-uniformly distributed fibres.
Covariant representations of nuclear *-algebras
Moore, S.M.
1978-01-01
Extensions of the C*-algebra theory of covariant representations to nuclear *-algebras are considered. Irreducible covariant representations are essentially unique, an invariant state produces a covariant representation with stable vacuum, and the usual relation between ergodic states and covariant representations holds. There exist construction and decomposition theorems, and a possible relation between derivations and covariant representations.
Variationally derived coarse mesh methods using an alternative flux representation
Wojtowicz, G.; Holloway, J.P.
1995-01-01
Investigation of a previously reported variational technique for the solution of the 1-D, 1-group neutron transport equation in reactor lattices has inspired the development of a finite element formulation of the method. Compared to conventional homogenization methods in which node homogenized cross sections are used, the coefficients describing this system take on greater spatial dependence. However, the methods employ an alternative flux representation which allows the transport equation to be cast into a form whose solution has only a slow spatial variation and, hence, requires relatively few variables to describe. This alternative flux representation and the stationary property of a variational principle define a class of coarse mesh discretizations of transport theory capable of achieving order of magnitude reductions of eigenvalue and pointwise scalar flux errors as compared with diffusion theory while retaining diffusion theory's relatively low cost. Initial results of a 1-D spectral element approach are reviewed and used to motivate the finite element implementation which is more efficient and almost as accurate; one and two group results of this method are described
Koťátko, Petr
2014-01-01
Roč. 21, č. 3 (2014), s. 282-302. ISSN 1335-0668. Keywords: representation; proposition; truth-conditions; belief-ascriptions; reference; externalism; fiction.
Wigner's Symmetry Representation Theorem
Wigner's Symmetry Representation Theorem: At the Heart of Quantum Field Theory! Aritra Kr Mukhopadhyay. General Article, Resonance – Journal of Science Education, Volume 19, Issue 10, October 2014, pp. 900-916.
Boundary representation modelling techniques
2006-01-01
Provides the most complete presentation of boundary representation solid modelling yet published. Offers basic reference information for software developers, application developers and users. Includes a historical perspective as well as giving a background for modern research.
Polynomial representations of GLn
Green, James A; Erdmann, Karin
2007-01-01
The first half of this book contains the text of the first edition of LNM volume 830, Polynomial Representations of GLn. This classic account of matrix representations, the Schur algebra, the modular representations of GLn, and connections with symmetric groups, has been the basis of much research in representation theory. The second half is an Appendix, and can be read independently of the first. It is an account of the Littelmann path model for the case gln. In this case, Littelmann's 'paths' become 'words', and so the Appendix works with the combinatorics on words. This leads to the representation theory of the 'Littelmann algebra', which is a close analogue of the Schur algebra. The treatment is self-contained; in particular complete proofs are given of classical theorems of Schensted and Knuth.
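The Schensted insertion behind those classical theorems can be sketched concretely. The following is my own illustration, not code from the book: row-inserting each letter of a word builds the insertion tableau P, and by Schensted's theorem the length of the first row equals the length of a longest weakly increasing subsequence of the word.

```python
import bisect

def rsk_insert(tableau, x):
    """Schensted row insertion: bump the leftmost entry strictly
    greater than x down to the next row, repeating until some row
    can absorb the bumped value at its end."""
    row = 0
    while row < len(tableau):
        r = tableau[row]
        i = bisect.bisect_right(r, x)   # first entry strictly > x
        if i == len(r):
            r.append(x)                 # x fits at the end of this row
            return
        x, r[i] = r[i], x               # bump r[i], carry it down
        row += 1
    tableau.append([x])                 # start a new row at the bottom

def p_tableau(word):
    """Insertion tableau P of a word: rows weakly increase,
    columns strictly increase."""
    t = []
    for x in word:
        rsk_insert(t, x)
    return t

t = p_tableau([3, 1, 4, 1, 5, 9, 2, 6])
# t == [[1, 1, 2, 6], [3, 4, 5, 9]]; first row has length 4, the
# length of a longest weakly increasing subsequence (e.g. 1,1,2,6).
```
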
Polynomial representations of GLN
Green, James A
1980-01-01
Procedural Media Representation
Henrysson, Anders
2002-01-01
We present a concept for using procedural techniques to represent media. Procedural methods allow us to represent digital media (2D images, 3D environments etc.) with very little information and to render it photorealistically. Since not all kinds of content can be created procedurally, traditional media representations (bitmaps, polygons etc.) must be used as well. We have adopted an object-based media representation where an object can be represented either with a procedure or with its trad...
Wavelet representation of the nuclear dynamics
Jouault, B.; Sebille, F.; Mota, V. de la
1997-12-31
The study of transport phenomena in nuclear matter is addressed in a new approach named DYWAN, based on the projection methods of statistical physics and on the mathematical theory of wavelets. Strongly compressed representations of the nuclear systems are obtained with an accurate description of the wave functions and of their antisymmetrization. The results of the approach are illustrated for the ground state description as well as for the dissipative dynamics of nuclei at intermediate energies. (K.A.). 52 refs.
The spatial representation of market information
DeSarbo, WS; Degeratu, AM; Wedel, M; Saxton, MK
2001-01-01
To be used effectively, market knowledge and information must be structured and represented in ways that are parsimonious and conducive to efficient managerial decision making. This manuscript proposes a new latent structure spatial model for the representation of market information that meets this need.
Knowledge representation and use. II. Representations
Lauriere, J L
1982-03-01
The use of computers is less and less restricted to numerical and data processing. On the other hand, current software mostly contains algorithms on universes with complete information. The paper discusses a different family of programs: expert systems are designed as aids in human reasoning in various specific areas. Symbolic knowledge manipulation, uncertain and incomplete deduction capabilities, natural communication with humans in non-procedural ways are their essential features. This part is mainly a reflection and a debate about the various modes of acquisition and representation of human knowledge. 32 references.
Operator representations of frames
Christensen, Ole; Hasannasab, Marzieh
2017-01-01
The purpose of this paper is to consider representations of frames {fk}k∈I in a Hilbert space ℋ of the form {fk}k∈I = {Tkf0}k∈I for a linear operator T; here the index set I is either ℤ or ℒ0. While a representation of this form is available under weak conditions on the frame, the analysis of the properties of the operator T requires more work. For example it is a delicate issue to obtain a representation with a bounded operator, and the availability of such a representation not only depends on the frame considered as a set, but also on the chosen indexing. Using results from operator theory we show that by embedding the Hilbert space ℋ into a larger Hilbert space, we can always represent a frame via iterations of a bounded operator, composed with the orthogonal projection onto ℋ. The paper closes with a discussion of an open problem concerning representations of Gabor frames via iterations of a bounded operator.
Representation Elements of Spatial Thinking
Fiantika, F. R.
2017-04-01
This paper aims to add a reference in revealing spatial thinking. There are several definitions of spatial thinking, but it is not easy to define it. We can start by discussing the concept and its basis, the forming of representations. Initially, the five senses catch a natural phenomenon and forward it to memory for processing. Abstraction plays a role in processing information into a concept. There are two types of representation, namely internal representation and external representation. The internal representation is also known as mental representation; this representation is in the human mind. The external representation may include images, auditory and kinesthetic forms which can be used to describe, explain and communicate the structure, operation and function of an object as well as relationships. There are two main elements, representation properties and object relationships. These elements play a role in forming a representation.
Mobilities and Representations
Thelle, Mikkel
2017-01-01
As the centerpiece of the eighth T2M yearbook, the following interview about representations of mobility signals a new and exciting focus area for Mobility in History. In future issues we hope to include reviews that grapple more with how mobilities have been imagined and represented in the arts, literature, and film. Moreover, we hope the authors of future reviews will reflect on the ways they approached those representations. Such commentaries would provide valuable methodological insights, and we hope to begin that effort with this interview. We have asked four prominent mobility scholars to consider how they and their peers are currently confronting representations of mobility. This is particularly timely given the growing academic focus on practices, material mediation, and nonrepresentational theories, as well as on bodily reactions, emotions, and feelings that, according to those theories...
Roberto De Rubertis
2012-06-01
This article discusses the physiological genesis of representation, illustrates its developments, especially in an evolutionary perspective, and shows how these are mainly a result of accidental circumstances rather than of a deliberate intention of improvement. In particular, it is argued that representation has behaved like a meme that has achieved its own progressive evolution by coming into symbiosis with the different cultures in which it has spread, using human work "unconsciously" in this activity. Finally, it is shown how in this action geometry is a key element, linked to representation both to construct images using graphic operations and to erect buildings using concrete operations.
Post-representational cartography
Rob Kitchin
2010-03-01
Over the past decade there has been a move amongst critical cartographers to rethink maps from a post-representational perspective – that is, a vantage point that does not privilege representational modes of thinking (wherein maps are assumed to be mirrors of the world) and does not automatically presume the ontological security of a map as a map, but rather rethinks and destabilises such notions. This new theorisation extends beyond the earlier critiques of Brian Harley (1989), which argued that maps were social constructions. For Harley a map still conveyed the truth of a landscape, albeit a message bound within the ideological frame of its creator. He thus advocated a strategy of identifying the politics of representation within maps in order to circumnavigate them (to reveal the truth lurking underneath), with the ontology of cartographic practice remaining unquestioned.
Introduction to computer data representation
Fenwick, Peter
2014-01-01
Introduction to Computer Data Representation introduces readers to the representation of data within computers. Starting from basic principles of number representation in computers, the book covers the representation of both integer and floating point numbers, and characters or text. It comprehensively explains the main techniques of computer arithmetic and logical manipulation. The book also features chapters covering the less usual topics of basic checksums and 'universal' or variable length representations for integers, with additional coverage of Gray codes, BCD codes and logarithmic representations.
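Two of the representations the book covers, two's-complement integers and IEEE 754 floating point, are easy to inspect directly. A minimal sketch of my own (not an example from the book), using Python's struct module to reinterpret raw bytes:

```python
import struct

# 32-bit two's complement: -1 packs to all one-bits, so reading the
# same four bytes back as an unsigned integer yields 0xFFFFFFFF.
as_unsigned = struct.unpack('<I', struct.pack('<i', -1))[0]

# IEEE 754 single precision: 1.0 has sign bit 0, biased exponent
# 127 (bias 127 + true exponent 0), and an all-zero fraction field.
bits = struct.unpack('<I', struct.pack('<f', 1.0))[0]
sign = bits >> 31
exponent = (bits >> 23) & 0xFF
fraction = bits & 0x7FFFFF
```
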
Representation Discovery using Harmonic Analysis
Mahadevan, Sridhar
2008-01-01
Representations are at the heart of artificial intelligence (AI). This book is devoted to the problem of representation discovery: how can an intelligent system construct representations from its experience? Representation discovery re-parameterizes the state space - prior to the application of information retrieval, machine learning, or optimization techniques - facilitating later inference processes by constructing new task-specific bases adapted to the state space geometry. This book presents a general approach to representation discovery using the framework of harmonic analysis, in particu
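The flavor of representation discovery via harmonic analysis can be conveyed with a small sketch of my own (not code from the book, and simplified to a chain-shaped state space): the smoothest eigenvectors of the graph Laplacian of the state-space graph serve as task-independent basis functions adapted to the space's geometry.

```python
import numpy as np

# Adjacency of a 20-state chain (a toy state-space graph).
n = 20
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

# Combinatorial graph Laplacian L = D - A.
L = np.diag(A.sum(axis=1)) - A

# Eigenvectors with the smallest eigenvalues are the smoothest
# functions on the graph; keep the first few as a basis.
evals, evecs = np.linalg.eigh(L)
basis = evecs[:, :5]
```
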
Additive and polynomial representations
Krantz, David H; Suppes, Patrick
1971-01-01
Additive and Polynomial Representations deals with major representation theorems in which the qualitative structure is reflected as some polynomial function of one or more numerical functions defined on the basic entities. Examples are additive expressions of a single measure (such as the probability of disjoint events being the sum of their probabilities), and additive expressions of two measures (such as the logarithm of momentum being the sum of log mass and log velocity terms). The book describes the three basic procedures of fundamental measurement as the mathematical pivot, as the utiliz
Hoff da Silva, J.M.; Rogerio, R.J.B. [Universidade Estadual Paulista, Departamento de Fisica e Quimica, Guaratingueta, SP (Brazil); Villalobos, C.H.C. [Universidade Estadual Paulista, Departamento de Fisica e Quimica, Guaratingueta, SP (Brazil); Universidade Federal Fluminense, Instituto de Fisica, Niteroi, RJ (Brazil); Rocha, Roldao da [Universidade Federal do ABC-UFABC, Centro de Matematica, Computacao e Cognicao, Santo Andre (Brazil)
2017-07-15
A systematic study of the spinor representation by means of the fermionic physical space is accomplished and implemented. The spinor representation space is shown to be constrained by the Fierz-Pauli-Kofink identities among the spinor bilinear covariants. A robust geometric and topological structure emerges from the spinor space, wherein the first and second homotopy groups play prominent roles in the underlying physical properties associated with fermionic fields. The mapping that changes spinor field classes is then exemplified, in an Einstein-Dirac system that provides the spacetime generated by a fermion.
Accurate Evaluation of Quantum Integrals
Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)
1995-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
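The basic idea can be sketched as follows (my own discretization choices, not the authors' code): solve a central-difference eigenvalue problem on two meshes and take one Richardson step, which cancels the O(h²) error term. The harmonic oscillator, whose exact ground-state energy is 1/2, serves as a test problem.

```python
import numpy as np

def fd_ground_state(n, L=10.0):
    """Lowest eigenvalue of -1/2 psi'' + 1/2 x^2 psi = E psi using a
    second-order central-difference Laplacian on [-L, L]."""
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    # Tridiagonal Hamiltonian: FD kinetic term plus diagonal potential.
    main = 1.0 / h**2 + 0.5 * x**2
    off = -0.5 / h**2 * np.ones(n - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

E_h = fd_ground_state(201)    # mesh spacing h = 0.1
E_h2 = fd_ground_state(401)   # mesh spacing h/2 = 0.05
# One Richardson step: the O(h^2) error cancels, leaving O(h^4).
E_extrap = (4.0 * E_h2 - E_h) / 3.0
```

The extrapolated value is markedly closer to the exact energy 0.5 than either raw finite-difference eigenvalue.
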
Multiscale Methods for Accurate, Efficient, and Scale-Aware Models
Larson, Vincent [Univ. of Wisconsin, Milwaukee, WI (United States)
2017-07-14
The goal of UWM’s portion of the Multiscale project was to develop a unified cloud parameterization that could simulate all cloud types --- including stratocumulus, shallow cumulus, and deep cumulus --- using the single equation set implemented in CLUBB. An advantage of a unified parameterization methodology is that it avoids the difficult task of interfacing different cloud parameterizations for different cloud types. To interface CLUBB’s clouds to the microphysics, a Monte Carlo interface, SILHS, was further developed.
Design and Implementation of Accurate and Efficient Pocket Dosimeter
Shehata, S.A.; Abdelkhalek, K.L.
2005-01-01
In the fields of radiation therapy and radiation protection it is important to have dosimeters that determine the absorbed dose transferred to the human body by ionizing radiation. In this paper the design and implementation of a wide-range pocket dosimeter (PKD-1) with high accuracy to measure personal equivalent dose and dose rate of gamma radiation is presented. This pocket dosimeter is microcontroller-based and powered from a 9 V rechargeable battery. The overall power consumption is significantly reduced by smart software and hardware design, allowing longer time intervals between recharging. The integrated alphanumerical LCD displays not only the accumulated dose and current dose rate, but also alarm messages such as low battery. For reasons of power saving, the LCD is activated on demand by pressing the push button or automatically when an alarm occurs. Audible and visual alarms have been added to the PKD-1 so that they cannot be accidentally overlooked or ignored. The PKD-1 can be connected to any PC through its serial port (RS232), and user interface software has been developed for easy displaying and recording of radiation readings over any time period.
An efficient and accurate decomposition of the Fermi operator.
Ceriotti, Michele; Kühne, Thomas D; Parrinello, Michele
2008-07-14
We present a method to compute the Fermi function of the Hamiltonian for a system of independent fermions based on an exact decomposition of the grand-canonical potential. This scheme does not rely on the localization of the orbitals and is insensitive to ill-conditioned Hamiltonians. It lends itself naturally to linear scaling as soon as the sparsity of the system's density matrix is exploited. By using a combination of polynomial expansion and Newton-like iterative techniques, an arbitrarily large number of terms can be employed in the expansion, overcoming some of the difficulties encountered in previous papers. Moreover, this hybrid approach allows us to obtain a very favorable scaling of the computational cost with increasing inverse temperature, which makes the method competitive with other Fermi operator expansion techniques. After performing an in-depth theoretical analysis of computational cost and accuracy, we test our approach on the density functional theory Hamiltonian for the metallic phase of the LiAl alloy.
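For context, on a small dense Hamiltonian the Fermi operator can be evaluated directly by diagonalization; the decomposition in the paper is designed precisely to avoid this cubic-scaling step for large systems. A minimal reference sketch of my own (not the authors' method):

```python
import numpy as np

def fermi_operator(H, mu, beta):
    """Density matrix f(H) = [1 + exp(beta*(H - mu))]^(-1), evaluated
    by direct diagonalization (reference method for small systems)."""
    eps, U = np.linalg.eigh(H)
    occ = 1.0 / (1.0 + np.exp(beta * (eps - mu)))
    return (U * occ) @ U.T    # U diag(occ) U^T

# Random symmetric test Hamiltonian for independent fermions.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
H = 0.5 * (A + A.T)
rho = fermi_operator(H, mu=0.0, beta=10.0)
```

The resulting density matrix is symmetric with eigenvalues (the occupations) in [0, 1], and its trace gives the electron number at the chosen chemical potential.
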
Accurate and efficient computation of synchrotron radiation functions
MacLeod, Allan J.
2000-01-01
We consider the computation of three functions which appear in the theory of synchrotron radiation. These are F(x) = x ∫_x^∞ K_{5/3}(y) dy, F_p(x) = x K_{2/3}(x), and G_p(x) = x^{1/3} K_{1/3}(x), where K_ν denotes a modified Bessel function. Chebyshev series coefficients are given which enable the functions to be computed with an accuracy of up to 15 significant figures.
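The three functions are straightforward to evaluate, if far less efficiently than via the paper's Chebyshev expansions, with quadrature and scipy's modified Bessel function. A sketch of my own for reference values:

```python
import numpy as np
from scipy.special import kv
from scipy.integrate import quad

def F(x):
    """First synchrotron function F(x) = x * int_x^inf K_{5/3}(y) dy."""
    integral, _ = quad(lambda y: kv(5.0 / 3.0, y), x, np.inf)
    return x * integral

def Fp(x):
    """F_p(x) = x * K_{2/3}(x)."""
    return x * kv(2.0 / 3.0, x)

def Gp(x):
    """G_p(x) = x^{1/3} * K_{1/3}(x)."""
    return x ** (1.0 / 3.0) * kv(1.0 / 3.0, x)
```

A useful sanity check is the standard large-x asymptotic F(x) ≈ √(πx/2) e^{-x}, which the quadrature values approach from above.
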
Efficient and Accurate WLAN Positioning with Weighted Graphs
Hansen, Rene; Thomsen, Bent
2009-01-01
This paper concerns indoor location determination by using existing WLAN infrastructures and WLAN enabled mobile devices. The location fingerprinting technique performs localization by first constructing a radio map of signal strengths from nearby access points. The radio map is subsequently...
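The core of location fingerprinting can be sketched as nearest-neighbor matching in signal space: compare the observed signal-strength vector against the radio map and return the closest surveyed location. The radio-map values below are invented for illustration, not data from the paper.

```python
import math

# Toy radio map: location -> mean RSSI (dBm) from three access points.
# Values are hypothetical, purely for illustration.
RADIO_MAP = {
    "room_a": (-40.0, -70.0, -80.0),
    "room_b": (-75.0, -45.0, -72.0),
    "hallway": (-60.0, -60.0, -60.0),
}

def locate(observed):
    """Nearest-neighbor fingerprint matching: return the surveyed
    location whose fingerprint is closest in Euclidean signal space."""
    return min(RADIO_MAP, key=lambda loc: math.dist(RADIO_MAP[loc], observed))

where = locate((-42.0, -68.0, -79.0))  # closest to room_a's fingerprint
```
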
Going beyond representational anthropology
Winther, Ida Wentzel
Going beyond representational anthropology: Re-presenting bodily, emotional and virtual practices in everyday life. Separated youngsters and families in Greenland. Greenland is a huge island with a total of four high schools. Many youngsters (age 16-18) move far away from home in order to get...
Reflection on Political Representation
Kusche, Isabel
2017-01-01
This article compares how Members of Parliament in the United Kingdom and Ireland reflect on constituency service as an aspect of political representation. It differs from existing research on the constituency role of MPs in two regards. First, it approaches the question from a sociological viewp...
Social representations about cancer
Andreja Cirila Škufca
2003-09-01
In this article we present the results of a comparison study on social representations and causal attributions about cancer. We compared a group of breast cancer survivors with a control group without experience of cancer of their own. Although social representations about cancer differ in each group, they are closely related to the concepts of suffering, dying and death. We found differences in the causal attribution of cancer. In both groups we found a category of risky behavior, which attributes responsibility for the disease to the individual. Beyond these factors, stress and psychological influences predominated in the cancer survivors group, whereas the control group indicated factors outside one's control, e.g. heredity and environmental factors. Representations of a disease inside a person's social space are important in co-shaping the individual process of coping with one's own disease. Since these representations are not always coherent with the knowledge of modern medicine, their recognition and appreciation in the course of treatment is of great value. We find these findings of applied social psychology important as starting points for therapeutic work with patients.
Tervo, Juuso
2012-01-01
In "Postphysical Vision: Art Education's Challenge in an Age of Globalized Aesthetics (A Mondofesto)" (2008) and "Beyond Aesthetics: Returning Force and Truth to Art and Its Education" (2009), jan jagodzinski argued for politics that go "beyond" representation--a project that radically questions visual culture…
Women and political representation.
Rathod, P B
1999-01-01
A remarkable progress in women's participation in politics throughout the world was witnessed in the final decade of the 20th century. According to the Inter-Parliamentary Union report, there were only eight countries with no women in their legislatures in 1998. The number of women ministers at the cabinet level worldwide doubled in a decade, and the number of countries without any women ministers dropped from 93 to 48 during 1987-96. However, this progress is far from satisfactory. Political representation of women, minorities, and other social groups is still inadequate. This may be due to a complex combination of socioeconomic, cultural, and institutional factors. The view that women's political participation increases with social and economic development is supported by data from the Nordic countries, where there are higher proportions of women legislators than in less developed countries. While better levels of socioeconomic development, having a women-friendly political culture, and higher literacy are considered favorable factors for women's increased political representation, adopting one of the proportional representation systems (such as a party-list system, a single transferable vote system, or a mixed proportional system with multi-member constituencies) is the single factor most responsible for the higher representation of women.
Multi-representation ability of students on the problem solving physics
Theasy, Y.; Wiyanto; Sujarwata
2018-03-01
The accuracy with which students represent knowledge shows their level of understanding. The multi-representation ability of students in physics problem solving was studied through a qualitative grounded-theory method, implemented with physics education students of Unnes in the academic year 2016/2017. The forms of representation used are verbal (V), images/diagrams (D), graphs (G), and mathematical (M). Students in the high and low categories used graphical representation (G) accurately in 83% and 77.78% of cases, respectively, while students in the medium category used image representation (D) accurately in 66% of cases.
Cohen, Adam S; Sasaki, Joni Y; German, Tamsin C
2015-03-01
Does theory of mind depend on a capacity to reason about representations generally or on mechanisms selective for the processing of mental state representations? In four experiments, participants reasoned about beliefs (mental representations) and notes (non-mental, linguistic representations), which according to two prominent theories are closely matched representations because both are represented propositionally. Reaction times were faster and accuracies higher when participants endorsed or rejected statements about false beliefs than about false notes (Experiment 1), even when statements emphasized representational format (Experiment 2), which should have favored the activation of representation concepts. Experiments 3 and 4 ruled out a counterhypothesis that differences in task demands were responsible for the advantage in belief processing. These results demonstrate for the first time that understanding of mental and linguistic representations can be dissociated even though both may carry propositional content, supporting the theory that mechanisms governing theory of mind reasoning are narrowly specialized to process mental states, not representations more broadly. Extending this theory, we discuss whether less efficient processing of non-mental representations may be a by-product of mechanisms specialized for processing mental states.
In defense of abstract conceptual representations.
Binder, Jeffrey R
2016-08-01
An extensive program of research in the past 2 decades has focused on the role of modal sensory, motor, and affective brain systems in storing and retrieving concept knowledge. This focus has led in some circles to an underestimation of the need for more abstract, supramodal conceptual representations in semantic cognition. Evidence for supramodal processing comes from neuroimaging work documenting a large, well-defined cortical network that responds to meaningful stimuli regardless of modal content. The nodes in this network correspond to high-level "convergence zones" that receive broadly crossmodal input and presumably process crossmodal conjunctions. It is proposed that highly conjunctive representations are needed for several critical functions, including capturing conceptual similarity structure, enabling thematic associative relationships independent of conceptual similarity, and providing efficient "chunking" of concept representations for a range of higher order tasks that require concepts to be configured as situations. These hypothesized functions account for a wide range of neuroimaging results showing modulation of the supramodal convergence zone network by associative strength, lexicality, familiarity, imageability, frequency, and semantic compositionality. The evidence supports a hierarchical model of knowledge representation in which modal systems provide a mechanism for concept acquisition and serve to ground individual concepts in external reality, whereas broadly conjunctive, supramodal representations play an equally important role in concept association and situation knowledge.
Mostafavi, Kamal; Tutunea-Fatan, O Remus; Bordatchev, Evgueni V; Johnson, James A
2014-12-01
The strong advent of computer-assisted technologies in modern orthopedic surgery prompts the expansion of computationally efficient techniques built on the broad base of readily available computer-aided engineering tools. However, one of the common challenges of the current developmental phase remains the lack of reliable frameworks allowing a fast and precise conversion of the anatomical information acquired through computed tomography to a format acceptable to computer-aided engineering software. To address this, this study proposes an integrated and automatic framework capable of extracting and then postprocessing the original imaging data to a common planar and closed B-Spline representation. The core of the developed platform relies on the approximation of the discrete computed tomography data by means of an original two-step B-Spline fitting technique based on successive deformations of the control polygon. In addition to its rapidity and robustness, the developed fitting technique was validated to produce accurate representations that do not deviate by more than 0.2 mm with respect to alternate representations of the bone geometry obtained through different contact-based data acquisition or data processing methods.
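As an illustration of the target representation, a closed planar contour can be fit with a periodic cubic B-spline. The sketch below uses scipy's generic smoothing splines rather than the paper's two-step control-polygon deformation, and a noisy circle stands in for a CT bone-slice boundary; both are my own choices for illustration.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# A closed planar contour: a circle with small smooth perturbations,
# standing in for a segmented bone boundary (illustrative data only).
t = np.linspace(0.0, 2.0 * np.pi, 100)
x = np.cos(t) + 0.01 * np.sin(7 * t)
y = np.sin(t) + 0.01 * np.cos(5 * t)

# Periodic (closed) cubic B-spline least-squares fit; s bounds the
# total squared residual of the approximation.
tck, u = splprep([x, y], s=1e-3, per=True)
xf, yf = splev(u, tck)
max_dev = np.max(np.hypot(xf - x, yf - y))  # worst-case deviation
```
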
Towards accurate emergency response behavior
Sargent, T.O.
1981-01-01
Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail
Ali Bagheri, Mohammad; Gao, Qigang; Guerrero, Sergio Escalera
2015-01-01
the performance of an ensemble of action learning techniques, each performing the recognition task from a different perspective. The underlying idea is that instead of aiming at a very sophisticated and powerful representation/learning technique, we can learn action categories using a set of relatively simple... To improve the recognition performance, a powerful combination strategy is utilized based on the Dempster-Shafer theory, which can effectively make use of the diversity of base learners trained on different sources of information. The recognition results of the individual classifiers are compared with those obtained from fusing the classifiers' output, showing enhanced performance of the proposed methodology.
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case. Travelers prefer the best-condition route given accurate information, yet delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality is helpful in improving efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
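The boundedly rational choice rule described here — random choice whenever the reported cost difference falls below the threshold BR, otherwise take the cheaper route — can be sketched directly. The function below is my paraphrase of that rule, not the paper's code.

```python
import random

def choose_route(cost_a, cost_b, br_threshold, rng=random):
    """Boundedly rational binary route choice: if the reported cost
    difference is below the threshold BR, pick either route with
    equal probability; otherwise take the cheaper route."""
    if abs(cost_a - cost_b) < br_threshold:
        return rng.choice(["A", "B"])
    return "A" if cost_a < cost_b else "B"

# With BR = 5, a 1-unit difference is below threshold (random pick),
# while a 10-unit difference deterministically selects the cheaper route.
```
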
Accurate Energy Consumption Modeling of IEEE 802.15.4e TSCH Using Dual-Band OpenMote Hardware.
Daneels, Glenn; Municio, Esteban; Van de Velde, Bruno; Ergeerts, Glenn; Weyn, Maarten; Latré, Steven; Famaey, Jeroen
2018-02-02
The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks.
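A state-based energy model of the kind described sums, per TSCH slot, the charge drawn in each CPU/radio state. The sketch below uses hypothetical per-state currents and durations for illustration only; they are not the measured OpenMote values from the paper.

```python
# Hypothetical per-state current draws in mA (illustrative only,
# not the paper's measured OpenMote values).
STATE_CURRENT_MA = {
    "cpu_active": 5.0,
    "radio_tx": 24.0,
    "radio_rx": 20.0,
    "sleep": 0.002,
}

def slot_charge_uC(durations_ms):
    """Charge for one TSCH slot in microcoulombs: Q = sum_i I_i * t_i,
    where mA * ms = microcoulombs."""
    return sum(STATE_CURRENT_MA[s] * t for s, t in durations_ms.items())

# Hypothetical state timeline of one transmit slot (ms per state).
tx_slot = {"cpu_active": 2.0, "radio_tx": 4.0, "radio_rx": 1.0, "sleep": 3.0}
q = slot_charge_uC(tx_slot)
```

Multiplying per-slot charge by the slotframe schedule and the supply voltage yields an energy prediction over any horizon.
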
The Interaction between Semantic Representation and Episodic Memory.
Fang, Jing; Rüther, Naima; Bellebaum, Christian; Wiskott, Laurenz; Cheng, Sen
2018-02-01
The experimental evidence on the interrelation between episodic memory and semantic memory is inconclusive. Are they independent systems, different aspects of a single system, or separate but strongly interacting systems? Here, we propose a computational role for the interaction between the semantic and episodic systems that might help resolve this debate. We hypothesize that episodic memories are represented as sequences of activation patterns. These patterns are the output of a semantic representational network that compresses the high-dimensional sensory input. We show quantitatively that the accuracy of episodic memory crucially depends on the quality of the semantic representation. We compare two types of semantic representations: appropriate representations, which means that the representation is used to store input sequences that are of the same type as those that it was trained on, and inappropriate representations, which means that stored inputs differ from the training data. Retrieval accuracy is higher for appropriate representations because the encoded sequences are less divergent than those encoded with inappropriate representations. Consistent with our model prediction, we found that human subjects remember some aspects of episodes significantly more accurately if they had previously been familiarized with the objects occurring in the episode, as compared to episodes involving unfamiliar objects. We thus conclude that the interaction with the semantic system plays an important role for episodic memory.
Standard model of knowledge representation
Yin, Wensheng
2016-09-01
Knowledge representation is the core of artificial intelligence research. Knowledge representation methods include predicate logic, semantic networks, computer programming languages, databases, mathematical models, graphics languages, natural language, etc. To establish the intrinsic link between the various knowledge representation methods, a unified knowledge representation model is necessary. Drawing on ontology, system theory, and control theory, a standard model of knowledge representation that reflects change in the objective world is proposed. The model is composed of input, processing, and output. This knowledge representation method does not contradict traditional knowledge representation methods. It can express knowledge in multivariate and multidimensional terms, it can express process knowledge, and at the same time it has a strong problem-solving ability. In addition, the standard model of knowledge representation provides a way to handle imprecise and inconsistent knowledge.
Constructing visual representations
Huron, Samuel; Jansen, Yvonne; Carpendale, Sheelagh
2014-01-01
The accessibility of infovis authoring tools to a wide audience has been identified as a major research challenge. A key task in the authoring process is the development of visual mappings. While the infovis community has long been deeply interested in finding effective visual mappings, comparatively little attention has been placed on how people construct visual mappings. In this paper, we present the results of a study designed to shed light on how people transform data into visual representations. We asked people to create, update and explain their own information visualizations using only tangible building blocks. We learned that all participants, most of whom had little experience in visualization authoring, were readily able to create and talk about their own visualizations. Based on our observations, we discuss participants’ actions during the development of their visual representations.
Naturalising Representational Content
Shea, Nicholas
2014-01-01
This paper sets out a view about the explanatory role of representational content and advocates one approach to naturalising content – to giving a naturalistic account of what makes an entity a representation and in virtue of what it has the content it does. It argues for pluralism about the metaphysics of content and suggests that a good strategy is to ask the content question with respect to a variety of predictively successful information processing models in experimental psychology and cognitive neuroscience; and hence that data from psychology and cognitive neuroscience should play a greater role in theorising about the nature of content. Finally, the contours of the view are illustrated by drawing out and defending a surprising consequence: that individuation of vehicles of content is partly externalist. PMID:24563661
Knowledge Representation and Ontologies
Grimm, Stephan
Knowledge representation and reasoning aims at designing computer systems that reason about a machine-interpretable representation of the world. Knowledge-based systems have a computational model of some domain of interest in which symbols serve as surrogates for real world domain artefacts, such as physical objects, events, relationships, etc. [1]. The domain of interest can cover any part of the real world or any hypothetical system about which one desires to represent knowledge for com-putational purposes. A knowledge-based system maintains a knowledge base, which stores the symbols of the computational model in the form of statements about the domain, and it performs reasoning by manipulating these symbols. Applications can base their decisions on answers to domain-relevant questions posed to a knowledge base.
Europe representations in textbooks
Brennetot , Arnaud
2011-01-01
This EuroBroadMap working paper presents an analysis of textbooks dealing with the representations of Europe and European Union. In most of these textbooks from secondary school, the teaching of the geography of Europe precedes the evocation of the EU. Europe is often depicted as a given object, reduced to a number of structural aspects (relief, climate, demography, traditional cultures, economic activities, etc.) whose only common point is their location within conventional boundaries. Such ...
Jensen, Ole B.
2016-01-01
This chapter reviews the so-called “Non-Representational Theory” (NRT), known primarily from Anglo-Saxon human geography and advanced in particular by the English geographer Nigel Thrift since the mid-2000s. Since the position can hardly be said to be especially homogeneous, the chapter...
Harmonic Analysis and Group Representation
Figa-Talamanca, Alessandro
2011-01-01
This title includes: Lectures - A. Auslander, R. Tolimeri - Nilpotent groups and abelian varieties, M Cowling - Unitary and uniformly bounded representations of some simple Lie groups, M. Duflo - Construction de representations unitaires d'un groupe de Lie, R. Howe - On a notion of rank for unitary representations of the classical groups, V.S. Varadarajan - Eigenfunction expansions of semisimple Lie groups, and R. Zimmer - Ergodic theory, group representations and rigidity; and, Seminars - A. Koranyi - Some applications of Gelfand pairs in classical analysis.
Functional representations for quantized fields
Jackiw, R.
1988-01-01
This paper provides information on representing transformations in quantum theory. Bosonic quantum field theories: Schrödinger picture; representing transformations in bosonic quantum field theory; two-dimensional conformal transformations; Schrödinger picture representation; Fock space representation; inequivalent Schrödinger picture representations; discussion; self-dual and other models; field theory in de Sitter space. Fermionic quantum field theories: Schrödinger picture; Schrödinger picture representation for two-dimensional conformal transformations; Fock space dynamics in the Schrödinger picture; Fock space evaluation of anomalous current and conformal commutators.
Pioneers of representation theory
Curtis, Charles W
1999-01-01
The year 1897 was marked by two important mathematical events: the publication of the first paper on representations of finite groups by Ferdinand Georg Frobenius (1849-1917) and the appearance of the first treatise in English on the theory of finite groups by William Burnside (1852-1927). Burnside soon developed his own approach to representations of finite groups. In the next few years, working independently, Frobenius and Burnside explored the new subject and its applications to finite group theory. They were soon joined in this enterprise by Issai Schur (1875-1941) and some years later, by Richard Brauer (1901-1977). These mathematicians' pioneering research is the subject of this book. It presents an account of the early history of representation theory through an analysis of the published work of the principals and others with whom the principals' work was interwoven. Also included are biographical sketches and enough mathematics to enable readers to follow the development of the subject. An introductor...
Cohen-Macaulay representations
Leuschke, Graham J
2012-01-01
This book is a comprehensive treatment of the representation theory of maximal Cohen-Macaulay (MCM) modules over local rings. This topic is at the intersection of commutative algebra, singularity theory, and representations of groups and algebras. Two introductory chapters treat the Krull-Remak-Schmidt Theorem on uniqueness of direct-sum decompositions and its failure for modules over local rings. Chapters 3-10 study the central problem of classifying the rings with only finitely many indecomposable MCM modules up to isomorphism, i.e., rings of finite CM type. The fundamental material--ADE/simple singularities, the double branched cover, Auslander-Reiten theory, and the Brauer-Thrall conjectures--is covered clearly and completely. Much of the content has never before appeared in book form. Examples include the representation theory of Artinian pairs and Burban-Drozd's related construction in dimension two, an introduction to the McKay correspondence from the point of view of maximal Cohen-Macaulay modules, Au...
Factors affecting the representation of objects in distributed attention
Maxwell, Tricia Lesley
2011-01-01
Our phenomenological experience of what we see around us is of an accurate representation. However, such information is widely distributed in the brain, which necessitates some form of co-ordination of this information to enable a coherent view of the world. The most prominently researched theory is Feature Integration Theory (Treisman, 1993). This proposes that accurate binding is dependent on the current spatial distribution of attention. Individual objects compete for attentio...
Signal and image representation in combined spaces
Zeevi, Yehoshua; Chui, Charles K
1997-01-01
This volume explains how the recent advances in wavelet analysis provide new means for multiresolution analysis and describes its wide array of powerful tools. The book covers variations of the windowed Fourier transform, constructions of special waveforms suitable for specific tasks, the use of redundant representations in reconstruction and enhancement, applications of efficient numerical compression as a tool for fast numerical analysis, and approximation properties of various waveforms in different contexts.
Chao Yang
2018-03-01
An accurate and comprehensive representation of an observation task is a prerequisite in disaster monitoring to achieve reliable sensor observation planning. However, extant disaster event and task information models do not fully satisfy the observation requirements for the accurate and efficient planning of remote-sensing satellite sensors. By considering the modeling requirements for a disaster observation task, we propose an observation task chain (OTChain) representation model that includes four basic OTChain segments and an eight-tuple observation task metadata description structure. A prototype system, namely OTChainManager, is implemented to provide functions for modeling, managing, querying, and visualizing observation tasks. In the case of flood water monitoring, we use a flood remote-sensing satellite sensor observation task for the experiment. The results show that the proposed OTChain representation model can be used to model process-oriented flood disaster observation tasks. By querying and visualizing the flood observation task instances in the Jinsha River Basin, the proposed model can effectively express observation task processes, represent personalized observation constraints, and plan global remote-sensing satellite sensor observations. Compared with typical observation task information models or engines, the proposed OTChain representation model satisfies the information demands of the OTChain and its processes and supports the development of long time-series sensor observation schemes.
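The “eight-tuple observation task metadata description structure” could be rendered as a simple record type. The field names below are illustrative guesses, not the authors’ actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical rendering of an eight-tuple observation task record;
# every field name here is an assumption made for illustration only.
@dataclass
class ObservationTask:
    task_id: str
    event_type: str          # e.g. "flood"
    region: str              # area of interest, e.g. "Jinsha River Basin"
    time_window: tuple       # (start, end) as ISO-8601 strings
    spatial_resolution_m: float
    spectral_bands: list
    revisit_hours: float
    constraints: dict = field(default_factory=dict)  # personalized constraints

task = ObservationTask(
    task_id="OT-001", event_type="flood", region="Jinsha River Basin",
    time_window=("2018-07-01T00:00Z", "2018-07-08T00:00Z"),
    spatial_resolution_m=30.0, spectral_bands=["red", "nir"],
    revisit_hours=12.0,
)
```

A chain (OTChain) would then be an ordered sequence of such records, one per segment of the observation process.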
A stiffly accurate integrator for elastodynamic problems
Michels, Dominik L.
2017-07-21
We present a new integration algorithm for the accurate and efficient solution of stiff elastodynamic problems governed by the second-order ordinary differential equations of structural mechanics. Current methods have the shortcoming that their performance is highly dependent on the numerical stiffness of the underlying system that often leads to unrealistic behavior or a significant loss of efficiency. To overcome these limitations, we present a new integration method which is based on a mathematical reformulation of the underlying differential equations, an exponential treatment of the full nonlinear forcing operator as opposed to more standard partially implicit or exponential approaches, and the utilization of the concept of stiff accuracy which ensures that the efficiency of the simulations is significantly less sensitive to increased stiffness. As a consequence, we are able to tremendously accelerate the simulation of stiff systems compared to established integrators and significantly increase the overall accuracy. The advantageous behavior of this approach is demonstrated on a broad spectrum of complex examples like deformable bodies, textiles, bristles, and human hair. Our easily parallelizable integrator enables more complex and realistic models to be explored in visual computing without compromising efficiency.
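One concrete instance of the exponential treatment described above is the exponential Euler scheme for a split system y' = -λy + g(t, y): the stiff linear part is propagated exactly via exp(-λh), so stability no longer degrades as λ grows. This is a generic sketch of the idea under that splitting assumption, not the authors’ full nonlinear forcing operator:

```python
import math

def exponential_euler(y0, lam, g, h, n_steps, t0=0.0):
    """Exponential Euler for y' = -lam*y + g(t, y).

    The stiff linear part is integrated exactly with exp(-lam*h); the
    remainder g is frozen over each step. An explicit Euler step would
    require h < 2/lam for stability, while this scheme has no such limit.
    """
    y, t = y0, t0
    e = math.exp(-lam * h)
    phi = (1.0 - e) / lam  # integrates the frozen forcing over one step
    for _ in range(n_steps):
        y = e * y + phi * g(t, y)
        t += h
    return y
```

For g = 0 (pure decay) and for constant forcing the scheme is exact, which is a quick way to check an implementation.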
Advances in visual representation of molecular potentials.
Du, Qi-Shi; Huang, Ri-Bo; Chou, Kuo-Chen
2010-06-01
The recent advances in visual representations of molecular properties in 3D space are summarized, and their applications in molecular modeling study and rational drug design are introduced. The visual representation methods provide us with detailed insights into protein-ligand interactions, and hence can play a major role in elucidating the structure or reactivity of a biomolecular system. Three newly developed computation and visualization methods for studying the physical and chemical properties of molecules are introduced, including their electrostatic potential, lipophilicity potential and excess chemical potential. The newest application examples of visual representations in structure-based rational drug are presented. The 3D electrostatic potentials, calculated using the empirical method (EM-ESP), in which the classical Coulomb equation and traditional atomic partial changes are discarded, are highly consistent with the results by the higher level quantum chemical method. The 3D lipophilicity potentials, computed by the heuristic molecular lipophilicity potential method based on the principles of quantum mechanics and statistical mechanics, are more accurate and reliable than those by using the traditional empirical methods. The 3D excess chemical potentials, derived by the reference interaction site model-hypernetted chain theory, provide a new tool for computational chemistry and molecular modeling. For structure-based drug design, the visual representations of molecular properties will play a significant role in practical applications. It is anticipated that the new advances in computational chemistry will stimulate the development of molecular modeling methods, further enriching the visual representation techniques for rational drug design, as well as other relevant fields in life science.
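As a baseline for comparison, the “classical Coulomb equation with traditional atomic partial charges” that EM-ESP improves on amounts to a point-charge sum. A minimal sketch in SI units, with hypothetical charge placements:

```python
import math

K_E = 8.9875517873681764e9  # Coulomb constant, N*m^2/C^2

def electrostatic_potential(point, charges):
    """Classical point-charge electrostatic potential at `point`:
    phi = k_e * sum(q_i / |r - r_i|).

    `charges` is a list of ((x, y, z), q) pairs in meters and coulombs.
    This is the traditional scheme, shown only as a baseline sketch.
    """
    px, py, pz = point
    phi = 0.0
    for (x, y, z), q in charges:
        r = math.sqrt((px - x) ** 2 + (py - y) ** 2 + (pz - z) ** 2)
        phi += q / r
    return K_E * phi

# Example: a single 1 nC charge at the origin, evaluated 1 m away.
phi_example = electrostatic_potential((1.0, 0.0, 0.0),
                                      [((0.0, 0.0, 0.0), 1e-9)])
```

Mapping such values over a 3D grid around a molecule is what produces the potential surfaces used in the visualizations discussed above.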
Hierarchical Representation Learning for Kinship Verification.
Kohli, Naman; Vatsa, Mayank; Singh, Richa; Noore, Afzel; Majumdar, Angshul
2017-01-01
Kinship verification has a number of applications such as organizing large collections of images and recognizing resemblances among humans. In this paper, first, a human study is conducted to understand the capabilities of the human mind and to identify the discriminatory areas of a face that facilitate kinship cues. The visual stimuli presented to the participants determine their ability to recognize kin relationships using the whole face as well as specific facial regions. The effect of participant gender and age and of the kin-relation pair of the stimulus is analyzed using quantitative measures such as accuracy, discriminability index d', and perceptual information entropy. Utilizing the information obtained from the human study, a hierarchical kinship verification via representation learning (KVRL) framework is utilized to learn the representation of different face regions in an unsupervised manner. We propose a novel approach for feature representation termed filtered contractive deep belief networks (fcDBN). The proposed feature representation encodes relational information present in images using filters and a contractive regularization penalty. A compact representation of facial images of kin is extracted as an output from the learned model, and a multi-layer neural network is utilized to verify kinship accurately. A new WVU kinship database is created, which consists of multiple images per subject to facilitate kinship verification. The results show that the proposed deep learning framework (KVRL-fcDBN) yields the state-of-the-art kinship verification accuracy on the WVU kinship database and on four existing benchmark data sets. Furthermore, kinship information is used as a soft biometric modality to boost the performance of face verification via product of likelihood ratio and support vector machine based approaches. Using the proposed KVRL-fcDBN framework, an improvement of over 20% is observed in the performance of face verification.
The Accurate Particle Tracer Code
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi
2016-01-01
The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...
Deslattes, R.D.
1987-01-01
Heavy ion accelerators are the most flexible and readily accessible sources of highly charged ions. Ions with only one or two remaining electrons have spectra whose accurate measurement is of considerable theoretical significance. Certain features of ion production by accelerators tend to limit the accuracy that can be realized in measurements of these spectra. This report aims to provide background on these spectroscopic limitations and to discuss how accelerator operations may be selected to permit attaining intrinsically limited data.
Categorification and higher representation theory
Beliakova, Anna
2017-01-01
The emergent mathematical philosophy of categorification is reshaping our view of modern mathematics by uncovering a hidden layer of structure in mathematics, revealing richer and more robust structures capable of describing more complex phenomena. Categorified representation theory, or higher representation theory, aims to understand a new level of structure present in representation theory. Rather than studying actions of algebras on vector spaces where algebra elements act by linear endomorphisms of the vector space, higher representation theory describes the structure present when algebras act on categories, with algebra elements acting by functors. The new level of structure in higher representation theory arises by studying the natural transformations between functors. This enhanced perspective brings into play a powerful new set of tools that deepens our understanding of traditional representation theory. This volume exhibits some of the current trends in higher representation theory and the diverse te...
Inversion of Auditory Spectrograms, Traditional Spectrograms, and Other Envelope Representations
Decorsière, Remi Julien Blaise; Søndergaard, Peter Lempel; MacDonald, Ewen
2015-01-01
Envelope representations such as the auditory or traditional spectrogram can be defined by the set of envelopes from the outputs of a filterbank. Common envelope extraction methods discard information regarding the fast fluctuations, or phase, of the signal. Thus, it is difficult to invert, or re… to the framework is proposed, which leads to a more accurate inversion of traditional spectrograms…
Stereotypes and Representations of Aging in the Media
Mason, Susan E.; Darnell, Emily A.; Prifti, Krisiola
2010-01-01
How are older adults presented in print and in the electronic media? Are they underrepresented? Are they accurately portrayed? Based on our examination of several forms of media over a four-month period, we discuss the role of the media in shaping our views on aging. Quantitative and qualitative analyses reveal that media representations often…
Ngodock, Hans E; Smith, Scott R; Jacobs, Gregg A
2007-01-01
... (LCE) in the Gulf of Mexico. It was reported that the representer method was more accurate than its ensemble counterparts, yet it had difficulties fitting the data in the last month of the 4-month assimilation window...
Loddegaard, Anne
2012-01-01
out of place in a novel belonging to the serious combat literature of the Catholic Revival, and the direct representation of the supernatural is also surprising because previous Catholic Revival novelists, such as Léon Bloy and Joris-Karl Huysmans, maintain a realistic, non-magical world and deal… Satan episode in Under Satan’s Sun is neither a break with the seriousness nor with the realism of the Catholic novel. On the basis of Tzvetan Todorov’s definition of the traditional fantastic tale, the analysis shows that only the beginning of the fantastic episode follows Todorov’s definition…
Representations of commonsense knowledge
Davis, Ernest
1990-01-01
Representations of Commonsense Knowledge provides a rich language for expressing commonsense knowledge and inference techniques for carrying out commonsense knowledge. This book provides a survey of the research on commonsense knowledge.Organized into 10 chapters, this book begins with an overview of the basic ideas on artificial intelligence commonsense reasoning. This text then examines the structure of logic, which is roughly analogous to that of a programming language. Other chapters describe how rules of universal validity can be applied to facts known with absolute certainty to deduce ot
Between Representation and Eternity
Atzbach, Rainer
2016-01-01
This paper seeks to explore how prayer and praying practice are reflected in archaeological sources. Apart from objects directly involved in the personal act of praying, such as rosaries and praying books, churches and religious foundations played a major role in the medieval system of intercession… At death, an individual’s corpse and burial primarily reflect the social act of representation during the funeral. The position of the arms, which have incorrectly been used as a chronological tool in Scandinavia, may indicate an evolution from a more collective act of prayer up to the eleventh century...
Multiscale wavelet representations for mammographic feature analysis
Laine, Andrew F.; Song, Shuwu
1992-12-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet coefficients, enhanced by linear, exponential and constant weight functions localized in scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improving quality) while requiring less time to evaluate mammograms for most patients (lowering costs).
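The enhance-by-weighting-coefficients idea can be illustrated with a one-level orthonormal Haar transform in 1D: decompose, scale the detail (fine-scale) coefficients by a constant weight, and reconstruct. This is a toy analogue of the paper’s multiscale scheme, not its actual wavelets or weight functions:

```python
def haar_level(signal):
    """One level of the orthonormal Haar wavelet transform.
    `signal` must have even length; returns (approximation, detail)."""
    s = 2 ** -0.5
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Inverse of haar_level: perfect reconstruction when detail is unchanged."""
    s = 2 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out

def enhance(signal, gain):
    """Amplify fine-scale coefficients by a constant weight, then reconstruct;
    gain=1.0 reproduces the input, gain>1.0 sharpens local contrast."""
    approx, detail = haar_level(signal)
    return haar_inverse(approx, [gain * d for d in detail])
```

In a real multiscale scheme the same weighting is applied per decomposition level, with level-dependent (linear, exponential, or constant) gains.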
Taylor, Roger S.; Grundstrom, Erika D.
2011-01-01
Given that astronomy relies heavily on visual representations, it is especially likely that individuals will assume that instructional materials, such as visual representations of the Earth-Moon system (EMS), are relatively accurate. However, in our research, we found that images in middle-school textbooks and educational webpages were commonly…
Accurate determination of antenna directivity
Dich, Mikael
1997-01-01
The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna, for which the radiated power density is known in a finite number of points on the far-field sphere, is presented. The main application of the formula is the determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
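Without the spherical-wave machinery, the quantity being estimated is D = 4π·U_max / P_rad, where P_rad is the power density integrated over the far-field sphere. A brute-force midpoint-rule sketch (not the paper’s expansion-based method, which needs far fewer samples):

```python
import math

def directivity(pattern, n_theta=400, n_phi=400):
    """Estimate D = 4*pi*U_max / P_rad from a power pattern U(theta, phi),
    integrating over the sphere with a midpoint rule on a theta-phi grid."""
    d_theta = math.pi / n_theta
    d_phi = 2.0 * math.pi / n_phi
    p_rad, u_max = 0.0, 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta
        for j in range(n_phi):
            phi = (j + 0.5) * d_phi
            u = pattern(theta, phi)
            u_max = max(u_max, u)
            p_rad += u * math.sin(theta) * d_theta * d_phi  # dOmega element
    return 4.0 * math.pi * u_max / p_rad

# Example: U = cos^2(theta) on the upper hemisphere has exact directivity 6.
def cos2_pattern(theta, phi):
    return math.cos(theta) ** 2 if theta <= math.pi / 2 else 0.0

d_est = directivity(cos2_pattern)
```

The point of the paper’s spherical-wave criterion is precisely to replace this dense grid with the minimum number of power-density samples.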
Social Representations of Intelligence
Elena Zubieta
2016-02-01
The article stresses the relationship between explicit and implicit theories of intelligence. Following the line of common-sense epistemology and the theory of Social Representations, a study was carried out to analyze lay explanations of definitions of intelligence. Based on Mugny & Carugati's (1989) research, a self-administered questionnaire was designed and filled in by 286 subjects. The results are congruent with the main hypothesis postulated: a general overlap between explicit and implicit theories showed up. According to the results, intelligence appears both as a social attribute related to social adaptation and as a concept defined in relation to contextual variables, similar to experts' current discourses. Nevertheless, conceptions based on a "gifted ideology" are still present, stressing the main axes of the intelligence debate: biological versus sociological determinism. In the same sense, unfamiliarity and social identity are reaffirmed as organizing principles of social representation. The distance from the object (measured as the belief that differences in intelligence are a solvable or unsolvable problem) and the level of implication with the topic (teachers vs. non-teachers) appear as discriminating elements when specific dimensions are endorsed.
Sakti, Apurba; Gallagher, Kevin G.; Sepulveda, Nestor; Uckun, Canan; Vergara, Claudio; de Sisternes, Fernando J.; Dees, Dennis W.; Botterud, Audun
2017-02-01
We develop three novel enhanced mixed-integer linear representations of the power limit of the battery and its efficiency as a function of the charge and discharge power and the state of charge of the battery, which can be directly implemented in large-scale power systems models and solved with commercial optimization solvers. Using these battery representations, we conduct a techno-economic analysis of the performance of a 10 MWh lithium-ion battery system, testing the effect of a 5-min vs. a 60-min price signal on profits using real-time prices from a selected node in the MISO electricity market. Results show that models of lithium-ion batteries in which the power limits and efficiency are held constant overestimate profits by 10% compared to those obtained from an enhanced representation that more closely matches the real behavior of the battery. When the battery system is exposed to a 5-min price signal, the energy arbitrage profitability improves by 60% compared to that from hourly price exposure. These results indicate that a more accurate representation of Li-ion batteries, as well as the market rules that govern the frequency of electricity prices, can play a major role in the estimation of the value of battery technologies for power grid applications.
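To see what an energy-arbitrage calculation with constant battery parameters looks like, a toy dispatch against a price series is enough. The greedy threshold policy below, with a fixed power limit and charge-side efficiency, is a deliberately simple baseline and not the paper’s mixed-integer formulation:

```python
def arbitrage_profit(prices, e_max=10.0, p_max=2.0, eff=0.9):
    """Toy arbitrage dispatch: buy energy when the price is below the mean,
    sell when above, with constant power limit p_max (MWh per interval),
    energy capacity e_max (MWh), and charge-side efficiency eff.
    Returns (profit, final state of charge)."""
    mean_price = sum(prices) / len(prices)
    soc, profit = 0.0, 0.0
    for price in prices:
        if price < mean_price and soc < e_max:
            grid = min(p_max, (e_max - soc) / eff)  # energy bought from grid
            soc += grid * eff                       # stored after losses
            profit -= grid * price
        elif price > mean_price and soc > 0.0:
            sell = min(p_max, soc)                  # energy delivered to grid
            soc -= sell
            profit += sell * price
    return profit, soc
```

Replacing the constant `p_max` and `eff` with state-of-charge-dependent functions is exactly the refinement the paper shows to matter (a ~10% profit correction in its case study).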
Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients, Phase I
National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depends critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...
An accurate nonlinear Monte Carlo collision operator
Wang, W.X.; Okamoto, M.; Nakajima, N.; Murakami, S.
1995-03-01
A three-dimensional nonlinear Monte Carlo collision model is developed based on Coulomb binary collisions, with emphasis on both accuracy and implementation efficiency. The operator, of simple form, fulfills the particle number, momentum, and energy conservation laws and is equivalent to the exact Fokker-Planck operator in that it correctly reproduces the friction coefficient and diffusion tensor; in addition, it effectively ensures small-angle collisions, with a binary scattering angle distributed in a limited range near zero. Two highly vectorizable algorithms are designed for its fast implementation. Various test simulations regarding relaxation processes, electrical conductivity, etc. are carried out in velocity space. The test results, which are in good agreement with theory, and timing results on vector computers show that it is practically applicable. The operator may be used for accurately simulating collisional transport problems in magnetized and unmagnetized plasmas.
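The core of a binary-collision operator of this kind is an energy- and momentum-conserving rotation of the relative velocity in the center-of-mass frame. A sketch for equal-mass particles using the standard Takizuka-Abe-style rotation (the scattering angles are supplied by the caller here; sampling them from the proper small-angle distribution is omitted):

```python
import math

def binary_collision(v1, v2, theta, phi):
    """Elastic binary collision of two equal-mass particles.

    The relative velocity u = v1 - v2 is rotated by scattering angle theta
    (azimuth phi) in the center-of-mass frame, so total momentum and total
    kinetic energy are conserved exactly. Returns the post-collision pair."""
    vc = [(a + b) / 2.0 for a, b in zip(v1, v2)]     # center-of-mass velocity
    ux, uy, uz = (a - b for a, b in zip(v1, v2))     # relative velocity
    umag = math.sqrt(ux * ux + uy * uy + uz * uz)
    uperp = math.sqrt(ux * ux + uy * uy)
    st, ct = math.sin(theta), math.cos(theta)
    sp, cp = math.sin(phi), math.cos(phi)
    if uperp > 1e-30:
        # standard rotation of u by (theta, phi); preserves |u|
        un = [ux * ct + (ux / uperp) * uz * st * cp - (uy / uperp) * umag * st * sp,
              uy * ct + (uy / uperp) * uz * st * cp + (ux / uperp) * umag * st * sp,
              uz * ct - uperp * st * cp]
    else:
        # degenerate case: u along z-axis
        un = [umag * st * cp, umag * st * sp, uz * ct]
    v1n = [c + 0.5 * u for c, u in zip(vc, un)]
    v2n = [c - 0.5 * u for c, u in zip(vc, un)]
    return v1n, v2n
```

Because the rotation preserves |u| and the center-of-mass velocity is untouched, the conservation laws hold to machine precision, which is the property the abstract emphasizes.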
Accurate Modeling Method for Cu Interconnect
Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko
This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15 μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameter Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90 nm, 65 nm, and 55 nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what has conventionally been treated as random variation, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.
Accurate control testing for clay liner permeability
Mitchell, R J
1991-08-01
Two series of centrifuge tests were carried out to evaluate the use of centrifuge modelling as a method of accurate control testing of clay liner permeability. The first series used a large 3 m radius geotechnical centrifuge and the second series a small 0.5 m radius machine built specifically for research on clay liners. Two permeability cells were fabricated in order to provide direct data comparisons between the two methods of permeability testing. In both cases, the centrifuge method proved to be effective and efficient, and was found to be free of both the technical difficulties and leakage risks normally associated with laboratory permeability testing of fine grained soils. Two materials were tested, a consolidated kaolin clay having an average permeability coefficient of 1.2×10⁻⁹ m/s and a compacted illite clay having a permeability coefficient of 2.0×10⁻¹¹ m/s. Four additional tests were carried out to demonstrate that the 0.5 m radius centrifuge could be used for linear performance modelling to evaluate factors such as volumetric water content, compaction method and density, leachate compatibility and other construction effects on liner leakage. The main advantages of centrifuge testing of clay liners are rapid and accurate evaluation of hydraulic properties and realistic stress modelling for performance evaluations. 8 refs., 12 figs., 7 tabs.
Representation of an open repository in groundwater flow models
Painter, Scott; Sun, Alexander
2005-08-01
The effect of repository tunnels on groundwater flow has been identified as a potential issue for the nuclear waste repository being considered by SKB for a fractured granite formation in Sweden. In particular, the following pre-closure and post-closure processes have been identified as being important: inflows into open tunnels as functions of estimated grouting efficiencies, drawdown of the water table in the vicinity of the repository, upconing of saline water, 'turnover' of surface water in the upper bedrock, and resaturation of backfilled tunnels following repository closure. The representation of repository tunnels within groundwater models is addressed in this report. The primary focus is on far-field flow that is modeled with a continuum porous medium approximation. Of particular interest are the consequences of the tunnel representation on the transient response of the groundwater system to repository operations and repository closure, as well as modeling issues such as how the water-table free surface and the coupling to near-surface hydrogeology should be handled. The overall objectives are to understand the consequences of current representations and to identify appropriate approximations for representing open tunnels in future groundwater modeling studies. The following conclusions can be drawn from the results of the simulations: 1. Two-phase flow may be induced in the vicinity of repository tunnels during repository pre-closure operations, but the formation of a two-phase flow region will not significantly affect far-field flow or inflows into tunnels. 2. The water table will be drawn down to the repository horizon and tunnel inflows will reach a steady-state value within about 5 years. 3. Steady-state inflows at the repository edge are estimated to be about 250 m³/year per meter of tunnel. Inflows will be greater during the transient de-watering period and less for tunnel locations closer to the repository center. 4. Significant amounts of water
Accurate Modeling of Advanced Reflectarrays
Zhou, Min
… to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared … of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical Near-Field Antenna Test Facility, it was concluded that the three latter factors are particularly important … using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility …
Parental representations of transsexuals.
Parker, G; Barr, R
1982-06-01
The parental representations of 30 male-to-female transsexuals were rated using a measure of fundamental parental dimensions and shown to be of acceptable validity as a measure both of perceived and actual parental characteristics. Scores on that measure were compared separately against scores returned by matched male and female controls. The transsexuals did not differ from the male controls in their scoring of their mothers but did score their fathers as less caring and more overprotective. These differences were weaker for the comparisons made against the female controls. Item analyses suggested that the greater paternal "overprotection" experienced by transsexuals was due to their fathers being perceived as offering less encouragement to their sons' independence and autonomy. Several interpretations of the findings are considered.
Computer aided surface representation
Barnhill, R.E.
1990-02-19
The central research problem of this project is the effective representation, computation, and display of surfaces interpolating to information in three or more dimensions. If the given information is located on another surface, then the problem is to construct a "surface defined on a surface". Sometimes properties of an already defined surface are desired, which is "geometry processing". Visualization of multivariate surfaces is possible by means of contouring higher dimensional surfaces. These problems and more are discussed below. The broad sweep from constructive mathematics through computational algorithms to computer graphics illustrations is utilized in this research. The breadth and depth of this research activity makes this research project unique.
The representation of neutron polarization
Byrne, J.
1979-01-01
Neutron beam polarization representation is discussed under the headings: transfer matrices, coherent parity violation for neutrons, neutron spin rotation in helical magnetic fields, and polarization and interference. (UK)
Sinusoidal Representation of Acoustic Signals
Honda, Masaaki
Sinusoidal representation of acoustic signals has been an important tool in speech and music processing for tasks such as signal analysis, synthesis, and time-scale or pitch modification. It is applicable to arbitrary signals, which is an important advantage over other signal representations such as physical modeling of acoustic signals. In the sinusoidal representation, acoustic signals are composed as sums of sinusoids (sine waves) with different amplitudes, frequencies and phases, based on the time-dependent short-time Fourier transform (STFT). This article describes the principles of acoustic signal analysis/synthesis based on the sinusoidal representation, with a focus on sine waves with rapidly varying frequency.
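The synthesis side of the sinusoidal model can be sketched in a few lines. The component amplitudes, frequencies and phases below are arbitrary assumed values; the analysis step here uses a plain windowed FFT as a stand-in for a full STFT peak tracker:

```python
import numpy as np

# Minimal sketch of the sinusoidal model: a signal is a sum of sine waves,
# each with its own amplitude, frequency and phase (parameters assumed).
fs = 8000                       # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)   # 100 ms of samples

components = [   # (amplitude, frequency in Hz, phase in rad)
    (1.0, 440.0, 0.0),
    (0.5, 880.0, np.pi / 4),
    (0.25, 1320.0, np.pi / 2),
]

# Synthesis: x[n] = sum_k A_k * sin(2*pi*f_k*t[n] + phi_k)
x = sum(a * np.sin(2 * np.pi * f * t + p) for a, f, p in components)

# Analysis side: the magnitude peak of a windowed spectrum recovers
# the strongest component's frequency.
spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
print(f"dominant frequency: {freqs[np.argmax(spectrum)]:.0f} Hz")
```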
Accurate thickness measurement of graphene
Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T
2016-01-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1–1.3 nm to 0.1–0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
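One robust way to turn an AFM height image into a step-height estimate is to compare median heights of the substrate and flake regions rather than single line profiles. The sketch below uses synthetic heights, not AFM data; the 0.7 nm step is an assumed value inside the range the abstract reports:

```python
import numpy as np

# Sketch (synthetic heights, not measurements): film thickness estimated
# as the difference between median flake and median substrate heights,
# which suppresses per-pixel noise compared with a single line profile.
rng = np.random.default_rng(0)
substrate = rng.normal(0.0, 0.05, 5000)   # substrate heights, nm
flake = rng.normal(0.7, 0.05, 5000)       # assumed single-layer step, nm

thickness = np.median(flake) - np.median(substrate)
print(f"estimated thickness: {thickness:.2f} nm")
```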
Chen, Minjie
2009-01-01
The sheer amount of American children's and young adult literature, boasting an outpouring of 5,000 titles every year, often amazes a person who is new to this field. Not only is a large proportion of these books of high printing and binding quality, but, at a quick glance, among them is also a pleasant diversity of genre, format, targeted age…
Cognitive, perceptual and action-oriented representations of falling objects.
Zago, Myrka; Lacquaniti, Francesco
2005-01-01
We interact daily with moving objects. How accurate are our predictions about objects' motions? What sources of information do we use? These questions have received wide attention from a variety of different viewpoints. On one end of the spectrum are the ecological approaches assuming that all the information about the visual environment is present in the optic array, with no need to postulate conscious or unconscious representations. On the other end of the spectrum are the constructivist approaches assuming that a more or less accurate representation of the external world is built in the brain using explicit or implicit knowledge or memory besides sensory inputs. Representations can be related to naive physics or to context cue-heuristics or to the construction of internal copies of environmental invariants. We address the issue of prediction of objects' fall at different levels. Cognitive understanding and perceptual judgment of simple Newtonian dynamics can be surprisingly inaccurate. By contrast, motor interactions with falling objects are often very accurate. We argue that the pragmatic action-oriented behaviour and the perception-oriented behaviour may use different modes of operation and different levels of representation.
Congruence properties of induced representations
Mayer, Dieter; Momeni, Arash; Venkov, Alexei
In this paper we study representations of the projective modular group induced from the Hecke congruence group of level 4 with Selberg's character. We show that the well known congruence properties of Selberg's character are equivalent to the congruence properties of the induced representations...
Factorial representations of path groups
Albeverio, S.; Hoegh-Krohn, R.; Testard, D.; Vershik, A.
1983-11-01
We give the reduction of the energy representation of the group of mappings from I = [0,1], S¹, ℝ₊ or ℝ into a compact semisimple Lie group G. For G = SU(2) we prove the factoriality of the representation, which is of type III in the case I = ℝ
Using Integer Manipulatives: Representational Determinism
Bossé, Michael J.; Lynch-Davis, Kathleen; Adu-Gyamfi, Kwaku; Chandler, Kayla
2016-01-01
Teachers and students commonly use various concrete representations during mathematical instruction. These representations can be utilized to help students understand mathematical concepts and processes, increase flexibility of thinking, facilitate problem solving, and reduce anxiety while doing mathematics. Unfortunately, the manner in which some…
Knowledge Representation: A Brief Review.
Vickery, B. C.
1986-01-01
Reviews different structures and techniques of knowledge representation: structure of database records and files, data structures in computer programming, syntactic and semantic structure of natural language, knowledge representation in artificial intelligence, and models of human memory. A prototype expert system that makes use of some of these…
International agreements on commercial representation
Slanař, Jan
2014-01-01
The purpose of the thesis is to describe the possibilities for securing a company's position in the market through contracts for commercial representation, with a focus on identifying the legal and economic impact on a company that has contracted for exclusive representation.
Scientific Representation and Science Learning
Matta, Corrado
2014-01-01
In this article I examine three examples of philosophical theories of scientific representation with the aim of assessing which of these is a good candidate for a philosophical theory of scientific representation in science learning. The three candidate theories are Giere's intentional approach, Suárez's inferential approach and Lynch and…
A generalized wavelet extrema representation
Lu, Jian; Lades, M.
1995-10-01
The wavelet extrema representation originated by Stephane Mallat is a unique framework for low-level and intermediate-level (feature) processing. In this paper, we present a new form of wavelet extrema representation generalizing Mallat's original work. The generalized wavelet extrema representation is a feature-based multiscale representation. For a particular choice of wavelet, our scheme can be interpreted as representing a signal or image by its edges, and peaks and valleys at multiple scales. Such a representation is shown to be stable -- the original signal or image can be reconstructed with very good quality. It is further shown that a signal or image can be modeled as piecewise monotonic, with all turning points between monotonic segments given by the wavelet extrema. A new projection operator is introduced to enforce piecewise monotonicity of a signal in its reconstruction. This leads to an enhancement to previously developed algorithms in preventing artifacts in the reconstructed signal.
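The core idea, turning points of wavelet detail coefficients as a compact signal summary, can be sketched with first differences standing in for the wavelet (an assumption for brevity; Mallat's construction uses smoother wavelets and multiple scales):

```python
import numpy as np

# Sketch of the wavelet-extrema idea: a piecewise monotonic signal is
# summarised by the local modulus maxima of its detail coefficients.
# Haar-like first differences stand in for a proper wavelet here.
x = np.array([0, 0, 1, 3, 6, 6, 5, 3, 0, 0, 0], dtype=float)
d = np.diff(x)          # detail coefficients: edges give large |d|

# Modulus maxima: indices where |d| is a local maximum.
mag = np.abs(d)
extrema = [i for i in range(1, len(mag) - 1)
           if mag[i] > mag[i - 1] and mag[i] >= mag[i + 1]]
print(extrema)   # positions of the rising and falling edges
```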
Multiple representations in physics education
Duit, Reinders; Fischer, Hans E
2017-01-01
This volume is important because despite various external representations, such as analogies, metaphors, and visualizations being commonly used by physics teachers, educators and researchers, the notion of using the pedagogical functions of multiple representations to support teaching and learning is still a gap in physics education. The research presented in the three sections of the book is introduced by descriptions of various psychological theories that are applied in different ways for designing physics teaching and learning in classroom settings. The following chapters of the book illustrate teaching and learning with respect to applying specific physics multiple representations in different levels of the education system and in different physics topics using analogies and models, different modes, and in reasoning and representational competence. When multiple representations are used in physics for teaching, the expectation is that they should be successful. To ensure this is the case, the implementati...
Laser guided automated calibrating system for accurate bracket ...
It is widely recognized that accurate bracket placement is of critical importance in the efficient application of biomechanics and in realizing the full potential of a preadjusted edgewise appliance. Aim: The purpose of ... placement. Keywords: Hough transforms, Indirect bonding technique, Laser, Orthodontic bracket placement ...
Accurate conjugate gradient methods for families of shifted systems
Eshof, J. van den; Sleijpen, G.L.G.
We present an efficient and accurate variant of the conjugate gradient method for solving families of shifted systems. In particular we are interested in shifted systems that occur in Tikhonov regularization for inverse problems since these problems can be sensitive to roundoff errors. The
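For context, the baseline the abstract improves on can be sketched as a textbook conjugate gradient solve repeated once per shift. The variant in the abstract instead shares a single Krylov sequence across all shifts and adds roundoff-robust recurrences; this naive loop does neither, and the matrix below is a synthetic SPD example:

```python
import numpy as np

# Naive baseline (not the paper's method): plain CG run once per shift
# of the Tikhonov-style family (A + sigma*I) x = b.
def cg(A, b, tol=1e-10, maxiter=200):
    """Conjugate gradients for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        a = rs / (p @ Ap)
        x = x + a * p
        r = r - a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(1)
M = rng.normal(size=(20, 20))
A = M @ M.T + 20.0 * np.eye(20)   # synthetic symmetric positive definite
b = rng.normal(size=20)

for sigma in (0.0, 0.1, 1.0):
    xs = cg(A + sigma * np.eye(20), b)
    print(sigma, np.linalg.norm((A + sigma * np.eye(20)) @ xs - b))
```

The key observation exploited by shifted-system solvers is that the Krylov space of A + σI and b is independent of σ, so the matrix-vector products can be paid for once and reused for every shift.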
Islam and Media Representations
Mohamed Bensalah
2006-04-01
For the author of this article, the media's treatment of Islam has raised numerous polymorphous questions and debates. Reactivated by the great scares of current events, the issue, though an ancient one, calls many things into question. By way of introduction, the author tries to analyse the complex processes of elaboration and perception of the representations that have prevailed during the past century. In referring to the semantic decoding of the abundant colonial literature and iconography, the author strives to translate the extreme xenophobic tensions and the identity crystallisations associated with the current media orchestration of Islam, both in the West and the East. He then evokes the excesses of the media that are found at the origin of many amalgams wisely maintained between Islam, Islamism and Islamic terrorism, underscoring their duplicity and their willingness to put themselves, consciously, in service to deceivers and directors of awareness, who are very active at the heart of the politico-media sphere. After levelling a severe accusation against the harmful drifts of the media, especially in times of crisis and war, the author concludes by asserting that these tools of communication, once they are freed of their masks and invective apparatuses, can be re-appropriated by new words and by a true communication between peoples and cultures.
Chemical thermodynamic representation of
Lindemer, T.B.; Besmann, T.M.
1984-01-01
The entire data base for the dependence of the nonstoichiometry, x, on temperature and the chemical potential of oxygen (oxygen potential) was retrieved from the literature and represented. This data base was interpreted by least-squares analysis using equations derived from the classical thermodynamic theory for the solid solution of a solute in a solvent. For hyperstoichiometric oxide at oxygen potentials more positive than -266700 + 16.5T kJ/mol, the data were best represented by a [UO2]-[U3O7] solution. For O/U ratios above 2 and oxygen potentials below this boundary, a [UO2]-[U2O4.5] solution represented the data. The hypostoichiometric data were represented by a [UO2]-[U1/3] solution. The resulting equations represent the experimental ln(pO2)-ln(x) behavior and can be used in thermodynamic calculations to predict phase boundary compositions consistent with the literature. Collectively, the present analysis permits a mathematical representation of the behavior of the total data base
Model's sparse representation based on reduced mixed GMsFE basis methods
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn [Institute of Mathematics, Hunan University, Changsha 410082 (China); Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn [College of Mathematics and Econometrics, Hunan University, Changsha 410082 (China)
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for such elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches for solving the flow problem on a coarse grid while obtaining a velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than that of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation of the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in
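The proper orthogonal decomposition step that underlies such reduced-basis constructions can be sketched generically: snapshots of model outputs are compressed by an SVD, and a few left singular vectors span the reduced space. The snapshot matrix below is synthetic, not the paper's flow model:

```python
import numpy as np

# Sketch of POD basis construction from snapshots (synthetic data).
rng = np.random.default_rng(2)
modes = rng.normal(size=(100, 3))      # 3 hidden spatial modes
coeffs = rng.normal(size=(3, 50))      # 50 parameter samples
snapshots = modes @ coeffs             # 100-dof solutions, rank 3

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :3]                       # reduced POD basis (3 dofs vs 100)

# Project a new snapshot onto the basis and reconstruct it.
new = modes @ rng.normal(size=3)
rel_err = np.linalg.norm(new - basis @ (basis.T @ new)) / np.linalg.norm(new)
print(f"relative reconstruction error: {rel_err:.2e}")
```

Because the synthetic snapshots have exact rank 3, three POD modes reconstruct an unseen solution essentially to machine precision; for real multiscale models the singular values decay gradually and the truncation rank is a modeling choice.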
A structured representation for parallel algorithm design on multicomputers
Sun, Xian-He; Ni, L.M.
1991-01-01
Traditionally, parallel algorithms have been designed by brute-force methods and fine-tuned on each architecture to achieve high performance. Rather than studying the design case by case, a systematic approach is proposed. A notation is first developed. Using this notation, most of the frequently used scientific and engineering applications can be represented by simple formulas. The formulas constitute the structured representation of the corresponding applications. The structured representation is simple, adequate and easy to understand. It also contains sufficient information about uneven allocation and communication latency degradations. With the structured representation, applications can be compared, classified and partitioned. Some of the basic building blocks, called computation models, of frequently used applications are identified and studied. Most applications are combinations of some computation models. The structured representation relates general applications to computation models. Studying computation models leads to a guideline for efficient parallel algorithm design for general applications. 6 refs., 7 figs
Poincaré Embeddings for Learning Hierarchical Representations
CERN. Geneva
2018-01-01
Abstract: Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs. However, while complex symbolic datasets often exhibit a latent hierarchical structure, state-of-the-art methods typically do not account for this property. In this talk, I will discuss a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space -- or more precisely into an n-dimensional Poincaré ball. Due to the underlying hyperbolic geometry, this allows us to learn parsimonious representations of symbolic data by simultaneously capturing hierarchy and similarity. We introduce an efficient algorithm to learn the embeddings based on Riemannian optimization and show experimentally that Poincaré embeddings outperform Euclidean embeddings significantly on data with latent hierarchies, both in terms of representation capacity and in terms of generalization ability. …
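The geometry doing the work here is the Poincaré-ball distance, which grows without bound as points approach the unit sphere, leaving exponentially more "room" for the leaves of a hierarchy than Euclidean space does. A minimal sketch of the distance function (the two test points are arbitrary):

```python
import numpy as np

# Distance on the Poincaré ball:
# d(u, v) = arcosh(1 + 2*|u-v|^2 / ((1-|u|^2)(1-|v|^2)))
def poincare_distance(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / denom)

origin = [0.0, 0.0]
# Distances blow up near the boundary of the ball, which is where
# the many leaves of a latent hierarchy get embedded.
print(poincare_distance(origin, [0.5, 0.0]))   # = ln 3, about 1.10
print(poincare_distance(origin, [0.99, 0.0]))  # about 5.29
```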
On network representations of antennas inside resonating environments
F. Gronwald
2007-06-01
Full Text Available We discuss network representations of dipole antennas within electromagnetic cavities. It is pointed out that for a given configuration these representations are not unique. For an efficient evaluation a network representation should be chosen such that it involves as few network elements as possible. The field theoretical analogue of this circumstance is the possibility to express electromagnetic cavities' Green's functions by representations which exhibit different convergence properties. An explicit example of a dipole antenna within a rectangular cavity clarifies the corresponding interrelation between network theory and electromagnetic field theory. As an application, current spectra are calculated for the case that the antenna is nonlinearly loaded and subject to a two-tone excitation.
Javed, U.; Abdelkefi, A.
2017-07-01
One of the challenging tasks in the analytical modeling of galloping systems is the representation of the galloping force. In this study, the impacts of using different aerodynamic load representations on the dynamics of galloping oscillations are investigated. A distributed-parameter model is considered to determine the response of a galloping energy harvester subjected to a uniform wind speed. For the same experimental data and conditions, various polynomial expressions for the galloping force are proposed in order to determine the possible differences in the variations of the harvester's outputs as well as the type of instability. For the same experimental data of the galloping force, it is demonstrated that the choice of the coefficients of the polynomial approximation may result in a change in the type of bifurcation, the tip displacement and harvested power amplitudes. A parametric study is then performed to investigate the effects of the electrical load resistance on the harvester's performance when considering different possible representations of the aerodynamic force. It is indicated that for low and high values of the electrical resistance, there is an increase in the range of wind speeds where the response of the energy harvester is not affected. The performed analysis shows the importance of accurately representing the galloping force in order to efficiently design piezoelectric energy harvesters.
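The sensitivity to the polynomial representation can be reproduced with synthetic data: the same sampled force curve fitted at two different degrees can yield cubic coefficients of opposite sign, and the sign of the cubic term is what sets the bifurcation type. The force curve below is an assumed illustration, not the paper's measurements:

```python
import numpy as np

# Sketch (synthetic data): fitting the same galloping-force samples with
# different polynomial degrees flips the sign of the cubic coefficient.
alpha = np.linspace(-0.3, 0.3, 31)                      # angle of attack, rad
cy = 2.0 * alpha + 5.0 * alpha**3 - 300.0 * alpha**5    # "measured" coefficient

p3 = np.polynomial.polynomial.polyfit(alpha, cy, 3)     # low-order fit
p5 = np.polynomial.polynomial.polyfit(alpha, cy, 5)     # matches the data
print("cubic coefficient, degree-3 fit:", p3[3])        # negative
print("cubic coefficient, degree-5 fit:", p5[3])        # ~ +5
```

The degree-3 fit absorbs the quintic term into its cubic coefficient and drives it negative, while the degree-5 fit recovers the true positive cubic term, which is exactly the kind of representation-dependent change in predicted dynamics the abstract describes.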
A reduced-order representation of the Schrödinger equation
Ming-C. Cheng
2016-09-01
A reduced-order-based representation of the Schrödinger equation is investigated for electron wave functions in semiconductor nanostructures. In this representation, the Schrödinger equation is projected onto an eigenspace described by a small number of basis functions that are generated from the proper orthogonal decomposition (POD). The approach substantially reduces the numerical degrees of freedom (DOFs) needed to numerically solve the Schrödinger equation for the wave functions and eigenstate energies in a quantum structure and offers an accurate solution as detailed as the direct numerical simulation of the Schrödinger equation. To develop such an approach, numerical data accounting for parametric variations of the system are used to perform decomposition in order to generate the POD eigenvalues and eigenvectors for the system. This approach is applied to develop POD models for single and multiple quantum well structures. Errors resulting from the approach are examined in detail in association with the selected numerical DOFs of the POD model and the quality of data used for generation of the POD eigenvalues and basis functions. This study investigates the fundamental concepts of the POD approach to the Schrödinger equation and paves the way toward developing an efficient modeling methodology for large-scale multi-block simulation of quantum nanostructures.
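A toy version of the workflow: collect eigenvector snapshots of a 1-D finite-difference Hamiltonian at a few training parameter values, build a POD basis by SVD, then solve the projected eigenproblem at an unseen parameter. The well geometry, depths, and basis size below are all illustrative assumptions, not the paper's structures:

```python
import numpy as np

# Sketch of a POD reduced-order model for a 1-D Schrödinger eigenproblem
# (finite differences; all parameters are illustrative assumptions).
n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / h**2

def hamiltonian(depth):
    well = np.where(np.abs(x - 0.5) < 0.2, -depth, 0.0)  # square well
    return -0.5 * lap + np.diag(well)

# Snapshots: lowest 4 eigenvectors at a few training depths.
snaps = np.hstack([np.linalg.eigh(hamiltonian(d))[1][:, :4]
                   for d in (50.0, 100.0, 150.0)])
U, s, _ = np.linalg.svd(snaps, full_matrices=False)
basis = U[:, :6]                 # POD basis: 6 DOFs instead of 200

# Galerkin projection at an unseen depth; compare ground-state energies.
H = hamiltonian(120.0)
e_full = np.linalg.eigh(H)[0][0]
e_pod = np.linalg.eigh(basis.T @ H @ basis)[0][0]
print(e_full, e_pod)
```

The projected eigenvalue is a variational upper bound on the full one, and with a basis trained on nearby parameters it sits close to it at a small fraction of the cost.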
Organizing learning processes on risks by using the bow-tie representation
Chevreau, F.R. [Ecole des Mines de Paris, 06904 Sophia-Antipolis (France)]. E-mail: chevreau@cindy.ensmp.fr; Wybo, J.L. [Ecole des Mines de Paris, 06904 Sophia-Antipolis (France)]. E-mail: wybo@cindy.ensmp.fr; Cauchois, D. [Process Safety Department, Sanofi-Aventis, Site de Production de Vitry sur Seine, 9 Quai Jules Guesdes, 94400 Vitry sur Seine (France)]. E-mail: didier.cauchois@sanofi-aventis.com
2006-03-31
The Aramis method proposes a complete and efficient way to manage risk analysis by using the bow-tie representation. This paper shows how the bow-tie representation can also be appropriate for experience learning. It describes how a pharmaceutical production plant uses bow-ties for incident and accident analysis. Two levels of bow-ties are constructed: standard bow-ties concern generic risks of the plant whereas local bow-ties represent accident scenarios specific to each workplace. When incidents or accidents are analyzed, knowledge that is gained is added to existing local bow-ties. Regularly, local bow-ties that have been updated are compared to standard bow-ties in order to revise them. Knowledge on safety at the global and at local levels is hence as accurate as possible and memorized in a real time framework. As it relies on the communication between safety experts and local operators, this use of the bow-ties contributes therefore to organizational learning for safety.
Compact representations for the design of quantum logic
Niemann, Philipp
2017-01-01
This book discusses modern approaches and challenges of computer-aided design (CAD) of quantum circuits with a view to providing compact representations of quantum functionality. Focusing on the issue of quantum functionality, it presents Quantum Multiple-Valued Decision Diagrams (QMDDs) – a means of compactly and efficiently representing and manipulating quantum logic. For future quantum computers, going well beyond the size of present-day prototypes, the manual design of quantum circuits that realize a given (quantum) functionality on these devices is no longer an option. In order to keep up with the technological advances, methods need to be provided which, similar to the design and synthesis of conventional circuits, automatically generate a circuit description of the desired functionality. To this end, an efficient representation of the desired quantum functionality is of the essence. While straightforward representations are restricted due to their (exponentially) large matrix descriptions and other de...
Towards a multilevel cognitive probabilistic representation of space
Tapus, Adriana; Vasudevan, Shrihari; Siegwart, Roland
2005-03-01
This paper addresses the problem of perception and representation of space for a mobile agent. A probabilistic hierarchical framework is suggested as a solution to this problem. The method proposed is a combination of probabilistic belief with "Object Graph Models" (OGM). The world is viewed from a topological optic, in terms of objects and relationships between them. The hierarchical representation that we propose permits an efficient and reliable modeling of the information that the mobile agent would perceive from its environment. The integration of both navigational and interactional capabilities through efficient representation is also addressed. Experiments on a set of images taken from the real world that validate the approach are reported. This framework draws on the general understanding of human cognition and perception and contributes towards the overall efforts to build cognitive robot companions.
Number theory via Representation theory
2014-11-09
Number theory via Representation theory. Eknath Ghate. November 9, 2014. Eightieth Annual Meeting, Chennai, Indian Academy of Sciences. (A non-technical 20-minute talk intended for a general Academy audience.)
(Self)-representations on youtube
Simonsen, Thomas Mosebo
This paper examines forms of self-representation on YouTube with specific focus on Vlogs (Video blogs). The analytical scope of the paper is on how User-generated Content on YouTube initiates a certain kind of audiovisual representation and a particular interpretation of reality that can be distinguished within Vlogs. This will be analysed through selected case studies taken from a representative sample of empirically based observations of YouTube videos. The analysis includes a focus on how certain forms of representation can be identified as representations of the self (Turkle 1995, Scannell 1996, Walker 2005) and further how these forms must be comprehended within a context of technological constraints, institutional structures and social as well as economic practices on YouTube (Burgess and Green 2009, Van Dijck 2009). It is argued that these different contexts play a vital part...
Semantic Knowledge Representation (SKR) API
U.S. Department of Health & Human Services — The SKR Project was initiated at NLM in order to develop programs to provide usable semantic representation of biomedical free text by building on resources...
Solitons and theory of representations
Kulish, P.P.
1985-01-01
Problems in the theory of group representations that find application in constructing the quantum variant of the inverse scattering problem are discussed. The multicomponent nonlinear Schrödinger equation is considered as the main example of nonlinear evolution equations (NEE).
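As a concrete illustration of the kind of NEE mentioned here, the scalar focusing nonlinear Schrödinger equation i u_t + ½ u_xx + |u|² u = 0 admits the one-soliton solution u = sech(x) e^{it/2}, and can be integrated with a split-step Fourier sketch. Grid and step sizes below are illustrative, and this is the scalar rather than the multicomponent case.

```python
import numpy as np

# Split-step Fourier integration of the focusing NLS
#   i u_t + 0.5 u_xx + |u|^2 u = 0,
# whose sech(x) initial condition is a one-soliton solution.
N, L, dt, steps = 256, 40.0, 0.01, 200
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
u = (1.0 / np.cosh(x)).astype(complex)

norm0 = np.sum(np.abs(u) ** 2) * (L / N)
for _ in range(steps):
    u *= np.exp(1j * np.abs(u) ** 2 * dt)                         # nonlinear part (exact)
    u = np.fft.ifft(np.exp(-0.5j * k ** 2 * dt) * np.fft.fft(u))  # linear part (exact in Fourier space)

norm1 = np.sum(np.abs(u) ** 2) * (L / N)
assert abs(norm1 - norm0) < 1e-6       # both sub-steps are unitary, so the L2 norm is conserved
assert abs(np.max(np.abs(u)) - 1.0) < 0.05   # soliton shape persists
```

Both sub-flows are solved exactly (a pointwise phase and a Fourier-multiplier phase), so the splitting error comes only from their non-commutativity.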
Computer representation of molecular surfaces
Max, N.L.
1981-01-01
This review article surveys recent work on computer representation of molecular surfaces. Several different algorithms are discussed for producing vector or raster drawings of space-filling models formed as the union of spheres. Other smoother surfaces are also considered
LPS: a rule-based, schema-oriented knowledge representation system
Anzai, Y; Mitsuya, Y; Nakajima, S; Ura, S
1981-01-01
A new knowledge representation system called LPS is presented. The global control structure of LPS is rule-based, but the local representational structure is schema-oriented. The present version of LPS was designed to increase the understandability of representations while keeping time efficiency reasonable. Pattern matching through slot-networks and meta-actions, among the implemented facilities of LPS, are described in particular detail. 7 references.
Competition and Cooperation among Relational Memory Representations.
Schwarb, Hillary; Watson, Patrick D; Campbell, Kelsey; Shander, Christopher L; Monti, Jim M; Cooke, Gillian E; Wang, Jane X; Kramer, Arthur F; Cohen, Neal J
2015-01-01
Mnemonic processing engages multiple systems that cooperate and compete to support task performance. Exploring these systems' interaction requires memory tasks that produce rich data with multiple patterns of performance sensitive to different processing sub-components. Here we present a novel context-dependent relational memory paradigm designed to engage multiple learning and memory systems. In this task, participants learned unique face-room associations in two distinct contexts (i.e., different colored buildings). Faces occupied rooms as determined by an implicit gender-by-side rule structure (e.g., male faces on the left and female faces on the right) and all faces were seen in both contexts. In two experiments, we use behavioral and eye-tracking measures to investigate interactions among different memory representations in both younger and older adult populations; furthermore we link these representations to volumetric variations in hippocampus and ventromedial PFC among older adults. Overall, performance was very accurate. Successful face placement into a studied room systematically varied with hippocampal volume. Selecting the studied room in the wrong context was the most typical error. The proportion of these errors to correct responses positively correlated with ventromedial prefrontal volume. This novel task provides a powerful tool for investigating both the unique and interacting contributions of these systems in support of relational memory.
Paired structures in knowledge representation
Montero, J.; Bustince, H.; Franco de los Ríos, Camilo
2016-01-01
In this position paper we propose a consistent and unifying view of all those basic knowledge representation models that are based on the existence of two somehow opposite fuzzy concepts. A number of these basic models can be found in the fuzzy logic and multi-valued logic literature. Here ... of the relationships between several existing knowledge representation formalisms, providing a basis from which more expressive models can later be developed.
Functional representations of integrable hierarchies
Dimakis, Aristophanes; Mueller-Hoissen, Folkert
2006-01-01
We consider a general framework for integrable hierarchies in Lax form and derive certain universal equations from which 'functional representations' of particular hierarchies (such as KP, discrete KP, mKP, AKNS), i.e. formulations in terms of functional equations, are systematically and quite easily obtained. The formalism genuinely applies to hierarchies where the dependent variables live in a noncommutative (typically matrix) algebra. The obtained functional representations can be understood as 'noncommutative' analogues of 'Fay identities' for the KP hierarchy
From the Osterwalder canvas to an alternative business model representation
Verrue, Johan
2015-01-01
The Osterwalder business model canvas (BMC) is used by many entrepreneurs, managers, consultants and business schools. In our research we have investigated whether the canvas is a valid instrument for gaining an in-depth, accurate insight into business models. Therefore we have performed initial multiple case study research which concluded that the canvas does not generate valid business model (BM) representations. In our second multiple case study, we have constructed an alternative BM frame...
Can Measured Synergy Excitations Accurately Construct Unmeasured Muscle Excitations?
Bianco, Nicholas A; Patten, Carolynn; Fregly, Benjamin J
2018-01-01
Accurate prediction of muscle and joint contact forces during human movement could improve treatment planning for disorders such as osteoarthritis, stroke, Parkinson's disease, and cerebral palsy. Recent studies suggest that muscle synergies, a low-dimensional representation of a large set of muscle electromyographic (EMG) signals (henceforth called "muscle excitations"), may reduce the redundancy of muscle excitation solutions predicted by optimization methods. This study explores the feasibility of using muscle synergy information extracted from eight muscle EMG signals (henceforth called "included" muscle excitations) to accurately construct muscle excitations from up to 16 additional EMG signals (henceforth called "excluded" muscle excitations). Using treadmill walking data collected at multiple speeds from two subjects (one healthy, one poststroke), we performed muscle synergy analysis on all possible subsets of eight included muscle excitations and evaluated how well the calculated time-varying synergy excitations could construct the remaining excluded muscle excitations (henceforth called "synergy extrapolation"). We found that some, but not all, eight-muscle subsets yielded synergy excitations that achieved >90% extrapolation variance accounted for (VAF). Using the top 10% of subsets, we developed muscle selection heuristics to identify included muscle combinations whose synergy excitations achieved high extrapolation accuracy. For 3, 4, and 5 synergies, these heuristics yielded extrapolation VAF values approximately 5% lower than corresponding reconstruction VAF values for each associated eight-muscle subset. These results suggest that synergy excitations obtained from experimentally measured muscle excitations can accurately construct unmeasured muscle excitations, which could help limit muscle excitations predicted by muscle force optimizations.
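The "synergy extrapolation" step can be sketched on synthetic data: factor the included muscle excitations into a few time-varying synergy excitations, then reconstruct the excluded muscles by least squares on those synergies and score the fit with VAF. This sketch uses an SVD-based factorization for simplicity where the paper uses nonnegative matrix factorization; all signals and sizes are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "muscle excitations": 12 muscles driven by 3 shared
# time-varying synergy excitations (nonnegative mixtures).
T, n_syn, n_muscles = 500, 3, 12
t = np.linspace(0, 2 * np.pi, T)
H = np.vstack([np.abs(np.sin(t)), np.abs(np.sin(2 * t)), np.abs(np.cos(t))])  # synergies x time
W = rng.uniform(0.1, 1.0, (n_muscles, n_syn))                                 # muscle weights
E = W @ H                                                                     # muscles x time

included, excluded = E[:8], E[8:]

# Step 1: extract synergy excitations from the 8 included muscles
# (here via an SVD-based fit; the paper uses nonnegative factorization).
U, s, Vt = np.linalg.svd(included, full_matrices=False)
H_est = Vt[:n_syn]                                            # estimated synergy excitations

# Step 2: "synergy extrapolation" - fit the excluded muscles by least
# squares on those synergy excitations and report variance accounted for.
W_ex, *_ = np.linalg.lstsq(H_est.T, excluded.T, rcond=None)
recon = (H_est.T @ W_ex).T
vaf = 1 - np.sum((excluded - recon) ** 2) / np.sum(excluded ** 2)
assert vaf > 0.9     # noise-free synthetic data extrapolates almost perfectly
```

On real EMG the extrapolation VAF depends on which eight muscles are included, which is exactly the muscle-selection question the paper's heuristics address.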
Implicit time accurate simulation of unsteady flow
van Buuren, René; Kuerten, Hans; Geurts, Bernard J.
2001-03-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution was determined for comparison with the implicit second-order Crank-Nicolson scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time-accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to a highly complex structure of the basins of attraction of the iterative method.
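The dual time-stepping idea described here can be sketched on a scalar stiff model problem: each Crank-Nicolson step defines an implicit equation that is driven to convergence by pseudo-time iterations rather than a direct solve. Parameters are illustrative, and the simple fixed-point pseudo-time update stands in for the quasi-Newton/Gauss-Seidel machinery of the paper.

```python
import numpy as np

# Crank-Nicolson step for the stiff model problem y' = lam*y,
# with the implicit equation solved by pseudo-time iteration
# instead of a direct solve, as in the dual-time approach.
lam, dt, tmax = -50.0, 0.1, 1.0     # dt is 2.5x the explicit Euler stability limit (0.04)
y, tstep = 1.0, 0.0
while tstep < tmax - 1e-12:
    rhs = y + 0.5 * dt * lam * y    # known part of the CN equation
    ynew = y                         # pseudo-time iterate, initialized at the old state
    for _ in range(200):             # pseudo-time sub-iterations
        dtau = 0.01                  # pseudo-time step
        res = rhs + 0.5 * dt * lam * ynew - ynew   # residual of the implicit equation
        ynew += dtau * res           # march the residual to zero in pseudo-time
    y = ynew
    tstep += dt

exact = np.exp(lam * tmax)
assert abs(y - exact) < 0.05         # CN stays stable and accurate at this large dt
```

The outer step remains second-order accurate in dt once the inner pseudo-time residual is converged; under-converging the inner loop is one source of the convergence problems the abstract mentions.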
Integral representation in the hodograph plane of compressible flow
Hansen, Erik Bent; Hsiao, G.C.
2003-01-01
Compressible flow is considered in the hodograph plane. The linearity of the equation determining the stream function is exploited to derive a representation formula involving boundary data only, and a fundamental solution to the adjoint equation. For subsonic flow, an efficient algorithm...
Rodriguez, Jesse M.; Batzoglou, Serafim; Bercovici, Sivan
2013-01-01
, accurate and efficient detection of hidden relatedness becomes a challenge. To enable disease-mapping studies of increasingly large cohorts, a fast and accurate method to detect IBD segments is required. We present PARENTE, a novel method for detecting
Evolved Representation and Computational Creativity
Ashraf Fouad Hafez Ismail
2001-01-01
Advances in science and technology have influenced designing activity in architecture throughout its history. Observing the fundamental changes to architectural designing due to the substantial influence of the advent of the computing era, we now witness our design environment gradually changing from conventional pencil and paper to digital multi-media. Although designing is considered to be a unique human activity, there has always been a great dependency on design aid tools. One of the greatest aids to architectural design, amongst the many conventional and widely accepted computational tools, is the computer-aided object modeling and rendering tool, commonly known as a CAD package. But even though conventional modeling tools have provided designers with fast and precise object handling capabilities that were not available in the pencil-and-paper age, they normally show weaknesses and limitations in covering the whole design process. In any kind of design activity, the design worked on has to be represented in some way. For a human designer, designs are for example represented using models, drawings, or verbal descriptions. If a computer is used for design work, designs are usually represented by groups of pixels (paintbrush programs), lines and shapes (general-purpose CAD programs) or higher-level objects like ‘walls’ and ‘rooms’ (purpose-specific CAD programs). A human designer usually has a large number of representations available, and can use the representation most suitable for what he or she is working on. Humans can also introduce new representations and thereby represent objects that are not part of the world they experience with their sensory organs, for example vector representations of four- and five-dimensional objects. In design computing, on the other hand, the representation or representations used have to be explicitly defined. Many different representations have been suggested, often optimized for specific design domains
Towards a more efficient representation of imputation operators in TPOT
Garciarena, Unai; Mendiburu, Alexander; Santana, Roberto
2018-01-01
Automated Machine Learning encompasses a set of meta-algorithms intended to design and apply machine learning techniques (e.g., model selection, hyperparameter tuning, model assessment, etc.). TPOT, a software for optimizing machine learning pipelines based on genetic programming (GP), is a novel example of this kind of applications. Recently we have proposed a way to introduce imputation methods as part of TPOT. While our approach was able to deal with problems with missing data, it can prod...
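The imputation operators in question can be illustrated with a minimal, library-free sketch: mean-impute the missing entries, then fit the downstream model on the completed matrix. TPOT itself composes scikit-learn operators via genetic programming; `mean_impute` below is a hypothetical stand-in for such an operator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal sketch of a mean-imputation operator of the kind the paper
# adds to TPOT pipelines (TPOT itself composes sklearn operators via GP).
def mean_impute(X):
    """Replace NaNs with the per-column mean of the observed values."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    nan_r, nan_c = np.where(np.isnan(X))
    X[nan_r, nan_c] = col_means[nan_c]
    return X

X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5])
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.1] = np.nan   # knock out ~10% of entries

X_imp = mean_impute(X_missing)
assert not np.isnan(X_imp).any()
# Downstream model (least-squares regression) still recovers the signal.
coef, *_ = np.linalg.lstsq(X_imp, y, rcond=None)
assert np.allclose(coef, [1.0, -2.0, 0.5], atol=0.5)
```

In a GP-evolved pipeline, the choice of imputer (mean, median, model-based, etc.) becomes just another searchable operator upstream of the estimator.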
On Behavioral Equivalence of Rational Representations
Trentelman, Harry L.; Willems, JC; Hara, S; Ohta, Y; Fujioka, H
2010-01-01
This article deals with the equivalence of representations of behaviors of linear differential systems. In general, the behavior of a given linear differential system has many different representations. In this paper we restrict ourselves to kernel representations and image representations. Two kernel
On Representation in Information Theory
Joseph E. Brenner
2011-09-01
Semiotics is widely applied in theories of information. Following the original triadic characterization of reality by Peirce, the linguistic processes involved in information—production, transmission, reception, and understanding—would all appear to be interpretable in terms of signs and their relations to their objects. Perhaps the most important of these relations is that of representation: one entity standing for, or representing, some other. For example, an index—one of the three major kinds of signs—is said to represent something by being directly related to its object. My position, however, is that the concept of symbolic representations having such roles in information, as intermediaries, is fraught with the same difficulties as in representational theories of mind. I have proposed an extension of logic to complex real phenomena, including mind and information (Logic in Reality; LIR), most recently at the 4th International Conference on the Foundations of Information Science (Beijing, August 2010). LIR provides explanations for the evolution of complex processes, including information, that do not require any entities other than the processes themselves. In this paper, I discuss the limitations of the standard relation of representation. I argue that more realistic pictures of informational systems can be provided by reference to information as an energetic process, following the categorial ontology of LIR. This approach enables naïve, anti-realist conceptions of anti-representationalism to be avoided, and enables an approach to both information and meaning in the same novel logical framework.
Social representations of female orgasm.
Lavie-Ajayi, Maya; Joffe, Hélène
2009-01-01
This study examines women's social representations of female orgasm. Fifty semi-structured interviews were conducted with British women. The data were thematically analysed and compared with the content of female orgasm-related writing in two women's magazines over a 30-year period. The results indicate that orgasm is deemed the goal of sex with emphasis on its physiological dimension. However, the women and the magazines graft onto this scientifically driven representation the importance of relational and emotive aspects of orgasm. For the women, particularly those who experience themselves as having problems with orgasm, the scientifically driven representations induce feelings of failure, but are also resisted. The findings highlight the role played by the social context in women's subjective experience of their sexual health.
An introduction to quiver representations
Derksen, Harm
2017-01-01
This book is an introduction to the representation theory of quivers and finite dimensional algebras. It gives a thorough and modern treatment of the algebraic approach based on Auslander-Reiten theory as well as the approach based on geometric invariant theory. The material in the opening chapters is developed starting slowly with topics such as homological algebra, Morita equivalence, and Gabriel's theorem. Next, the book presents Auslander-Reiten theory, including almost split sequences and the Auslander-Reiten transform, and gives a proof of Kac's generalization of Gabriel's theorem. Once this basic material is established, the book goes on with developing the geometric invariant theory of quiver representations. The book features the exposition of the saturation theorem for semi-invariants of quiver representations and its application to Littlewood-Richardson coefficients. In the final chapters, the book exposes tilting modules, exceptional sequences and a connection to cluster categories. The book is su...
Preon representations and composite models
Kang, Kyungsik
1982-01-01
This is a brief report on the preon models investigated by In-Gyu Koh, A. N. Schellekens and myself, based on complex, anomaly-free and asymptotically free representations of SU(3) to SU(8), SO(4N+2) and E6 with no more than two different preons. A complete list of the representations that are complex, anomaly-free and asymptotically free has been given by E. Eichten, I.-G. Koh and myself. The assumptions made about the ground-state composites and the role of Fermi statistics in determining the metaflavor wave functions are discussed in some detail. We explain the method of decomposition of tensor products with definite permutation properties which has been developed for this purpose by I.-G. Koh, A. N. Schellekens and myself. An example based on an anomaly-free representation of the confining metacolor group SU(5) is discussed
Representational constraints on children's suggestibility.
Ceci, Stephen J; Papierno, Paul B; Kulkofsky, Sarah
2007-06-01
In a multistage experiment, twelve 4- and 9-year-old children participated in a triad rating task. Their ratings were mapped with multidimensional scaling, from which Euclidean distances were computed to operationalize the semantic distance between items in target pairs. These children and age-mates then participated in an experiment that employed these target pairs in a story, which was followed by a misinformation manipulation. Analyses linked individual and developmental differences in suggestibility to children's representations of the target items. Semantic proximity was a strong predictor of differences in suggestibility: The closer a suggested distractor was to the original item's representation, the greater was the distractor's suggestive influence. The triad participants' semantic proximity subsequently served as the basis for correctly predicting memory performance in the larger group. Semantic proximity enabled a priori counterintuitive predictions of reverse age-related trends to be confirmed whenever the distance between representations of items in a target pair was greater for younger than for older children.
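The analysis pipeline, triad ratings → multidimensional scaling → Euclidean distances, can be sketched with classical (Torgerson) MDS; the toy dissimilarities below stand in for the children's triad ratings.

```python
import numpy as np

# Classical (Torgerson) multidimensional scaling: embed items from a
# dissimilarity matrix, then read off Euclidean distances between the
# embedded points - the "semantic distance" used to predict suggestibility.
def classical_mds(D, dim=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]           # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Four items on a line: the dissimilarities are exact Euclidean
# distances, so MDS recovers them up to rotation/reflection.
pts = np.array([[0.0], [1.0], [3.0], [6.0]])
D = np.abs(pts - pts.T)
X = classical_mds(D, dim=1)
recovered = np.abs(X - X.T)
assert np.allclose(recovered, D, atol=1e-8)
```

With real rating data the dissimilarities are only approximately Euclidean, and the recovered inter-item distances serve as the semantic-proximity predictor.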
Digital models for architectonical representation
Stefano Brusaporci
2011-12-01
Digital instruments and technologies enrich the opportunities of architectural representation and communication. Computer graphics is organized according to the two phases of visualization and construction, that is, modeling and rendering, structuring the dichotomy of software technologies. Visualization modalities give different kinds of representations of the same 3D model, and instruments produce a separation between drawing and image creation. Reverse modeling can be related to a synthesis process; ‘direct modeling’ follows an analytic procedure. The difference between interactive and non-interactive applications is connected to the possibilities offered by informatics instruments, and relates to modeling and rendering. At the same time, the word ‘model’ describes different phenomena (i.e., files): the mathematical model of the building and of the scene, the raster representation, and the post-processing model. All these correlated models constitute the architectonical interpretative model, that is, a simulation of reality made in order to improve knowledge.
Asymptotical representation of discrete groups
Mishchenko, A.S.; Mohammad, N.
1995-08-01
If one has a unitary representation ρ: π₁(M) → U(H) of the fundamental group π₁(M) of the manifold M then one can do many useful things: 1. Construct a natural vector bundle over M; 2. Construct the cohomology groups with respect to the local system of coefficients; 3. Construct the signature of the manifold M with respect to the local system of coefficients; and others. In particular, one can write the Hirzebruch formula which compares the signature with the characteristic classes of the manifold M and, further, based on this, find the homotopy-invariant characteristic classes (i.e. the Novikov conjecture). Taking into account that the family of known representations is not sufficiently large, it would be interesting to extend this family to some larger one. Using the ideas of A. Connes, M. Gromov and H. Moscovici, a proper notion of asymptotical representation is defined. (author). 7 refs
Vivid Representations and Their Effects
Kengo Miyazono
2018-04-01
Sinhababu's Humean Nature contains many interesting and important ideas, but in this short commentary I focus on the idea of vivid representations. Sinhababu inherits his idea of vivid representations from Hume's discussions, in particular the discussion of calm and violent passions. I am sympathetic to the idea of developing Hume's insight, which has been largely neglected by philosophers. I believe that Sinhababu and Hume are on the right track. What I do in this short commentary is raise some questions about the details. The aim of asking these questions is not to challenge Sinhababu's proposal (at least his main ideas), but rather to point at some interesting issues arising out of his proposal. The questions are about (1) the nature of vividness, (2) the effects of vivid representations, and (3) Sinhababu's account of alief cases.
Multigrid time-accurate integration of Navier-Stokes equations
Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.
1993-01-01
Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.
(Self)-representations on youtube
Simonsen, Thomas Mosebo
2011-01-01
This paper examines forms of self-representation on YouTube with specific focus on Vlogs (Video blogs). The analytical scope of the paper is on how User-generated Content on YouTube initiates a certain kind of audiovisual representation and a particular interpretation of reality that can be distinguished within Vlogs. This will be analysed through selected case studies taken from a representative sample of empirically based observations of YouTube videos. The analysis includes a focus on how ...
Concepts, ontologies, and knowledge representation
Jakus, Grega; Omerovic, Sanida; Tomažic, Sašo
2013-01-01
Recording knowledge in a common framework that would make it possible to seamlessly share global knowledge remains an important challenge for researchers. This brief examines several ideas about the representation of knowledge addressing this challenge. It follows the widespread agreement that uniform knowledge representation should be achievable by using ontologies populated with concepts. A separate chapter is dedicated to each of the three introduced topics, following a uniform outline: definition, organization, and use. This brief is intended for those who want to get to know
Thinking together with material representations
Stege Bjørndahl, Johanne; Fusaroli, Riccardo; Østergaard, Svend
2014-01-01
How do material representations such as models, diagrams and drawings come to shape and aid collective, epistemic processes? This study investigated how groups of participants spontaneously recruited material objects (in this case LEGO blocks) to support collective creative processes in the context of an experiment. Qualitative micro-analyses of the group interactions motivate a taxonomy of different roles that the material representations play in the joint epistemic processes: illustration, elaboration and exploration. Firstly, the LEGO blocks were used to illustrate already well-formed ideas in support ... top-down and bottom-up cognitive processes and division of cognitive labor.
Tian, Shu; Zhang, Ye; Yan, Yimin; Su, Nan; Zhang, Junping
2016-09-01
Latent low-rank representation (LatLRR) has attracted considerable attention in the field of remote sensing image segmentation, due to its effectiveness in exploring the multiple subspace structures of data. However, the increasingly heterogeneous texture information in high spatial resolution remote sensing images leads to more severe interference between pixels in a local neighborhood, and LatLRR fails to capture the local complex structure information. Therefore, we present a local sparse structure constrained latent low-rank representation (LSSLatLRR) segmentation method, which explicitly imposes the local sparse structure constraint on LatLRR to capture the intrinsic local structure in manifold-structure feature subspaces. The whole segmentation framework can be viewed as two stages in cascade. In the first stage, we use the local histogram transform to extract texture local histogram features (LHOG) at each pixel, which can efficiently capture complex micro-texture patterns. In the second stage, a local sparse structure (LSS) formulation is established on the LHOG features, which aims to preserve the local intrinsic structure and enhance the relationship between pixels having similar local characteristics. Meanwhile, by integrating the LSS and the LatLRR, we can efficiently capture the local sparse and low-rank structure in the mixture of feature subspaces, and we adopt the subspace segmentation method to improve the segmentation accuracy. Experimental results on remote sensing images with different spatial resolutions show that, compared with three state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.
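The first-stage idea of a per-pixel local histogram texture feature can be sketched as follows (a plain gray-level histogram rather than the paper's LHOG gradient variant; the window and bin counts are illustrative):

```python
import numpy as np

# Sketch of a local-histogram texture feature (in the spirit of the
# paper's LHOG step): for each pixel, histogram the gray levels in a
# small window, giving a per-pixel texture descriptor.
def local_histograms(img, win=3, bins=4):
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    H, W = img.shape
    feats = np.zeros((H, W, bins))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + win, j:j + win]
            feats[i, j], _ = np.histogram(patch, bins=bins, range=(0, 1))
    return feats / (win * win)            # normalize to a distribution

# Two-texture toy image: flat left half, checkered right half.
img = np.zeros((8, 8))
img[:, 4:] = np.indices((8, 4)).sum(axis=0) % 2
feats = local_histograms(img)
# Pixels in the two textures get clearly different histograms.
assert np.allclose(feats[4, 1], [1.0, 0, 0, 0])   # flat region: all mass in bin 0
assert not np.allclose(feats[4, 1], feats[4, 6])  # checkered region differs
```

Pixels with similar local histograms then become candidates for the same subspace in the second, LSS-constrained stage.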
Large margin image set representation and classification
Wang, Jim Jing-Yan
2014-07-06
In this paper, we propose a novel image set representation and classification method that maximizes the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from a different class and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.
Large margin image set representation and classification
Wang, Jim Jing-Yan; Alzahrani, Majed A.; Gao, Xin
2014-01-01
In this paper, we propose a novel image set representation and classification method that maximizes the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from a different class and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.
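The margin defined above can be sketched numerically: model each stored set by the affine hull of its samples, and compare a query's distance to a different-class hull against its distance to a same-class hull. `dist_to_affine_hull` is an invented helper and the data are synthetic.

```python
import numpy as np

# Sketch of the margin of an image set: distance to the nearest set of a
# different class minus distance to the nearest set of the same class,
# with each stored set modeled by its affine hull.
def dist_to_affine_hull(x, S):
    """Distance from vector x to the affine hull of the columns of S."""
    mu = S.mean(axis=1)
    U = S - mu[:, None]                       # directions spanning the hull
    coef, *_ = np.linalg.lstsq(U, x - mu, rcond=None)
    return np.linalg.norm(x - mu - U @ coef)  # residual orthogonal to the hull

rng = np.random.default_rng(2)
same_class = rng.normal(0, 1, (10, 5))        # 5 "images" as 10-dim features
diff_class = rng.normal(5, 1, (10, 5))        # a far-away class
x = same_class.mean(axis=1) + 0.01 * rng.normal(size=10)  # query near its own class

margin = dist_to_affine_hull(x, diff_class) - dist_to_affine_hull(x, same_class)
assert margin > 0    # the query is assigned to the class giving the larger margin
```

The paper additionally learns the representation so that these margins are maximized over the training sets, rather than using raw features as here.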
Towards Web-based representation and processing of health information
Gao, S.; Mioc, Darka; Yi, X.L.
2009-01-01
Background: There is great concern within health surveillance on how to grapple with environmental degradation, rapid urbanization, population mobility and growth. The Internet has emerged as an efficient way to share health information, enabling users to access and understand data. For the representation of health information through Web-mapping applications, there still lacks a standard format to accommodate all fixed (such as location) and variable (such as age, gender, health outcome, etc.) indicators. Furthermore, net-centric computing has not been ... facilitated the online processing, mapping and sharing of health information, with the use of HERXML and Open Geospatial Consortium (OGC) services. It brought a new solution in better health data representation and initial exploration of the Web-based processing of health information. Conclusion: The designed ...
Qualitative Knowledge Representations for Intelligent Nuclear Power Plants
Cha, Kyoungho; Huh, Young H.
1993-01-01
Qualitative Physics (QP) has been systematically applied to the qualitative modeling of physical systems for the past two decades. Designing intelligent systems for NPPs requires an efficient representation of qualitative knowledge about the behavior and structure of an NPP or its components. A novel representation of qualitative knowledge also enables intelligent systems to derive meaningful conclusions from incomplete or uncertain knowledge of plant behavior. We look mainly into representative QP works on nuclear applications and the representation of qualitative knowledge for the diagnostic model, the qualitative simulation of a mental model of the NPP operator, and the qualitative interpretation of measured raw data from the NPP. We present the challenging areas for QP applications in the nuclear industry. QP technology will make NPPs more intelligent
Sampling-free Bayesian inversion with adaptive hierarchical tensor representations
Eigel, Martin; Marschall, Manuel; Schneider, Reinhold
2018-03-01
A sampling-free approach to Bayesian inversion with an explicit polynomial representation of the parameter densities is developed, based on an affine-parametric representation of a linear forward model. This becomes feasible due to the complete treatment in function spaces, which requires an efficient model reduction technique for numerical computations. The advocated perspective yields the crucial benefit that error bounds can be derived for all occurring approximations, leading to provable convergence subject to the discretization parameters. Moreover, it enables a fully adaptive a posteriori control with automatic problem-dependent adjustments of the employed discretizations. The method is discussed in the context of modern hierarchical tensor representations, which are used for the evaluation of a random PDE (the forward model) and the subsequent high-dimensional quadrature of the log-likelihood, alleviating the ‘curse of dimensionality’. Numerical experiments demonstrate the performance and confirm the theoretical results.
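The sampling-free principle, evaluating and integrating the posterior density deterministically instead of by MCMC, can be sketched in one dimension, where a quadrature grid replaces the hierarchical tensor machinery (all model numbers are illustrative):

```python
import numpy as np

# Sampling-free Bayesian inversion in one dimension: with a linear
# forward model G(m) = a*m and Gaussian prior/noise, evaluate the
# unnormalized posterior on a deterministic grid (no MCMC) and
# integrate it directly.
a, sigma_obs, y_obs = 2.0, 0.5, 3.0
m0, sigma_pr = 0.0, 1.0

m = np.linspace(-5, 5, 2001)               # quadrature grid
dm = m[1] - m[0]
log_post = (-0.5 * ((y_obs - a * m) / sigma_obs) ** 2
            - 0.5 * ((m - m0) / sigma_pr) ** 2)
w = np.exp(log_post - log_post.max())      # shift for numerical stability
post = w / (w.sum() * dm)                  # normalized posterior density
mean = (m * post).sum() * dm               # posterior mean by quadrature

# Conjugate Gaussian closed form for comparison.
var_exact = 1.0 / (a**2 / sigma_obs**2 + 1.0 / sigma_pr**2)
mean_exact = var_exact * (a * y_obs / sigma_obs**2 + m0 / sigma_pr**2)
assert abs(mean - mean_exact) < 1e-6
```

In higher dimensions a plain grid is exactly what becomes infeasible, which is where the paper's hierarchical tensor representation of the log-likelihood comes in.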
Invariant recognition drives neural representations of action sequences.
Andrea Tacchetti
2017-12-01
Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying the representations of action sequences constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, convolutional neural networks (CNNs), that achieve human-level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences.
Accurate modeling and evaluation of microstructures in complex materials
Tahmasebi, Pejman
2018-02-01
Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always feasible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take one single (or a set of) I(s) and stochastically produce several similar models of the given disordered material. The method is based on the successive calculation of a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized for removing unrealistic structures. For Is with highly connected microstructure and long-range features, a distance transform function is applied, which results in a new I that is more informative. Reproduction of the I is also considered through a histogram-matching approach in an iterative framework; such an iterative algorithm avoids the reproduction of unrealistic structures. Furthermore, a multiscale approach, based on a pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, nonstationary systems, those for which the distribution of data varies spatially, are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is, and the similarities are quantified using various correlation functions.
Representations of quantum bicrossproduct algebras
Arratia, Oscar; Olmo, Mariano A del
2002-01-01
We present a method to construct induced representations of quantum algebras which have a bicrossproduct structure. We apply this procedure to some quantum kinematical algebras in (1+1) dimensions with this kind of structure: the null-plane quantum Poincaré algebra, the non-standard quantum Galilei algebra, and the quantum κ-Galilei algebra.
Reusable Lexical Representations for Idioms
Odijk, J.E.J.M.
2004-01-01
In this paper I introduce (1) a technically simple and highly theory-independent way for lexically representing flexible idiomatic expressions, and (2) a procedure to incorporate these lexical representations in a wide variety of NLP systems. The method is based on Structural EQuivalence Classes
Symmetric group representations and Z
Adve, Anshul; Yong, Alexander
2017-01-01
We discuss implications of the following statement about the representation theory of symmetric groups: every integer appears infinitely often as an irreducible character evaluation, and every nonnegative integer appears infinitely often as a Littlewood-Richardson coefficient and as a Kronecker coefficient.
Guideline Knowledge Representation Model (GLIKREM)
Buchtela, David; Peleška, Jan; Veselý, Arnošt; Zvárová, Jana; Zvolský, Miroslav
2008-01-01
Roč. 4, č. 1 (2008), s. 17-23 ISSN 1801-5603 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : knowledge representation * GLIF model * guidelines Subject RIV: IN - Informatics, Computer Science http://www.ejbi.org/articles/200812/34/1.html
Conceptual Knowledge Representation and Reasoning
Oldager, Steen Nikolaj
2003-01-01
One of the main areas in knowledge representation and logic-based artificial intelligence concerns logical formalisms that can be used for representing and reasoning with concepts. For almost 30 years, since research in this area began, the issue of intensionality has had a special status...
Octonionic matrix representation and electromagnetism
Chanyal, B. C. [Kumaun University, S. S. J. Campus, Almora (India)
2014-12-15
Keeping in mind the important role of octonion algebra, we have obtained the electromagnetic field equations of dyons with an octonionic 8 x 8 matrix representation. In this paper, we consider the eight - dimensional octonionic space as a combination of two (external and internal) four-dimensional spaces for the existence of magnetic monopoles (dyons) in a higher-dimensional formalism. As such, we describe the octonion wave equations in terms of eight components from the 8 x 8 matrix representation. The octonion forms of the generalized potential, fields and current source of dyons in terms of 8 x 8 matrix are discussed in a consistent manner. Thus, we have obtained the generalized Dirac-Maxwell equations of dyons from an 8x8 matrix representation of the octonion wave equations in a compact and consistent manner. The generalized Dirac-Maxwell equations are fully symmetric Maxwell equations and allow for the possibility of magnetic charges and currents, analogous to electric charges and currents. Accordingly, we have obtained the octonionic Dirac wave equations in an external field from the matrix representation of the octonion-valued potentials of dyons.
Realizations of the canonical representation
Traditionally, the canonical representation is realized on the Hilbert space ... Fix a decomposition R2n = Rn × Rn ... to an orthonormal basis {ψ1, ψ2, ...} ... [7] Vemuri M K, A non-commutative Sobolev inequality and its application to spectral ...
Non-Hermitian Heisenberg representation
Znojil, Miloslav
2015-01-01
Roč. 379, č. 36 (2015), s. 2013-2017 ISSN 0375-9601 Institutional support: RVO:61389005 Keywords : quantum mechanics * Non-Hermitian representation of observables * Generalized Heisenberg equations Subject RIV: BE - Theoretical Physics Impact factor: 1.677, year: 2015
Adaptive representations for reinforcement learning
Whiteson, S.
2010-01-01
This book presents new algorithms for reinforcement learning, a form of machine learning in which an autonomous agent seeks a control policy for a sequential decision task. Since current methods typically rely on manually designed solution representations, agents that automatically adapt their own
Visual representation of spatiotemporal structure
Schill, Kerstin; Zetzsche, Christoph; Brauer, Wilfried; Eisenkolb, A.; Musto, A.
1998-07-01
The processing and representation of motion information is addressed from an integrated perspective comprising low- level signal processing properties as well as higher-level cognitive aspects. For the low-level processing of motion information we argue that a fundamental requirement is the existence of a spatio-temporal memory. Its key feature, the provision of an orthogonal relation between external time and its internal representation, is achieved by a mapping of temporal structure into a locally distributed activity distribution accessible in parallel by higher-level processing stages. This leads to a reinterpretation of the classical concept of `iconic memory' and resolves inconsistencies on ultra-short-time processing and visual masking. The spatial-temporal memory is further investigated by experiments on the perception of spatio-temporal patterns. Results on the direction discrimination of motion paths provide evidence that information about direction and location are not processed and represented independent of each other. This suggests a unified representation on an early level, in the sense that motion information is internally available in form of a spatio-temporal compound. For the higher-level representation we have developed a formal framework for the qualitative description of courses of motion that may occur with moving objects.
Representational Momentum in Older Adults
Piotrowski, Andrea S.; Jakobson, Lorna S.
2011-01-01
Humans have a tendency to perceive motion even in static images that simply "imply" movement. This tendency is so strong that our memory for actions depicted in static images is distorted in the direction of implied motion--a phenomenon known as representational momentum (RM). In the present study, we created an RM display depicting a pattern of…
The representation of inherent properties.
Prasada, Sandeep
2014-10-01
Research on the representation of generic knowledge suggests that inherent properties can have either a principled or a causal connection to a kind. The type of connection determines whether the outcome of the storytelling process will include intuitions of inevitability and a normative dimension and whether it will ground causal explanations.
Representation and redistribution in federations.
Dragu, Tiberiu; Rodden, Jonathan
2011-05-24
Many of the world's most populous democracies are political unions composed of states or provinces that are unequally represented in the national legislature. Scattered empirical studies, most of them focusing on the United States, have discovered that overrepresented states appear to receive larger shares of the national budget. Although this relationship is typically attributed to bargaining advantages associated with greater legislative representation, an important threat to empirical identification stems from the fact that the representation scheme was chosen by the provinces. Thus, it is possible that representation and fiscal transfers are both determined by other characteristics of the provinces in a specific country. To obtain an improved estimate of the relationship between representation and redistribution, we collect and analyze provincial-level data from nine federations over several decades, taking advantage of the historical process through which federations formed and expanded. Controlling for a variety of country- and province-level factors and using a variety of estimation techniques, we show that overrepresented provinces in political unions around the world are rather dramatically favored in the distribution of resources.
Supernovae Discovery Efficiency
John, Colin
2018-01-01
Abstract: We present supernova (SN) search efficiency measurements for recent Hubble Space Telescope (HST) surveys. Efficiency is a key component of any search and an important parameter as a correction factor for SN rates. To achieve an accurate value for efficiency, many supernovae need to be discoverable in surveys. This cannot be achieved from real SNe alone, due to their scarcity, so fake SNe are planted. These fake supernovae, constructed with realism in mind, yield an understanding of efficiency as a function of position relative to other celestial objects and of brightness. To improve realism, we built a more accurate model of supernovae using a point-spread function. A further improvement to realism is planting these objects close to galaxies and across a range of brightness, magnitude, local galactic brightness, and redshift. Once these are planted, a very accurate SN is visible and discoverable by the searcher. It is very important to identify the factors that affect this discovery efficiency, since exploring them yields a more accurate correction factor. Further inquiries into efficiency give us a better understanding of image processing, searching techniques, and survey strategies, and result in an overall higher likelihood of finding these events in future surveys with the Hubble, James Webb, and WFIRST telescopes. Once efficiency is measured and refined across many surveys, it factors into measurements of SN rates versus redshift. By comparing SN rates versus redshift against the star formation rate, we can test models of how long star systems take from inception to explosion (the delay time distribution). This delay time distribution is compared to SN progenitor models to get an accurate idea of what these stars were like before their deaths.
Students' Development of Representational Competence Through the Sense of Touch
Magana, Alejandra J.; Balachandran, Sadhana
2017-06-01
Electromagnetism is an umbrella encapsulating several different concepts, such as electric current, electric fields and forces, and magnetic fields and forces, among other topics. However, a number of past studies have highlighted students' poor conceptual understanding of electromagnetism concepts even after instruction. This study aims to identify novel forms of "hands-on" instruction that can result in representational competence and conceptual gain. Specifically, it aimed to determine whether the use of visuohaptic simulations can have an effect on student representations of electromagnetism-related concepts. The guiding question is: How do visuohaptic simulations influence undergraduate students' representations of electric forces? Participants included nine undergraduate students from science, technology, or engineering backgrounds who took part in a think-aloud procedure while interacting with a visuohaptic simulation. The think-aloud procedure was divided into three stages: a prediction stage, a minimally visual haptic stage, and a visually enhanced haptic stage. The results of this study suggest that students accurately characterized and represented the forces felt around point, line, and ring charges in either the prediction stage, the minimally visual haptic stage, or the visually enhanced haptic stage. Also, some students accurately depicted the three-dimensional nature of the field for each configuration in the two stages that included a tactile mode, where the point charge was the most challenging one.
A new image representation for compact and secure communication
Prasad, Lakshman; Skourikhine, A.N.
2004-01-01
In many areas of nuclear materials management there is a need for communication, archival, and retrieval of annotated image data between heterogeneous platforms and devices to effectively implement safety, security, and safeguards of nuclear materials. Current image formats such as JPEG are not ideally suited in such scenarios as they are not scalable to different viewing formats, and do not provide a high-level representation of images that facilitate automatic object/change detection or annotation. The new Scalable Vector Graphics (SVG) open standard for representing graphical information, recommended by the World Wide Web Consortium (W3C) is designed to address issues of image scalability, portability, and annotation. However, until now there has been no viable technology to efficiently field images of high visual quality under this standard. Recently, LANL has developed a vectorized image representation that is compatible with the SVG standard and preserves visual quality. This is based on a new geometric framework for characterizing complex features in real-world imagery that incorporates perceptual principles of processing visual information known from cognitive psychology and vision science, to obtain a polygonal image representation of high fidelity. This representation can take advantage of all textual compression and encryption routines unavailable to other image formats. Moreover, this vectorized image representation can be exploited to facilitate automated object recognition that can reduce time required for data review. The objects/features of interest in these vectorized images can be annotated via animated graphics to facilitate quick and easy display and comprehension of processed image content.
Advanced Time-Frequency Representation in Voice Signal Analysis
Dariusz Mika
2018-03-01
The most commonly used time-frequency representation in voice signal analysis is the spectrogram. This representation belongs to Cohen's class, the class of time-frequency energy distributions. From the standpoint of resolution, the spectrogram is not optimal: representations in Cohen's class are known that have better resolution properties. All of them are created by smoothing the Wigner-Ville distribution (WVD), which has the best resolution but also the largest harmful interference terms. The smoothing function used determines the compromise between resolution and suppression of the interference terms. Another class of time-frequency energy distributions is the affine class. From the point of view of readability of the analysis, the best properties are obtained by the redistribution of energy produced by a general methodology known as reassignment, which can be applied to any time-frequency representation. Reassigned distributions efficiently combine a reduction of the interference terms, provided by a well-adapted smoothing kernel, with an increased concentration of the signal components.
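The spectrogram discussed above, the squared magnitude of a short-time Fourier transform, can be illustrated with a minimal sketch. The test signal and parameter choices (sampling rate, window length) are assumptions for the example, not values from the paper:

```python
import numpy as np
from scipy import signal

# Illustrative sketch: compute a spectrogram (a Cohen-class time-frequency
# energy distribution) of a simple voice-like test tone. All parameters
# here are arbitrary choices for demonstration.
fs = 1000.0                        # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)    # 1 second of samples
x = np.sin(2 * np.pi * 100 * t)    # a pure 100 Hz tone standing in for voiced speech

# Squared-magnitude STFT: frequency bins f, time frames tt, power Sxx.
f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=256)

# The dominant frequency bin should sit near 100 Hz.
peak_hz = f[np.argmax(Sxx.mean(axis=1))]
```

The window implied by `nperseg` is exactly the smoothing trade-off the abstract describes: a longer window sharpens frequency resolution at the cost of time resolution.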
Robust Face Recognition Via Gabor Feature and Sparse Representation
Hao Yu-Juan
2016-01-01
Sparse representation based on compressed sensing theory has been widely used in the field of face recognition and has achieved good recognition results, but face feature extraction based on sparse representation is often too simple, and the resulting coefficients are not truly sparse. In this paper, we improve a classification algorithm based on the fusion of sparse representation and Gabor features; the improved handling of the Gabor features overcomes the problem of large vector dimension, reduces computation and storage cost, and enhances the robustness of the algorithm to environmental changes. Since the classification efficiency of sparse representation is largely determined by collaborative representation, we simplify the L1-norm sparsity constraint to a least-squares constraint, which makes the coefficients computable in closed form and reduces the complexity of the algorithm. Experimental results show that the proposed method is robust to illumination, facial expression, and pose variations in face recognition, and that the recognition rate of the algorithm is improved.
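The least-squares relaxation described above is the core of collaborative representation classification (CRC): code the test sample over the whole training dictionary with a ridge penalty, then assign the class whose columns give the smallest reconstruction residual. A minimal sketch with toy data follows; the Gabor feature extraction step and the regularization value are omitted or assumed:

```python
import numpy as np

def crc_classify(A, labels, y, lam=0.01):
    """Collaborative representation classification (least-squares variant).
    A: dictionary whose columns are training samples; labels: class of each
    column; y: test sample. The ridge solution replaces the L1 constraint."""
    n = A.shape[1]
    # Closed-form ridge coding: x = (A^T A + lam*I)^{-1} A^T y
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        # Residual using only this class's training columns and coefficients
        residuals[c] = np.linalg.norm(y - A[:, mask] @ x[mask])
    return min(residuals, key=residuals.get)

# Toy dictionary: two classes, two training samples (columns) each.
A = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9],
              [1.0, 1.1, 0.0, 0.0]])
labels = np.array([0, 0, 1, 1])
y = np.array([0.95, 0.05, 1.05])  # resembles the class-0 columns
pred = crc_classify(A, labels, y)  # → 0
```

In practice the columns of `A` would be (normalized) Gabor feature vectors rather than raw toy values.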
The protection of warranties and representations
Spence, C.D.; Thusoo, N.
1999-01-01
Most acquisition contracts within the oil and gas industry contain representations and warranties. The legal distinction between them was explained as follows: a representation is a statement of fact made by the representor before making the contract, whereas a warranty is a statement of fact which forms part of the terms of the contract. The paper outlines the nature of a representation or warranty and explains why certain warranties are not given. The protection offered by representations and warranties in breach-of-contract cases is also explained. Suggestions are offered for increasing the protection afforded by representations and warranties. 22 refs
An accurate method for the determination of unlike potential parameters from thermal diffusion data
El-Geubeily, S.
1997-01-01
A new method is introduced by means of which the unlike intermolecular potential parameters can be determined from experimental measurements of the thermal diffusion factor as a function of temperature. The method proved to be easy, accurate, and applicable to two-, three-, and four-parameter potential functions whose collision integrals are available. The potential parameters computed by this method are found to provide a faithful representation of the thermal diffusion data under consideration. 3 figs., 4 tabs
Accurate calculation of the geometric measure of entanglement for multipartite quantum states
Teng, Peiyuan
2017-07-01
This article proposes an efficient way of calculating the geometric measure of entanglement using tensor decomposition methods. The connection between these two concepts is explored using the tensor representation of the wavefunction. Numerical examples are benchmarked and compared. Furthermore, we search for highly entangled qubit states to show the applicability of this method.
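For the special case of a bipartite (two-qubit) pure state, the best product-state (rank-one) approximation underlying the geometric measure can be computed exactly with an SVD of the coefficient matrix, a special case of the tensor decompositions the article generalizes to multipartite states. A minimal sketch of that special case:

```python
import numpy as np

def geometric_measure_2qubit(psi):
    """Geometric measure of entanglement G(psi) = 1 - max |<phi|psi>|^2,
    maximized over product states phi. For two qubits, the optimal product
    state is the leading rank-one (Schmidt) term, so the maximal overlap is
    the largest singular value squared of the 2x2 coefficient matrix."""
    M = np.asarray(psi, dtype=complex).reshape(2, 2)
    s = np.linalg.svd(M, compute_uv=False)
    return max(0.0, 1.0 - s[0] ** 2)  # clamp tiny negative float error

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # maximally entangled Bell state
product = np.array([1, 0, 0, 0])             # |00>, unentangled
print(round(geometric_measure_2qubit(bell), 3))     # → 0.5
print(round(geometric_measure_2qubit(product), 3))  # → 0.0
```

For three or more parties no such closed form exists, which is where the iterative rank-one tensor approximation methods of the article come in.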
Thuburn, J.; Woollings, T.J.
2005-01-01
Accurate representation of different kinds of wave motion is essential for numerical models of the atmosphere, but is sensitive to details of the discretization. In this paper, numerical dispersion relations are computed for different vertical discretizations of the compressible Euler equations and compared with the analytical dispersion relation. A height coordinate, an isentropic coordinate, and a terrain-following mass-based coordinate are considered, and, for each of these, different choices of prognostic variables and grid staggerings are considered. The discretizations are categorized according to whether their dispersion relations are optimal, are near optimal, have a single zero-frequency computational mode, or are problematic in other ways. Some general understanding of the factors that affect the numerical dispersion properties is obtained: heuristic arguments concerning the normal mode structures, and the amount of averaging and coarse differencing in the finite difference scheme, are shown to be useful guides to which configurations will be optimal; the number of degrees of freedom in the discretization is shown to be an accurate guide to the existence of computational modes; there is only minor sensitivity to whether the equations for thermodynamic variables are discretized in advective form or flux form; and an accurate representation of acoustic modes is found to be a prerequisite for accurate representation of inertia-gravity modes, which, in turn, is found to be a prerequisite for accurate representation of Rossby modes.
Equipment upgrade - Accurate positioning of ion chambers
Doane, Harry J.; Nelson, George W.
1990-01-01
Five adjustable clamps were made to firmly support and accurately position the ion Chambers, that provide signals to the power channels for the University of Arizona TRIGA reactor. The design requirements, fabrication procedure and installation are described
New fission-neutron-spectrum representation for ENDF
Madland, D.G.
1982-04-01
A new representation of the prompt fission neutron spectrum is proposed for use in the Evaluated Nuclear Data File (ENDF). The proposal is made because a new theory exists by which the spectrum can be accurately predicted as a function of the fissioning nucleus and its excitation energy. Thus, prompt fission neutron spectra can be calculated for cases where no measurements exist or where measurements are not possible. The mathematical formalism necessary for application of the new theory within ENDF is presented and discussed for neutron-induced fission and spontaneous fission. In the case of neutron-induced fission, expressions are given for the first-chance, second-chance, third-chance, and fourth-chance fission components of the spectrum together with that for the total spectrum. An ENDF format is proposed for the new fission spectrum representation, and an example of the use of the format is given
48 CFR 2009.570-4 - Representation.
2010-10-01
... type required by the organizational conflicts of interest representation provisions has previously been... ACQUISITION PLANNING CONTRACTOR QUALIFICATIONS Organizational Conflicts of Interest 2009.570-4 Representation... whether situations or relationships exist which may constitute organizational conflicts of interest with...
Goker Erdogan
2015-11-01
People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models, that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model's percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects' ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception.
Reliability in the Location of Hindlimb Motor Representations in Fischer-344 Rats
Frost, Shawn B.; Iliakova, Maria; Dunham, Caleb; Barbay, Scott; Arnold, Paul; Nudo, Randolph J.
2014-01-01
Object: The purpose of the present study was to determine the feasibility of using a common laboratory rat strain for locating cortical motor representations of the hindlimb reliably. Methods: Intracortical microstimulation (ICMS) techniques were used to derive detailed maps of the hindlimb motor representations in six adult Fischer-344 rats. Results: The organization of the hindlimb movement representation, while variable across individuals in topographic detail, displayed several commonalities. The hindlimb representation was positioned posterior to the forelimb motor representation and posterolateral to the motor trunk representation. The areal extent of the hindlimb representation across the cortical surface averaged 2.00 ± 0.50 mm2. Superimposing individual maps revealed an overlapping area measuring 0.35 mm2, indicating that the location of the hindlimb representation can be predicted reliably based on stereotactic coordinates. Across the sample of rats, the hindlimb representation was found 1.25-3.75 mm posterior to Bregma, with an average center location ~2.6 mm posterior to Bregma. Likewise, the hindlimb representation was found 1-3.25 mm lateral to the midline, with an average center location ~2 mm lateral to midline. Conclusions: The location of the cortical hindlimb motor representation in Fischer-344 rats can be reliably located based on its stereotactic position posterior to Bregma and lateral to the longitudinal skull suture at midline. The ability to accurately predict the cortical localization of functional hindlimb territories in a rodent model is important, as such animal models are being used increasingly in the development of brain-computer interfaces for restoration of function after spinal cord injury. PMID:23725395
Frost, Shawn B; Iliakova, Maria; Dunham, Caleb; Barbay, Scott; Arnold, Paul; Nudo, Randolph J
2013-08-01
The purpose of the present study was to determine the feasibility of using a common laboratory rat strain for reliably locating cortical motor representations of the hindlimb. Intracortical microstimulation techniques were used to derive detailed maps of the hindlimb motor representations in 6 adult Fischer-344 rats. The organization of the hindlimb movement representation, while variable across individual rats in topographic detail, displayed several commonalities. The hindlimb representation was positioned posterior to the forelimb motor representation and posterolateral to the motor trunk representation. The areal extent of the hindlimb representation across the cortical surface averaged 2.00 ± 0.50 mm(2). Superimposing individual maps revealed an overlapping area measuring 0.35 mm(2), indicating that the location of the hindlimb representation can be predicted reliably based on stereotactic coordinates. Across the sample of rats, the hindlimb representation was found 1.25-3.75 mm posterior to the bregma, with an average center location approximately 2.6 mm posterior to the bregma. Likewise, the hindlimb representation was found 1-3.25 mm lateral to the midline, with an average center location approximately 2 mm lateral to the midline. The location of the cortical hindlimb motor representation in Fischer-344 rats can be reliably located based on its stereotactic position posterior to the bregma and lateral to the longitudinal skull suture at midline. The ability to accurately predict the cortical localization of functional hindlimb territories in a rodent model is important, as such animal models are being increasingly used in the development of brain-computer interfaces for restoration of function after spinal cord injury.
Democracy and Representation in Paraguay
Liliana Rocío Duarte-Recalde
2017-05-01
This article reviews electoral accountability as a constitutive mechanism of Paraguayan democracy since 1989, analyzing the factors that, as a result of the electoral process, limit the representativeness of the Paraguayan government. We provide an analytic contrast between the democratic principles that guide Paraguayan electoral institutions and the way their designs are enforced, identifying the gap between formal and informal rules as a determinant of political representation. We also describe the barriers that prevent effective access of the population to political participation and competition, the advantages held by traditional political parties and interest groups, and their implications for democracy. Finally, we review the degree to which elected officials represent historically excluded social groups, emphasizing how women, indigenous people, and peasant communities have limited power to exercise political influence due to structural and institutional constraints on participation.
Time representations in social science.
Schulz, Yvan
2012-12-01
Time has long been a major topic of study in social science, as in other sciences or in philosophy. Social scientists have tended to focus on collective representations of time, and on the ways in which these representations shape our everyday experiences. This contribution addresses work from such disciplines as anthropology, sociology and history. It focuses on several of the main theories that have preoccupied specialists in social science, such as the alleged "acceleration" of life and overgrowth of the present in contemporary Western societies, or the distinction between so-called linear and circular conceptions of time. The presentation of these theories is accompanied by some of the critiques they have provoked, in order to enable the reader to form her or his own opinion of them.
Quantum control and representation theory
Ibort, A; Perez-Pardo, J M
2009-01-01
A new notion of controllability for quantum systems that takes advantage of the linear superposition of quantum states is introduced. We call such a notion von Neumann controllability, and it is shown that it is strictly weaker than the usual notion of pure state and operator controllability. We provide a simple and effective characterization of it by using tools from the theory of unitary representations of Lie groups. In this sense, we are able to approach the problem of control of quantum states from a new perspective, that of the theory of unitary representations of Lie groups. A few examples of physical interest and the particular instances of compact and nilpotent dynamical Lie groups are discussed.
Berry phase in Heisenberg representation
Andreev, V. A.; Klimov, Andrei B.; Lerner, Peter B.
1994-01-01
We define the Berry phase for the Heisenberg operators. This definition is motivated by the calculation of the phase shifts by different techniques. These techniques are: the solution of the Heisenberg equations of motion, the solution of the Schrodinger equation in coherent-state representation, and the direct computation of the evolution operator. Our definition of the Berry phase in the Heisenberg representation is consistent with the underlying supersymmetry of the model in the following sense. The structural blocks of the Hamiltonians of supersymmetrical quantum mechanics ('superpairs') are connected by transformations which conserve the similarity in structure of the energy levels of superpairs. These transformations include transformation of phase of the creation-annihilation operators, which are generated by adiabatic cyclic evolution of the parameters of the system.
Representation theory of finite monoids
Steinberg, Benjamin
2016-01-01
This first text on the subject provides a comprehensive introduction to the representation theory of finite monoids. Carefully worked examples and exercises provide the bells and whistles for graduate accessibility, bringing a broad range of advanced readers to the forefront of research in the area. Highlights of the text include applications to probability theory, symbolic dynamics, and automata theory. Comfort with module theory, a familiarity with ordinary group representation theory, and the basics of Wedderburn theory are prerequisites for advanced graduate level study. Researchers in algebra, algebraic combinatorics, automata theory, and probability theory will find this text enriching with its thorough presentation of applications of the theory to these fields. Prior knowledge of semigroup theory is not expected for the diverse readership that may benefit from this exposition. The approach taken in this book is highly module-theoretic and follows the modern flavor of the theory of finite dimensional ...
Temporal Representation in Semantic Graphs
Levandoski, J J; Abdulla, G M
2007-08-07
A wide range of knowledge discovery and analysis applications, ranging from business to biological, make use of semantic graphs when modeling relationships and concepts. Most of the semantic graphs used in these applications are assumed to be static pieces of information, meaning that the temporal evolution of concepts and relationships is not taken into account. Guided by the need for more advanced semantic graph queries involving temporal concepts, this paper surveys the existing work involving temporal representations in semantic graphs.
Experience representation in information systems
Kaczmarek, Jan
2014-01-01
This thesis looks into the ways the subjective dimension of experience could be represented in artificial, non-biological systems, in particular information systems. The pivotal assumption is that, contrary to mainstream thinking in information science, experience is not equal to knowledge; rather, experience is a broader term that encapsulates both knowledge and the subjective, affective component of experience, which so far has not been properly embraced by knowledge representation theories. This ...
Computing Visible-Surface Representations,
1985-03-01
Terzopoulos. Massachusetts Institute of Technology, Artificial Intelligence Laboratory; support for the laboratory's Artificial Intelligence research is provided in part by the Advanced Research Projects Agency (contract N00014-75-C-0643). ... dynamically maintaining visible-surface representations, whether the intention is to model human vision or to design competent artificial vision systems.
Generalized oscillator representations for Calogero Hamiltonians
Tyutin, I V; Voronov, B L
2013-01-01
This paper is a natural continuation of the previous paper (Gitman et al 2011 J. Phys. A: Math. Theor. 44 425204), where oscillator representations for nonnegative Calogero Hamiltonians with coupling constant α ⩾ − 1/4 were constructed. In this paper, we present generalized oscillator representations for all Calogero Hamiltonians with α ⩾ − 1/4. These representations are generally highly nonunique, but there exists an optimum representation for each Hamiltonian. (comment)
Learning semantic histopathological representation for basal cell carcinoma classification
Gutiérrez, Ricardo; Rueda, Andrea; Romero, Eduardo
2013-03-01
Diagnosis of a histopathology glass slide is a complex process that involves accurate recognition of several structures, their function in the tissue and their relation with other structures. The way in which the pathologist represents the image content and the relations between those objects yields better and more accurate diagnoses. Therefore, an appropriate semantic representation of the image content will be useful in several analysis tasks such as cancer classification, tissue retrieval and histopathological image analysis, among others. Nevertheless, automatically recognizing those structures and extracting their inner semantic meaning are still very challenging tasks. In this paper we introduce a new semantic representation that allows histopathological concepts suitable for classification to be described. The approach herein identifies local concepts using a dictionary learning approach, i.e., the algorithm learns the most representative atoms from a set of randomly sampled patches, and then models the spatial relations among them by counting the co-occurrences between atoms while penalizing the spatial distance. The proposed approach was compared with a bag-of-features representation in a tissue classification task. For this purpose, 240 histological microscopical fields of view, 24 per tissue class, were collected. Those images fed one Support Vector Machine classifier per class, using 120 images as the training set and the remaining ones for testing, maintaining the same proportion of each concept in the training and test sets. The classification results, averaged over 100 random partitions into training and test sets, show that our approach is on average almost 6% more sensitive than the bag-of-features representation.
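The distance-penalized co-occurrence step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `cooccurrence_descriptor`, the Gaussian form of the penalty, and its width `sigma` are assumptions.

```python
import numpy as np

def cooccurrence_descriptor(labels, positions, n_atoms, sigma=1.0):
    """Co-occurrence of dictionary-atom labels, penalized by spatial distance.

    labels    : (N,) atom index assigned to each sampled patch
    positions : (N, 2) patch centers
    Returns an (n_atoms, n_atoms) matrix in which nearby pairs of
    patches contribute more than distant ones.
    """
    C = np.zeros((n_atoms, n_atoms))
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            d = np.linalg.norm(positions[i] - positions[j])
            w = np.exp(-d**2 / (2 * sigma**2))  # Gaussian distance penalty
            C[labels[i], labels[j]] += w
            C[labels[j], labels[i]] += w
    return C / max(C.sum(), 1e-12)  # normalize to a distribution

# toy example: three patches, two atoms; the first two patches are close
labels = np.array([0, 1, 0])
pos = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
D = cooccurrence_descriptor(labels, pos, n_atoms=2)
```

The nearby 0-1 pair dominates the descriptor, while the distant same-atom pair contributes almost nothing.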
Neural Representations of Physics Concepts.
Mason, Robert A; Just, Marcel Adam
2016-06-01
We used functional MRI (fMRI) to assess neural representations of physics concepts (momentum, energy, etc.) in juniors, seniors, and graduate students majoring in physics or engineering. Our goal was to identify the underlying neural dimensions of these representations. Using factor analysis to reduce the number of dimensions of activation, we obtained four physics-related factors that were mapped to sets of voxels. The four factors were interpretable as causal motion visualization, periodicity, algebraic form, and energy flow. The individual concepts were identifiable from their fMRI signatures with a mean rank accuracy of .75 using a machine-learning (multivoxel) classifier. Furthermore, there was commonality in participants' neural representation of physics; a classifier trained on data from all but one participant identified the concepts in the left-out participant (mean accuracy = .71 across all nine participant samples). The findings indicate that abstract scientific concepts acquired in an educational setting evoke activation patterns that are identifiable and common, indicating that science education builds abstract knowledge using inherent, repurposed brain systems.
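The rank-accuracy figure reported above can be computed as below. The normalization (1.0 perfect, 0.5 chance) is a common convention that we assume here; `mean_rank_accuracy` is a hypothetical name, and the paper's exact formula may differ.

```python
import numpy as np

def mean_rank_accuracy(scores, true_idx):
    """Rank accuracy for multivoxel-style classification.

    scores   : (n_items, n_classes) similarity of each item to each class
    true_idx : (n_items,) index of the correct class
    For each item the correct class gets rank r (1 = best); its rank
    accuracy is 1 - (r - 1)/(n_classes - 1), so 1.0 is perfect and
    0.5 is the chance level.
    """
    n_items, n_classes = scores.shape
    accs = []
    for s, t in zip(scores, true_idx):
        r = 1 + np.sum(s > s[t])  # rank of the true class among all classes
        accs.append(1 - (r - 1) / (n_classes - 1))
    return float(np.mean(accs))

# toy: 2 items, 3 classes; item 1 ranks its true class 1st, item 2 ranks it last
scores = np.array([[0.9, 0.2, 0.1],
                   [0.4, 0.8, 0.3]])
acc = mean_rank_accuracy(scores, true_idx=np.array([0, 2]))
```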
Accurate genotyping across variant classes and lengths using variant graphs
Sibbesen, Jonas Andreas; Maretty, Lasse; Jensen, Jacob Malte
2018-01-01
... collecting a set of candidate variants across discovery methods, individuals and databases, and then realigning the reads to the variants and reference simultaneously. However, this realignment problem has proved computationally difficult. Here, we present a new method (BayesTyper) that uses exact alignment of read k-mers to a graph representation of the reference and variants to efficiently perform unbiased, probabilistic genotyping across the variation spectrum. We demonstrate that BayesTyper generally provides superior variant sensitivity and genotyping accuracy relative to existing methods when used ... to integrate variants across discovery approaches and individuals. Finally, we demonstrate that including a 'variation-prior' database containing already known variants significantly improves sensitivity.
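The exact k-mer matching idea can be illustrated on a two-allele toy case. The function `allele_support` and the tiny sequences are invented for illustration; the real method scores paths in a variant graph probabilistically rather than comparing two flat alleles.

```python
from collections import Counter

def kmers(seq, k):
    """All overlapping k-mers of a sequence, with multiplicities."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def allele_support(read, ref_allele, alt_allele, k=4):
    """Count how many read k-mers occur in each allele's k-mer set --
    the flavor of exact k-mer matching that graph-based genotypers
    build on (two-allele toy version)."""
    rk = kmers(read, k)
    ref_k, alt_k = kmers(ref_allele, k), kmers(alt_allele, k)
    ref_hits = sum(c for km, c in rk.items() if km in ref_k)
    alt_hits = sum(c for km, c in rk.items() if km in alt_k)
    return ref_hits, alt_hits

# toy SNP: the read carries the ALT base
ref, alt = "ACGTACGTAC", "ACGTTCGTAC"   # differ at position 4 (A -> T)
read = "CGTTCGT"
ref_hits, alt_hits = allele_support(read, ref, alt)
```

Every k-mer of the read spans the variant site, so the read supports only the ALT allele.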
Impossibility Theorem in Proportional Representation Problem
Karpov, Alexander
2010-01-01
The study examines the general axiomatics of Balinski and Young and analyzes existing proportional representation methods using this approach. The second part of the paper provides a new axiomatics based on rational-choice models. The new system of axioms is applied to study known proportional representation systems. It is shown that there is no proportional representation method satisfying a minimal set of the axioms (monotonicity and neutrality).
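Axioms such as monotonicity can be made concrete on a standard apportionment rule. As an illustration (not the paper's construction), here is the classical D'Hondt divisor method together with a house-monotonicity check; the vote counts are invented.

```python
def dhondt(votes, seats):
    """Allocate seats by the D'Hondt divisor method: repeatedly give the
    next seat to the party with the largest quotient votes/(seats_won + 1)."""
    alloc = [0] * len(votes)
    for _ in range(seats):
        quotients = [v / (a + 1) for v, a in zip(votes, alloc)]
        alloc[quotients.index(max(quotients))] += 1
    return alloc

# house monotonicity: enlarging the house should never cost a party a seat
votes = [340_000, 280_000, 160_000, 60_000]
a10 = dhondt(votes, 10)
a11 = dhondt(votes, 11)
```

Divisor methods like D'Hondt are house monotone by construction; the impossibility result cited above concerns combinations of axioms that no method can satisfy simultaneously.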
Facilitating Mathematical Practices through Visual Representations
Murata, Aki; Stewart, Chana
2017-01-01
Effective use of mathematical representation is key to supporting student learning. In "Principles to Actions: Ensuring Mathematical Success for All" (NCTM 2014), "use and connect mathematical representations" is one of the effective Mathematics Teaching Practices. By using different representations, students examine concepts…
Computability and Representations of the Zero Set
P.J. Collins (Pieter)
2008-01-01
In this note we give a new representation for closed sets under which the robust zero set of a function is computable. We call this representation the component cover representation. The computation of the zero set is based on topological index theory, the most powerful tool for finding
Lifts of matroid representations over partial fields
Pendavingh, R.A.; Zwam, van S.H.M.
2010-01-01
There exist several theorems which state that when a matroid is representable over distinct fields F1,...,Fk , it is also representable over other fields. We prove a theorem, the Lift Theorem, that implies many of these results. First, parts of Whittle's characterization of representations of
Equivalence of rational representations of behaviors
Gottimukkala, Sasanka; Fiaz, Shaik; Trentelman, H.L.
This article deals with the equivalence of representations of behaviors of linear differential systems. In general, the behavior of a given linear differential system has many different representations. In this paper we restrict ourselves to kernel and image representations. Two kernel
Accurate phylogenetic tree reconstruction from quartets: a heuristic approach.
Reaz, Rezwana; Bayzid, Md Shamsuzzoha; Rahman, M Sohel
2014-01-01
Supertree methods construct trees on a set of taxa (species) combining many smaller trees on the overlapping subsets of the entire set of taxa. A 'quartet' is an unrooted tree over 4 taxa, hence the quartet-based supertree methods combine many 4-taxon unrooted trees into a single and coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have been receiving considerable attentions in the recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses), without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets.
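The notion of a quartet can be made concrete: every bipartition (split) of the taxon set resolves a set of quartet topologies, and these are the raw material that quartet-based supertree methods combine. A minimal sketch (illustrative, not the paper's heuristic; the function name is assumed):

```python
from itertools import combinations

def quartets_from_split(side_a, side_b):
    """Quartet topologies induced by one bipartition of the taxon set.

    A split A|B resolves every quartet {a1, a2 | b1, b2} with two taxa
    drawn from each side -- a standard fact used by quartet methods.
    Each quartet is returned as a pair of sorted taxon pairs.
    """
    out = set()
    for a1, a2 in combinations(sorted(side_a), 2):
        for b1, b2 in combinations(sorted(side_b), 2):
            out.add(((a1, a2), (b1, b2)))
    return out

# the split {A,B,C} | {D,E} of a 5-taxon tree resolves 3 quartets
q = quartets_from_split({"A", "B", "C"}, {"D", "E"})
```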
HYBRID REPRESENTATION OF DIGITAL MOCKUP FOR HERITAGE BUILDINGS MANAGEMENT
G. Nicolas
2013-07-01
This article deals with the implementation of a tool allowing the portability of the digital mock-up for architectural projects on the building renovation site, and with the use of representation layers providing functions adapted to the different workers expected to work on this site. Our test case is applied to renovation works on old windows in an ancient abbey where it is necessary to improve the thermal efficiency.
Alternative approach to nuclear data representation
Pruet, J.; Brown, D.; Beck, B.; McNabb, D.P.
2006-01-01
This paper considers an approach for representing nuclear data that is qualitatively different from the approach currently adopted by the nuclear science community. Specifically, we examine a representation in which complicated data is described through collections of distinct and self-contained simple data structures. This structure-based representation is compared with the ENDF and ENDL formats, which can be roughly characterized as dictionary-based representations. A pilot data representation for replacing the format currently used at LLNL is presented. Examples are given, as is a discussion of the promises and shortcomings associated with moving from traditional dictionary-based formats to a structure-rich or class-like representation.
On the phase space representations. 1
Polubarinov, I.V.
1978-01-01
The Dirac representation theory usually deals with the amplitude formalism of quantum theory. An introduction is given to a theory of some other representations, which are applicable in the density-matrix formalism and can naturally be called phase space representations (PSRs). They use terms of phase space variables (x and p simultaneously) and give a description close to the classical phase-space description. Definitions and algebraic properties are given in quantum mechanics for such PSRs as the Wigner representation, the coherent state representation and others. Completeness relations of a matrix type are used as a starting point. The case of quantum field theory is also outlined.
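For reference, the Wigner representation named here assigns to a pure state ψ the standard phase-space quasi-distribution (textbook definition, not taken from this abstract):

```latex
W(x, p) \;=\; \frac{1}{\pi\hbar} \int_{-\infty}^{\infty}
  \psi^{*}(x + y)\, \psi(x - y)\, e^{2ipy/\hbar} \, \mathrm{d}y
```

It is real-valued and reproduces the position and momentum marginals on integration over p and x respectively, though it may take negative values, which is why it is called a quasi-distribution.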
Quantum-Accurate Molecular Dynamics Potential for Tungsten
Wood, Mitchell; Thompson, Aidan P.
2017-03-01
The purpose of this short contribution is to report on the development of a Spectral Neighbor Analysis Potential (SNAP) for tungsten. We have focused on the characterization of elastic and defect properties of the pure material in order to support molecular dynamics simulations of plasma-facing materials in fusion reactors. A parallel genetic algorithm approach was used to efficiently search for fitting parameters optimized against a large number of objective functions. In addition, we have shown that this many-body tungsten potential can be used in conjunction with a simple helium pair potential [1] to produce accurate defect formation energies for the W-He binary system.
Accurate characterization of OPVs: Device masking and different solar simulators
Gevorgyan, Suren; Carlé, Jon Eggert; Søndergaard, Roar R.
2013-01-01
One of the prime objectives of organic solar cell research has been to improve the power conversion efficiency. Unfortunately, the accurate determination of this property is not straightforward and has led to the recommendation that record devices be tested and certified at a few accredited ... laboratories following rigorous ASTM and IEC standards. This work tries to address some of the issues confronting the standard laboratory in this regard. Solar simulator lamps are investigated for their light-field homogeneity and direct versus diffuse components, as well as the correct device area ...
Efficient convolutional sparse coding
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
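The core trick, diagonalizing the convolutional linear system with the FFT, can be shown for a single filter. The names and the Tikhonov term `rho` are illustrative assumptions; the full method iterates a solve of this kind inside ADMM over M filters with a sparsity penalty.

```python
import numpy as np

def freq_domain_solve(d, s, rho=1e-3):
    """Solve (D^T D + rho I) x = D^T s for one circular-convolution filter d,
    elementwise in the Fourier domain, where D is convolution by d. This is
    the diagonalization that makes the linear step O(N log N) instead of
    requiring a dense solve (one-filter sketch)."""
    Df = np.fft.fft(d)
    Sf = np.fft.fft(s)
    Xf = np.conj(Df) * Sf / (np.abs(Df)**2 + rho)  # per-frequency division
    return np.real(np.fft.ifft(Xf))

# toy: a known sparse coefficient map convolved with a filter, then recovered
n = 64
x = np.zeros(n); x[10] = 1.0; x[40] = -0.5          # sparse ground truth
d = np.zeros(n); d[0], d[1], d[2] = 1.0, 0.6, 0.2   # short FIR filter
s = np.real(np.fft.ifft(np.fft.fft(d) * np.fft.fft(x)))  # circular d * x
x_hat = freq_domain_solve(d, s, rho=1e-6)
```

Because the filter's spectrum never vanishes, the regularized per-frequency division recovers the coefficient map almost exactly.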
Yilang Shen
2017-08-01
In geographic information systems, the reliability of querying, analysing, or reasoning results depends on the data quality. One central criterion of data quality is consistency, and identifying inconsistencies is crucial for maintaining the integrity of spatial data from multiple sources or at multiple resolutions. In traditional methods of consistency assessment, vector data are used as the primary experimental data. In this manuscript, we describe the use of a new type of raster data, tile maps, to assess the consistency of information from multiscale representations of the water bodies that make up drainage systems. We describe a hierarchical methodology to determine the spatial consistency of tile-map datasets that display water areas in a raster format. Three characteristic indices, the degree of global feature consistency, the degree of local feature consistency, and the degree of overlap, are proposed to measure the consistency of multiscale representations of water areas. The perceptual hash algorithm and the scale-invariant feature transform (SIFT) descriptor are applied to extract and measure the global and local features of water areas. By performing combined calculations using these three characteristic indices, the degrees of consistency of multiscale representations of water areas can be divided into five grades: exactly consistent, highly consistent, moderately consistent, less consistent, and inconsistent. For evaluation purposes, the proposed method is applied to several test areas from the Tiandi map of China. In addition, we identify key technologies that are related to the process of extracting water areas from a tile map. The accuracy of the consistency assessment method is evaluated, and our experimental results confirm that the proposed methodology is efficient and accurate.
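A minimal perceptual hash of the average-hash (aHash) family, plus a Hamming similarity, gives the flavor of the global-feature comparison. The abstract does not specify which perceptual-hash variant is used, so this is an assumed stand-in rather than the paper's algorithm.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Average-hash (aHash) perceptual hash: downsample the image to
    hash_size x hash_size block means, threshold at the global mean,
    and return the resulting bit array."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = img[:bh * hash_size, :bw * hash_size] \
        .reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).astype(np.uint8).ravel()

def hamming_similarity(h1, h2):
    """Fraction of matching bits: 1.0 means identical hashes."""
    return 1.0 - float(np.mean(h1 != h2))

# identical images hash identically
rng = np.random.default_rng(0)
img = rng.random((64, 64))
sim_same = hamming_similarity(average_hash(img), average_hash(img))
```

Hashing tiles at two map scales and thresholding the bit similarity is one simple way to grade global consistency, in the spirit of the five-grade scheme described above.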
Unitary Representations of Gauge Groups
Huerfano, Ruth Stella
I generalize to the case of gauge groups over non-trivial principal bundles the representations that I. M. Gelfand, M. I. Graev and A. M. Versik constructed for current groups. The gauge group of the principal G-bundle P over M (G a Lie group with a Euclidean structure, M a compact, connected and oriented manifold), viewed as the smooth sections of the associated group bundle, is presented and studied in chapter I. Chapter II describes the symmetric algebra associated to a Hilbert space, its Hilbert structure, a convenient exponential and a total set that later play a key role in the construction of the representation. Chapter III is concerned with the calculus needed to make the space of Lie algebra valued 1-forms a Gaussian L^2-space. This is accomplished by studying general projective systems of finitely measurable spaces and the corresponding systems of sigma-additive measures, all of these leading to the description of a promeasure, a concept modeled after Bourbaki and classical measure theory. In the case of a locally convex vector space E, the corresponding Fourier transform, family of characters and the existence of a promeasure for every quadratic form on E' are established, so the Gaussian L^2-space associated to a real Hilbert space is constructed. Chapter III finishes by exhibiting the explicit Hilbert space isomorphism between the Gaussian L^2-space associated to a real Hilbert space and the complexification of its symmetric algebra. In chapter IV, taking as the Hilbert space H the L^2-space of the Lie algebra valued 1-forms on P, the gauge group acts on the motion group of H, defining in a straightforward fashion the desired representation.
Wigner representation in scattering problems
Remler, E.A.
1975-01-01
The basic equations of quantum scattering are translated into the Wigner representation. This puts quantum mechanics in the form of a stochastic process in phase space. Instead of complex-valued wavefunctions and transition matrices, one now works with real-valued probability distributions and source functions, objects more responsive to physical intuition. Aside from writing out certain necessary basic expressions, the main purpose is to develop and stress the interpretive picture associated with this representation and to derive results used in applications published elsewhere. The quasiclassical guise assumed by the formalism lends itself particularly to approximations of complex multiparticle scattering problems; the foundation is laid for a systematic application of statistical approximations to such problems. The form of the integral equation for scattering, as well as its multiple scattering expansion in this representation, are derived. Since this formalism remains unchanged upon taking the classical limit, these results also constitute a general treatment of classical multiparticle collision theory. Quantum corrections to classical propagators are discussed briefly. The basic approximation used in the Monte Carlo method is derived in a fashion that allows for future refinement and includes bound state production. The close connection that must exist between inclusive production of a bound state and of its constituents is brought out in an especially graphic way by this formalism. In particular one can see how comparisons between such cross sections yield direct physical insight into relevant production mechanisms. A simple illustration of scattering by a bound two-body system is treated. Simple expressions for single- and double-scattering contributions to total and differential cross sections, as well as for all necessary shadow corrections thereto, are obtained and compared to previous results of Glauber and Goldberger.
Sanz López, Josep Maria
2017-10-09
A growing academic discussion has focused on how, in a globalized world, LGBTQ identities are shaped and influenced by different international actors, such as the media. This article analyzes how LGBTQ people from a rural region of a Western country (Spain) feel toward their representations on TV series from English-speaking countries. Employing a qualitative approach, this research aims to establish whether the academic conceptualizations used to analyze these identity-formation processes are accurate. In addition, it explores how dominant media representations are being adapted in a region that, although within the West, can serve as a context of a very different nature. The broad rejection of the TV-series representations found among participants suggests both an inaccuracy in the conceptualizations used by some scholars to understand LGBTQ flows and a problematic LGBTQ representation in media products that goes beyond regions and spaces.
A roadmap for improving the representation of photosynthesis in Earth system models.
Rogers, Alistair; Medlyn, Belinda E; Dukes, Jeffrey S; Bonan, Gordon; von Caemmerer, Susanne; Dietze, Michael C; Kattge, Jens; Leakey, Andrew D B; Mercado, Lina M; Niinemets, Ülo; Prentice, I Colin; Serbin, Shawn P; Sitch, Stephen; Way, Danielle A; Zaehle, Sönke
2017-01-01
Accurate representation of photosynthesis in terrestrial biosphere models (TBMs) is essential for robust projections of global change. However, current representations vary markedly between TBMs, contributing uncertainty to projections of global carbon fluxes. Here we compared the representation of photosynthesis in seven TBMs by examining leaf- and canopy-level responses of photosynthetic CO2 assimilation (A) to key environmental variables: light, temperature, CO2 concentration, vapor pressure deficit and soil water content. We identified research areas where limited process knowledge prevents inclusion of physiological phenomena in current TBMs and research areas where data are urgently needed for model parameterization or evaluation. We provide a roadmap for the new science needed to improve the representation of photosynthesis in the next generation of terrestrial biosphere and Earth system models.
Spectral representation in stochastic quantization
Nakazato, Hiromichi.
1988-10-01
A spectral representation of stationary 2-point functions is investigated based on the operator formalism in stochastic quantization. Assuming the existence of asymptotic non-interacting fields, we can diagonalize the total Hamiltonian in terms of asymptotic fields and show that the correlation length along the fictitious time is proportional to the physical mass expected in the usual field theory. A relation between renormalization factors in the operator formalism is derived as a byproduct and its validity is checked with the perturbative results calculated in this formalism. (orig.)
Multimedia ontology representation and applications
Chaudhury, Santanu; Ghosh, Hiranmay
2015-01-01
The result of more than 15 years of collective research, Multimedia Ontology: Representation and Applications provides a theoretical foundation for understanding the nature of media data and the principles involved in its interpretation. The book presents a unified approach to recent advances in multimedia and explains how a multimedia ontology can fill the semantic gap between concepts and the media world. It relays real-life examples of implementations in different domains to illustrate how this gap can be filled.The book contains information that helps with building semantic, content-based
Statistical representation of quantum states
Montina, A [Dipartimento di Fisica, Universita di Firenze, Via Sansone 1, 50019 Sesto Fiorentino (Italy)
2007-05-15
In the standard interpretation of quantum mechanics, the state is described by an abstract wave function in the representation space. Conversely, in a realistic interpretation, the quantum state is replaced by a probability distribution of physical quantities. Bohm mechanics is a consistent example of realistic theory, where the wave function and the particle positions are classically defined quantities. Recently, we proved that the probability distribution in a realistic theory cannot be a quadratic function of the quantum state, in contrast to the apparently obvious suggestion given by the Born rule for transition probabilities. Here, we provide a simplified version of this proof.
On the Benefits of Divergent Search for Evolved Representations
Lehman, Joel; Risi, Sebastian; Stanley, Kenneth O
2012-01-01
Evolved representations in evolutionary computation are often fragile, which can impede representation-dependent mechanisms such as self-adaptation. In contrast, evolved representations in nature are robust, evolvable, and creatively exploit available representational features. This paper provides ...
Feng, Y.; Sardei, F.; Kisslinger, J.
2005-01-01
The paper presents a new simple and accurate numerical field-line mapping technique providing a high-quality representation of field lines as required by a Monte Carlo modeling of plasma edge transport in the complex magnetic boundaries of three-dimensional (3D) toroidal fusion devices. Using a toroidal sequence of precomputed 3D finite flux-tube meshes, the method advances field lines through a simple bilinear, forward/backward symmetric interpolation at the interfaces between two adjacent flux tubes. It is a reversible field-line mapping (RFLM) algorithm ensuring a continuous and unique reconstruction of field lines at any point of the 3D boundary. The reversibility property has a strong impact on the efficiency of modeling the highly anisotropic plasma edge transport in general closed or open configurations of arbitrary ergodicity as it avoids artificial cross-field diffusion of the fast parallel transport. For stellarator-symmetric magnetic configurations, which are the standard case for stellarators, the reversibility additionally provides an average cancellation of the radial interpolation errors of field lines circulating around closed magnetic flux surfaces. The RFLM technique has been implemented in the 3D edge transport code EMC3-EIRENE and is used routinely for plasma transport modeling in the boundaries of several low-shear and high-shear stellarators as well as in the boundary of a tokamak with 3D magnetic edge perturbations
Sulc, Miroslav; Hernandez, Henar; Martinez, Todd J.; Vanicek, Jiri
2014-03-01
We recently showed that the Dephasing Representation (DR) provides an efficient tool for computing ultrafast electronic spectra and that cellularization yields further acceleration [M. Šulc and J. Vaníček, Mol. Phys. 110, 945 (2012)]. Here we focus on increasing its accuracy by first implementing an exact Gaussian basis method (GBM) combining the accuracy of quantum dynamics and efficiency of classical dynamics. The DR is then derived together with ten other methods for computing time-resolved spectra with intermediate accuracy and efficiency. These include the Gaussian DR (GDR), an exact generalization of the DR, in which trajectories are replaced by communicating frozen Gaussians evolving classically with an average Hamiltonian. The methods are tested numerically on time correlation functions and time-resolved stimulated emission spectra in the harmonic potential, pyrazine S0/S1 model, and quartic oscillator. Both the GBM and the GDR are shown to increase the accuracy of the DR. Surprisingly, in chaotic systems the GDR can outperform the presumably more accurate GBM, in which the two bases evolve separately. This research was supported by the Swiss NSF Grant No. 200021_124936/1 and NCCR Molecular Ultrafast Science & Technology (MUST), and by the EPFL.
More accurate picture of human body organs
Kolar, J.
1985-01-01
Computerized tomography and nuclear magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they make it possible to obtain a more accurate image of human body organs. The principles of both methods are described. Attention is mainly devoted to NMRT, which has been in clinical use for only three years. It does not burden the organism with ionizing radiation. (Ha)
Fast and accurate methods for phylogenomic analyses
Warnow Tandy
2011-10-01
Background: Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results: We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions: Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.
Accurate overlaying for mobile augmented reality
Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.
1999-01-01
Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking, and low-latency ...
Accurate activity recognition in a home setting
van Kasteren, T.; Noulas, A.; Englebienne, G.; Kröse, B.
2008-01-01
A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy-to-install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its ...
Highly accurate surface maps from profilometer measurements
Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.
2013-04-01
Many aspheres and free-form optical surfaces are measured using a single-line-trace profilometer, which is limiting because accurate 3D corrections are not possible with a single trace. We show a method to produce an accurate 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low-order form error only (the first 36 Zernike terms). The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley value. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and, to a small extent, by choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements: the part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.
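The low-order fit described above can be sketched as an ordinary least-squares problem over a handful of Zernike terms sampled along several diametral traces. A toy illustration (the function name, sampling, and four-term basis are assumptions, not the authors' processing algorithm):

```python
import numpy as np

def fit_zernike_low_order(r, theta, z):
    """Least-squares fit of four low-order Zernike terms (piston,
    x-tilt, y-tilt, defocus) to height samples z taken at polar
    positions (r, theta) along profilometer traces."""
    A = np.column_stack([
        np.ones_like(r),      # Z1: piston
        r * np.cos(theta),    # Z2: tilt in x
        r * np.sin(theta),    # Z3: tilt in y
        2 * r**2 - 1,         # Z4: defocus
    ])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs
```

With 6 traces at evenly spaced angles (as in the paper), the design matrix is well conditioned and the low-order coefficients are recovered reliably.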
A Subdivision-Based Representation for Vector Image Editing.
Liao, Zicheng; Hoppe, Hugues; Forsyth, David; Yu, Yizhou
2012-11-01
Vector graphics has been employed in a wide variety of applications due to its scalability and editability. Editability is a high priority for artists and designers who wish to produce vector-based graphical content with user interaction. In this paper, we introduce a new vector image representation based on piecewise smooth subdivision surfaces, which is a simple, unified and flexible framework that supports a variety of operations, including shape editing, color editing, image stylization, and vector image processing. These operations effectively create novel vector graphics by reusing and altering existing image vectorization results. Because image vectorization yields an abstraction of the original raster image, controlling the level of detail of this abstraction is highly desirable. To this end, we design a feature-oriented vector image pyramid that offers multiple levels of abstraction simultaneously. Our new vector image representation can be rasterized efficiently using GPU-accelerated subdivision. Experiments indicate that our vector image representation achieves high visual quality and better supports editing operations than existing representations.
Vision and the representation of the surroundings in spatial memory.
Tatler, Benjamin W; Land, Michael F
2011-02-27
One of the paradoxes of vision is that the world as it appears to us and the image on the retina at any moment are not much like each other. The visual world seems to be extensive and continuous across time. However, the manner in which we sample the visual environment is neither extensive nor continuous. How does the brain reconcile these differences? Here, we consider existing evidence from both static and dynamic viewing paradigms together with the logical requirements of any representational scheme that would be able to support active behaviour. While static scene viewing paradigms favour extensive, but perhaps abstracted, memory representations, dynamic settings suggest sparser and task-selective representation. We suggest that in dynamic settings where movement within extended environments is required to complete a task, the combination of visual input, egocentric and allocentric representations work together to allow efficient behaviour. The egocentric model serves as a coding scheme in which actions can be planned, but also offers a potential means of providing the perceptual stability that we experience.
Implicit geometric representations for optimal design of gas turbine blades
Mansour, T.; Ghaly, W.
2004-01-01
Shape optimization requires a proper geometric representation of the blade profile; the parameters of such a representation are usually taken as design variables in the optimization process. This implies that the model must possess three specific features: flexibility, efficiency, and accuracy. For the specific task of aerodynamic optimization of turbine blades, it is critical to have flexibility in both the global and local design spaces in order to obtain a successful optimization. This work is concerned with the development of two geometric representations of turbine blade profiles that are appropriate for aerodynamic optimization: the Modified Rapid Axial Turbine Design (MRATD) model, in which the blade is represented by five low-order curves that satisfy eleven designer parameters and which is suitable for a global search of the design space; and a NURBS parameterization of the blade profile, which can be used for local refinement. The two models are presented and assessed for flexibility and accuracy when representing several typical turbine blade profiles. The models will be further discussed in terms of curve smoothness and of blade shape representation with multiple NURBS curves versus a single curve, and the effect on the flow field, in particular the pressure distribution along the blade surfaces, will be elaborated. (author)
From scenarios to domain models: processes and representations
Haddock, Gail; Harbison, Karan
1994-03-01
The domain specific software architectures (DSSA) community has defined a philosophy for the development of complex systems. This philosophy improves productivity and efficiency by increasing the user's role in the definition of requirements, increasing the systems engineer's role in the reuse of components, and decreasing the software engineer's role to the development of new components and component modifications only. The scenario-based engineering process (SEP), the first instantiation of the DSSA philosophy, has been adopted by the next generation controller project. It is also the chosen methodology of the trauma care information management system project, and the surrogate semi-autonomous vehicle project. SEP uses scenarios from the user to create domain models and define the system's requirements. Domain knowledge is obtained from a variety of sources including experts, documents, and videos. This knowledge is analyzed using three techniques: scenario analysis, task analysis, and object-oriented analysis. Scenario analysis results in formal representations of selected scenarios. Task analysis of the scenario representations results in descriptions of tasks necessary for object-oriented analysis and also subtasks necessary for functional system analysis. Object-oriented analysis of task descriptions produces domain models and system requirements. This paper examines the representations that support the DSSA philosophy, including reference requirements, reference architectures, and domain models. The processes used to create and use the representations are explained through use of the scenario-based engineering process. Selected examples are taken from the next generation controller project.
Human action recognition using trajectory-based representation
Haiam A. Abdul-Azim
2015-07-01
Recognizing human actions in video sequences has been a challenging problem in the last few years due to its real-world applications. Many action representation approaches have been proposed to improve action recognition performance. Despite the popularity of local-features-based approaches together with the “Bag-of-Words” model for action representation, they fail to capture adequate spatial or temporal relationships. In an attempt to overcome this problem, trajectory-based local representation approaches have been proposed to capture the temporal information. This paper introduces an improvement of trajectory-based human action recognition approaches to capture discriminative temporal relationships. In our approach, we extract trajectories by tracking the detected spatio-temporal interest points, named “cuboid features”, by matching their SIFT descriptors over consecutive frames. We also propose a linking and exploring method to obtain efficient trajectories for motion representation in realistic conditions. The volumes around the trajectories’ points are then described to represent human actions based on the Bag-of-Words (BoW) model. Finally, a support vector machine is used to classify human actions. The effectiveness of the proposed approach was evaluated on three popular datasets (KTH, Weizmann, and UCF Sports). Experimental results showed that the proposed approach yields considerable performance improvement over state-of-the-art approaches.
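The representation step of such a pipeline, vector quantization of local descriptors into a Bag-of-Words histogram, can be sketched in a few lines. The codebook here is given directly, whereas in practice it would come from clustering trajectory descriptors, and the function name is an assumption:

```python
def bow_histogram(descriptors, codebook):
    """Quantize each local descriptor to its nearest codeword
    (squared Euclidean distance) and return a normalized
    Bag-of-Words histogram, the feature vector fed to the SVM."""
    hist = [0] * len(codebook)
    for d in descriptors:
        j = min(range(len(codebook)),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(d, codebook[k])))
        hist[j] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]
```

Each video then becomes a fixed-length histogram regardless of how many trajectories it contains, which is what makes SVM classification straightforward.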
Can numerical simulations accurately predict hydrodynamic instabilities in liquid films?
Denner, Fabian; Charogiannis, Alexandros; Pradas, Marc; van Wachem, Berend G. M.; Markides, Christos N.; Kalliadasis, Serafim
2014-11-01
Understanding the dynamics of hydrodynamic instabilities in liquid film flows is an active field of research in fluid dynamics and non-linear science in general. Numerical simulations offer a powerful tool to study hydrodynamic instabilities in film flows and can provide deep insights into the underlying physical phenomena. However, the direct comparison of numerical and experimental results is often hampered for several reasons. For instance, in numerical simulations the interface representation is problematic and the governing equations and boundary conditions may be oversimplified, whereas in experiments it is often difficult to extract accurate information on the fluid and its behavior, e.g., to determine the fluid properties when the liquid contains particles for PIV measurements. In this contribution we present the latest results of our ongoing, extensive study on hydrodynamic instabilities in liquid film flows, which includes direct numerical simulations, low-dimensional modelling, as well as experiments. The major focus is on wave regimes, wave height, and wave celerity as a function of the Reynolds number and forcing frequency of a falling liquid film. Specific attention is paid to the differences between numerical and experimental results and the reasons for these differences. The authors are grateful to the EPSRC for their financial support (Grant EP/K008595/1).
Representation theory a first course
Fulton, William
1991-01-01
The primary goal of these lectures is to introduce a beginner to the finite-dimensional representations of Lie groups and Lie algebras. Since this goal is shared by quite a few other books, we should explain in this Preface how our approach differs, although the potential reader can probably see this better by a quick browse through the book. Representation theory is simple to define: it is the study of the ways in which a given group may act on vector spaces. It is almost certainly unique, however, among such clearly delineated subjects, in the breadth of its interest to mathematicians. This is not surprising: group actions are ubiquitous in 20th century mathematics, and where the object on which a group acts is not a vector space, we have learned to replace it by one that is (e.g., a cohomology group, tangent space, etc.). As a consequence, many mathematicians other than specialists in the field (or even those who think they might want to be) come in contact with the subject in various ways. It is for ...
Quiver representations and quiver varieties
Jr, Alexander Kirillov
2016-01-01
This book is an introduction to the theory of quiver representations and quiver varieties, starting with basic definitions and ending with Nakajima's work on quiver varieties and the geometric realization of Kac-Moody Lie algebras. The first part of the book is devoted to the classical theory of quivers of finite type. Here the exposition is mostly self-contained and all important proofs are presented in detail. The second part contains the more recent topics of quiver theory that are related to quivers of infinite type: Coxeter functor, tame and wild quivers, McKay correspondence, and representations of Euclidean quivers. In the third part, topics related to geometric aspects of quiver theory are discussed, such as quiver varieties, Hilbert schemes, and the geometric realization of Kac-Moody algebras. Here some of the more technical proofs are omitted; instead only the statements and some ideas of the proofs are given, and the reader is referred to original papers for details. The exposition in the book requ...
Spacetime representation of topological phononics
Deymier, Pierre A.; Runge, Keith; Lucas, Pierre; Vasseur, Jérôme O.
2018-05-01
Non-conventional topology of elastic waves arises from breaking symmetry of phononic structures either intrinsically through internal resonances or extrinsically via application of external stimuli. We develop a spacetime representation based on twistor theory of an intrinsic topological elastic structure composed of a harmonic chain attached to a rigid substrate. Elastic waves in this structure obey the Klein–Gordon and Dirac equations and possesses spinorial character. We demonstrate the mapping between straight line trajectories of these elastic waves in spacetime and the twistor complex space. The twistor representation of these Dirac phonons is related to their topological and fermion-like properties. The second topological phononic structure is an extrinsic structure composed of a one-dimensional elastic medium subjected to a moving superlattice. We report an analogy between the elastic behavior of this time-dependent superlattice, the scalar quantum field theory and general relativity of two types of exotic particle excitations, namely temporal Dirac phonons and temporal ghost (tachyonic) phonons. These phonons live on separate sides of a two-dimensional frequency space and are delimited by ghost lines reminiscent of the conventional light cone. Both phonon types exhibit spinorial amplitudes that can be measured by mapping the particle behavior to the band structure of elastic waves.
Spatial Representation of Ordinal Information.
Zhang, Meng; Gao, Xuefei; Li, Baichen; Yu, Shuyuan; Gong, Tianwei; Jiang, Ting; Hu, Qingfen; Chen, Yinghe
2016-01-01
Right hand responds faster than left hand when shown larger numbers and vice-versa when shown smaller numbers (the SNARC effect). Accumulating evidence suggests that the SNARC effect may not be exclusive for numbers and can be extended to other ordinal sequences (e.g., months or letters in the alphabet) as well. In this study, we tested the SNARC effect with a non-numerically ordered sequence: the Chinese notations for the color spectrum (Red, Orange, Yellow, Green, Blue, Indigo, and Violet). Chinese color word sequence reserves relatively weak ordinal information, because each element color in the sequence normally appears in non-sequential contexts, making it ideal to test the spatial organization of sequential information that was stored in the long-term memory. This study found a reliable SNARC-like effect for Chinese color words (deciding whether the presented color word was before or after the reference color word "green"), suggesting that, without access to any quantitative information or exposure to any previous training, ordinal representation can still activate a sense of space. The results support that weak ordinal information without quantitative magnitude encoded in the long-term memory can activate spatial representation in a comparison task.
Cortical representations of communication sounds.
Heiser, Marc A; Cheung, Steven W
2008-10-01
This review summarizes recent research into cortical processing of vocalizations in animals and humans. There has been a resurgent interest in this topic accompanied by an increased number of studies using animal models with complex vocalizations and new methods in human brain imaging. Recent results from such studies are discussed. Experiments have begun to reveal the bilateral cortical fields involved in communication sound processing and the transformations of neural representations that occur among those fields. Advances have also been made in understanding the neuronal basis of interaction between developmental exposures and behavioral experiences with vocalization perception. Exposure to sounds during the developmental period produces large effects on brain responses, as do a variety of specific trained tasks in adults. Studies have also uncovered a neural link between the motor production of vocalizations and the representation of vocalizations in cortex. Parallel experiments in humans and animals are answering important questions about vocalization processing in the central nervous system. This dual approach promises to reveal microscopic, mesoscopic, and macroscopic principles of large-scale dynamic interactions between brain regions that underlie the complex phenomenon of vocalization perception. Such advances will yield a greater understanding of the causes, consequences, and treatment of disorders related to speech processing.
Visual representations of Iranian transgenders.
Shakerifar, Elhum
2011-01-01
Transsexuality in Iran has gained much attention and media coverage in the past few years, particularly in its questionable depiction as a permitted loophole for homosexuality, which is prohibited under Iran's Islamic-inspired legal system. Of course, attention in the West is also encouraged by the “shock” that sex change is available in Iran, a country that Western media and society delight in portraying as monolithically repressive. As a result, Iranian filmmakers inevitably have their own agendas, which are unsurprisingly brought into the film making process—from a desire to sell a product that will appeal to the Western market, to films that endorse specific socio-political agendas. This paper is an attempt to situate sex change and representations of sex change in Iran within a wider theoretical framework than the frequently reiterated conflation with homosexuality, and to open and engage with a wider debate concerning transsexuality in Iran, as well as to specifically analyze the representation of transsexuality, in view of its current prominent presence in media.
An accurate determination of the flux within a slab
Ganapol, B.D.; Lapenta, G.
1993-01-01
During the past decade, several articles have been written concerning accurate solutions to the monoenergetic neutron transport equation in infinite and semi-infinite geometries. The numerical formulations found in these articles were based primarily on the extensive theoretical investigations performed by the "transport greats" such as Chandrasekhar, Busbridge, Sobolev, and Ivanov, to name a few. The development of numerical solutions in infinite and semi-infinite geometries represents an example of how mathematical transport theory can be utilized to provide highly accurate and efficient numerical transport solutions. These solutions, or analytical benchmarks, are useful as "industry standards," which provide guidance to code developers and promote learning in the classroom. The high accuracy of these benchmarks is directly attributable to the rapid advancement of the state of computing and computational methods. Transport calculations that were beyond the capability of the "supercomputers" of just a few years ago are now possible at one's desk. In this paper, we again build upon the past to tackle the slab problem, which is of the next level of difficulty in comparison to infinite-media problems. The formulation is based on the monoenergetic Green's function, which is the most fundamental transport solution. This method of solution requires a fast and accurate evaluation of the Green's function, which, with today's computational power, is now readily available.
Accurate and approximate thermal rate constants for polyatomic chemical reactions
Nyman, Gunnar
2007-01-01
In favourable cases it is possible to calculate thermal rate constants for polyatomic reactions to high accuracy from first principles. Here, we discuss the use of flux correlation functions combined with the multi-configurational time-dependent Hartree (MCTDH) approach to efficiently calculate cumulative reaction probabilities and thermal rate constants for polyatomic chemical reactions. Three isotopic variants of the H2 + CH3 → CH4 + H reaction are used to illustrate the theory. There is good agreement with experimental results, although the experimental rates are generally larger than the calculated ones, which are believed to be at least as accurate as the experimental rates. Approximations allowing evaluation of the thermal rate constant above 400 K are treated. It is also noted that, for the treated reactions, transition state theory (TST) gives accurate rate constants above 500 K. TST also gives accurate results for kinetic isotope effects in cases where the mass of the transferred atom is unchanged. Due to its neglect of tunnelling, however, TST fails below 400 K if the mass of the transferred atom changes between the isotopic reactions.
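The TST estimate discussed above can be written down directly from the classical Eyring expression; a minimal sketch for a unimolecular step, with no tunnelling correction (which is precisely why TST fails at low temperatures for these reactions):

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def eyring_rate(delta_g_act, temperature):
    """Classical transition-state-theory rate constant (s^-1) for a
    unimolecular step with activation free energy delta_g_act in J/mol.
    Tunnelling is neglected, so low-temperature rates are underestimated."""
    return (KB * temperature / H) * math.exp(-delta_g_act / (R * temperature))
```

At 300 K the prefactor kB*T/h is about 6.25e12 s^-1; the barrier term then suppresses the rate exponentially, and the missing tunnelling contribution matters most exactly where that suppression is strongest.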
Cheng, Hong
2015-01-01
This unique text/reference presents a comprehensive review of the state of the art in sparse representations, modeling and learning. The book examines both the theoretical foundations and details of algorithm implementation, highlighting the practical application of compressed sensing research in visual recognition and computer vision. Topics and features: provides a thorough introduction to the fundamentals of sparse representation, modeling and learning, and the application of these techniques in visual recognition; describes sparse recovery approaches, robust and efficient sparse represen
[Children with developmental coordination disorder have difficulty with action representation].
Gabbard, Carl; Cacola, Priscila
The study of children with developmental coordination disorder (DCD) has emerged as a vibrant line of inquiry over the last two decades. The literature indicates quite clearly that children with DCD display deficits with an array of perceptual-motor and daily living skills. The movements of children with DCD are often described as clumsy and uncoordinated and lead to difficulties with performing many of the activities of daily living and sports that typically developing children perform easily. It has been hypothesized, based on limited research, that an underlying problem is a deficit in generating and/or monitoring an action representation, termed the internal modeling deficit hypothesis. According to this hypothesis, children with DCD have significant limitations in their ability to accurately generate and utilize internal models of motor planning and control. The focus of this review is on one of the methods used to examine action representation: motor imagery, which theorists argue provides a window into the process of action representation. Included are research methods and the brain structures possibly involved. In addition, a paradigm unique to this population, estimation of reachability (distance) via motor imagery, will be described.
Changing predictions, stable recognition: Children's representations of downward incline motion.
Hast, Michael; Howe, Christine
2017-11-01
Various studies to date have demonstrated that children hold ill-conceived expressed beliefs about the physical world, such as that one ball will fall faster than another because it is heavier. At the same time, they also demonstrate accurate recognition of dynamic events. How these representations relate is still unresolved. This study examined 5- to 11-year-olds' (N = 130) predictions and recognition of motion down inclines. Predictions were typically in error, matching previous work, but children largely recognized correct events as correct and rejected incorrect ones. The results also demonstrate that while predictions change with increasing age, recognition shows signs of stability. The findings provide further support for a hybrid model of object representations and argue in favour of stable core cognition existing alongside developmental changes. Statement of contribution: What is already known on this subject? Children's predictions of physical events show limitations in accuracy; their recognition of such events suggests children may use different knowledge sources in their reasoning. What does the present study add? Predictions fluctuate more strongly than recognition, suggesting stable core cognition; but recognition also shows some fluctuation, arguing for a hybrid model of knowledge representation. © 2017 The British Psychological Society.
Accurate guitar tuning by cochlear implant musicians.
Thomas Lu
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show the unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with his CI than with his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that the tuning error was larger, at ∼30 Hz, for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal-hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than by discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
Accurate estimation of indoor travel times
Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan
2014-01-01
The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times, and their respective likelihood, both for routes traveled as well as for sub-routes thereof. InTraTime allows the user to specify temporal and other query parameters, such as time-of-day, day-of-week, or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include ...
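The core mining idea, aggregating observed traversal times per route and answering queries from the aggregate, can be caricatured in a few lines. This is a toy stand-in, not InTraTime itself, which additionally learns sub-routes and conditions on query parameters such as time-of-day:

```python
from collections import defaultdict
from statistics import median

def build_travel_times(traces):
    """traces: iterable of (origin, destination, seconds) observations
    mined from position traces. Returns a dict mapping each
    (origin, destination) route to its median observed travel time."""
    observations = defaultdict(list)
    for origin, destination, seconds in traces:
        observations[(origin, destination)].append(seconds)
    # The median is robust to the occasional detour or sensor glitch.
    return {route: median(times) for route, times in observations.items()}
```

A query then reduces to a dictionary lookup; conditioning on time-of-day or traveler identity would simply extend the key.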
Sparse representation and Bayesian detection of genome copy number alterations from microarray data.
Pique-Regi, Roger; Monso-Varona, Jordi; Ortega, Antonio; Seeger, Robert C; Triche, Timothy J; Asgharzadeh, Shahab
2008-02-01
Genomic instability in cancer leads to abnormal genome copy number alterations (CNA) that are associated with the development and behavior of tumors. Advances in microarray technology have allowed for greater resolution in detection of DNA copy number changes (amplifications or deletions) across the genome. However, the increase in the number of measured signals and accompanying noise from the array probes presents a challenge for accurate and fast identification of the breakpoints that define CNA. This article proposes a novel detection technique that exploits the use of piecewise constant (PWC) vectors to represent genome copy number and sparse Bayesian learning (SBL) to detect CNA breakpoints. First, a compact linear algebra representation for the genome copy number is developed from normalized probe intensities. Second, SBL is applied and optimized to infer locations where copy number changes occur. Third, a backward elimination (BE) procedure is used to rank the inferred breakpoints; a cut-off point can be efficiently adjusted in this procedure to control the false discovery rate (FDR). The performance of our algorithm is evaluated using simulated and real genome datasets and compared to other existing techniques. Our approach achieves the highest accuracy and lowest FDR while improving computational speed by several orders of magnitude. The proposed algorithm has been developed into a stand-alone software application (GADA, Genome Alteration Detection Algorithm). http://biron.usc.edu/~piquereg/GADA
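The backward elimination step can be illustrated on a toy piecewise-constant signal: starting from a pool of candidate breakpoints, repeatedly discard the one whose removal least increases the squared fitting error. This is a sketch of the general BE idea under our own simplifications, not the GADA implementation:

```python
import numpy as np

def pwc_fit_error(y, breakpoints):
    """Sum of squared residuals of a piecewise-constant fit, where each
    breakpoint index starts a new segment fitted by its mean."""
    edges = [0] + sorted(breakpoints) + [len(y)]
    return sum(((y[a:b] - y[a:b].mean()) ** 2).sum()
               for a, b in zip(edges[:-1], edges[1:]))

def backward_eliminate(y, candidates, k):
    """Rank candidate breakpoints by backward elimination: repeatedly
    drop the breakpoint whose removal increases the fit error least,
    until k remain."""
    current = list(candidates)
    while len(current) > k:
        errors = [pwc_fit_error(y, current[:i] + current[i + 1:])
                  for i in range(len(current))]
        del current[int(np.argmin(errors))]
    return sorted(current)
```

On a signal with two true steps, spurious candidates are eliminated first because removing them costs nothing, while removing a true breakpoint forces a segment mean across a step and inflates the error.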
On accurate determination of contact angle
Concus, P.; Finn, R.
1992-01-01
Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.
Software Estimation: Developing an Accurate, Reliable Method
2011-08-01
based and size-based estimates is able to accurately plan, launch, and execute on schedule. Bob Sinclair, NAWCWD; Chris Rickets, NAWCWD; Brad Hodgins, NAWCWD. SMPSP and SMTSP are service marks of Carnegie Mellon University. 1. Rickets, Chris A., "A TSP Software Maintenance Life Cycle", CrossTalk, March 2005. 2. Koch, Alan S., "TSP Can Be the Building Blocks for CMMI", CrossTalk, March 2005. 3. Hodgins, Brad; Rickets
Highly Accurate Prediction of Jobs Runtime Classes
Reiner-Benaim, Anat; Grabarnick, Anna; Shmueli, Edi
2016-01-01
Separating short jobs from long ones is a known technique for improving scheduling performance. In this paper we describe a method we developed for accurately predicting the runtime classes of jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions in order to train a CART classifier to provide the prediction. The threshold that separates the short jobs from the long jobs is determined during the ev...
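The mixture-of-Gaussians idea can be sketched directly: fit two components to the runtimes with EM and take the point where the "long" component begins to dominate as the short/long cutoff. This is a generic illustration with assumed parameters, not the paper's pipeline (which feeds the mixture into a CART classifier):

```python
import numpy as np

def two_gaussian_threshold(x, iters=200):
    """Fit a two-component 1-D Gaussian mixture with EM and return the
    point between the means where the higher-mean ('long') component's
    density first dominates -- a usable short/long cutoff."""
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()]) + 1e-9
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted parameter updates
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-9
    lo, hi = sorted(mu)
    grid = np.linspace(lo, hi, 1000)
    d = pi * np.exp(-0.5 * ((grid[:, None] - mu) / sigma) ** 2) / sigma
    long_idx = int(np.argmax(mu))
    return grid[np.argmax(d[:, long_idx] >= d[:, 1 - long_idx])]
```

In practice runtimes are heavy-tailed, so one would typically fit the mixture to log-runtimes; the raw-scale version here is only for illustration.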
Accurate multiplicity scaling in isotopically conjugate reactions
Golokhvastov, A.I.
1989-01-01
The generation of accurate scaling of multiplicity distributions is presented. The distributions of π⁻ mesons (negative particles) and π⁺ mesons in different nucleon-nucleon interactions (PP, NP and NN) are described by the same universal function Ψ(z) and the same energy dependence of the scale parameter, which determines the stretching factor for the unit function Ψ(z) to obtain the desired multiplicity distribution. 29 refs.; 6 figs
Mental models accurately predict emotion transitions
Thornton, Mark A.; Tamir, Diana I.
2017-01-01
Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373
Braid group representation on quantum computation
Aziz, Ryan Kasyfil, E-mail: kasyfilryan@gmail.com [Department of Computational Sciences, Bandung Institute of Technology (Indonesia); Muchtadi-Alamsyah, Intan, E-mail: ntan@math.itb.ac.id [Algebra Research Group, Bandung Institute of Technology (Indonesia)
2015-09-30
There have been many recent studies of topological representations of quantum computation. One diagrammatic representation of quantum computation uses ZX-calculus. In this paper we construct a diagrammatic scheme for dense coding. We also prove that the ZX-calculus diagram of a maximally entangled state satisfies the Yang-Baxter equation, and therefore we can construct a braid group representation on the set of maximally entangled states.
Iterative Adaptive Sampling For Accurate Direct Illumination
Donikian, Michael
2004-01-01
This thesis introduces a new multipass algorithm, Iterative Adaptive Sampling, for efficiently computing the direct illumination in scenes with many lights, including area lights that cause realistic soft shadows...
Knowledge Representation in Travelling Texts
Mousten, Birthe; Locmele, Gunta
2014-01-01
Today, information travels fast. Texts travel, too. In a corporate context, the question is how to manage which knowledge elements should travel to a new language area or market and in which form? The decision to let knowledge elements travel or not travel highly depends on the limitation...... and the purpose of the text in a new context as well as on predefined parameters for text travel. For texts used in marketing and in technology, the question is whether culture-bound knowledge representation should be domesticated or kept as foreign elements, or should be mirrored or moulded—or should not travel...... at all! When should semantic and pragmatic elements in a text be replaced and by which other elements? The empirical basis of our work is marketing and technical texts in English, which travel into the Latvian and Danish markets, respectively....
Social representations of climate change
BOY, D.
2013-01-01
Each year since 2000, the French ADEME (Agency for Environment and Energy Management) has conducted a survey on the social representations of the greenhouse effect and global warming. This survey is administered by telephone to a representative sample of the French population. The information gathered in the database can answer a series of basic questions concerning public perception in this area. What do the concepts of 'greenhouse effect' and 'global warming' mean to the public? To what extent do people think there is a consensus among scientists explaining these phenomena? Is the responsibility of human action clearly established? What kinds of solutions, based on public regulation or private initiative, can help to remedy the situation? Finally, what were the major changes in public opinion over this 12-year period? (author)
Sparse Representations of Hyperspectral Images
Swanson, Robin J.
2015-01-01
Hyperspectral image data has long been an important tool for many areas of science. The addition of spectral data yields significant improvements in areas such as object and image classification, chemical and mineral composition detection, and astronomy. Traditional capture methods for hyperspectral data often require each wavelength to be captured individually, or sacrifice spatial resolution. Recently there have been significant improvements in snapshot hyperspectral capture using, in particular, compressed sensing methods. As we move to a compressed sensing image formation model, the need for strong image priors to shape our reconstruction, as well as for sparse bases, becomes more important. Here we compare several methods for representing hyperspectral images, including learned three-dimensional dictionaries, sparse convolutional coding, and decomposable nonlocal tensor dictionaries. Additionally, we further explore their parameter space to identify which parameters provide the most faithful and sparse representations.
Representations of affine Hecke algebras
Xi, Nanhua
1994-01-01
Kazhdan and Lusztig classified the simple modules of an affine Hecke algebra H_q (q ∈ C*) provided that q is not a root of 1 (Invent. Math. 1987). Ginzburg had some very interesting work on affine Hecke algebras. Combining these results, simple H_q-modules can be classified provided that the order of q is not too small. These Lecture Notes of N. Xi show that the classification of simple H_q-modules is essentially different from the general case when q is a root of 1 of certain orders. In addition, the based rings of affine Weyl groups are shown to be of interest in understanding irreducible representations of affine Hecke algebras. Basic knowledge of abstract algebra is enough to read one third of the book. Some knowledge of K-theory, algebraic groups, and Kazhdan-Lusztig cells of Coxeter groups is useful for the rest.
Standardization of beam line representations
Carey, David C.
1998-01-01
Standardization of beam line representations means that a single set of data can be used in many situations to represent a beam line. This set of data should be the same no matter what the program to be run or the calculation to be made. We have concerned ourselves with three types of standardization: (1) The same set of data should be usable by different programs. (2) The inclusion of other items in the data, such as calculations to be done, units to be used, or preliminary specifications, should be in a notation similar to the lattice specification. (3) A single set of data should be used to represent a given beam line, no matter what is being modified or calculated. The specifics of what is to be modified or calculated can be edited into the data as part of the calculation. These three requirements all have aspects not previously discussed in a public forum. Implementations into TRANSPORT will be discussed
Style representation in design grammars
Ahmad, Sumbul; Chase, Scott Curland
2012-01-01
The concept of style is relevant for both the analysis and synthesis of designs. New styles are often formed by the adaptation of previous ones based on changes in design criteria and context. A formal characterization of style is given by shape grammars, which describe the compositional rules...... underlying a set of designs. Stylistic change can be modelled by grammar transformations, which allow the transformation of the structure and vocabulary of a grammar that is used to describe a particular style. In order for grammars to be useful beyond a single application, they should have the capability...... to be transformed according to changing design style needs. Issues of formalizing stylistic change necessitate a lucid and formal definition of style in the design language generated by a grammar. Furthermore, a significant aspect of the definition of style is the representation of aesthetic qualities attributed...
Representation theory of lattice current algebras
Alekseev, A.Yu.; Eidgenoessische Technische Hochschule, Zurich; Faddeev, L.D.; Froehlich, L.D.; Schomerus, V.; Kyoto Univ.
1996-04-01
Lattice current algebras were introduced as a regularization of the left- and right-moving degrees of freedom in the WZNW model. They provide examples of lattice theories with a local quantum symmetry U_q(G). Their representation theory is studied in detail. In particular, we construct all irreducible representations along with a lattice analogue of the fusion product for representations of the lattice current algebra. It is shown that for an arbitrary number of lattice sites, the representation categories of the lattice current algebras agree with their continuum counterparts. (orig.)
Local normality properties of some infrared representations
Doplicher, S.; Spera, M.
1983-01-01
We consider the positive energy representations of the algebra of quasilocal observables for the free massless Majorana field described in preceding papers. We show that by an appropriate choice of the (partially) occupied one-particle modes we can find irreducible, type II_∞ or III_λ representations in this class which are unitarily equivalent to the vacuum representation when restricted to any forward light cone and disjoint from it when restricted to any backward light cone, or conversely. We give an elementary explicit proof of local normality of each representation in the above class. (orig.)
Distorted representation in visual tourism research
Jensen, Martin Trandberg
2016-01-01
Tourism research has recently been informed by non-representational theories to highlight the socio-material, embodied and heterogeneous composition of tourist experiences. These advances have contributed to further reflexivity and called for novel ways to animate representations...... how photographic materialities, performativities and sensations contribute to new tourism knowledges. The paper exemplifies distorted representation through three impressionistic tales derived from ethnographic research on the European rail travel phenomenon: interrail. While highlighting the potential of distorted representation, the paper posits a cautionary note in regards to the influential role of academic journals in determining the qualities of visual data.
Joint sparse representation for robust multimodal biometrics recognition.
Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama
2014-01-01
Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing the identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we also kernelize the algorithm to handle nonlinearity in data. The optimization problem is solved using an efficient alternative direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.
Sparse Representation Denoising for Radar High Resolution Range Profiling
Min Li
2014-01-01
Radar high resolution range profiles have attracted considerable attention in radar automatic target recognition. In practice, the radar return is usually contaminated by noise, which results in profile distortion and recognition performance degradation. To deal with this problem, in this paper a novel denoising method based on sparse representation is proposed to remove additive white Gaussian noise. The return is sparsely described in a redundant Fourier dictionary, and the denoising problem is cast as a sparse representation model. The noise level of the return, which is crucial to denoising performance but often unknown, is estimated by applying a subspace method to the sliding subsequence correlation matrix. The sliding window process enables noise level estimation using only one observation sequence, not only guaranteeing estimation efficiency but also avoiding the influence of profile time-shift sensitivity. Experimental results show that the proposed method can effectively improve the signal-to-noise ratio of the return, leading to a high-quality profile.
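The core intuition, sparsity of the return in a Fourier dictionary, can be illustrated with plain FFT hard-thresholding: coefficients below a noise-dependent threshold are zeroed. This is a deliberately simplified stand-in for the paper's model (orthonormal FFT instead of a redundant dictionary, and the noise level is given rather than estimated from the subsequence correlation matrix):

```python
import numpy as np

def fourier_denoise(signal, noise_sigma):
    """Suppress additive white Gaussian noise by hard-thresholding the
    signal's Fourier coefficients."""
    n = len(signal)
    coeffs = np.fft.fft(signal)
    # Universal threshold sigma*sqrt(2 log n), scaled by sqrt(n) because
    # each FFT coefficient of white noise has standard deviation
    # sigma*sqrt(n) under numpy's unnormalized transform.
    tau = noise_sigma * np.sqrt(2 * np.log(n) * n)
    coeffs[np.abs(coeffs) < tau] = 0
    return np.fft.ifft(coeffs).real
```

For a signal dominated by a few strong spectral lines, the lines survive the threshold while the noise floor is removed, which is exactly the regime of a sparse high-resolution range profile.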
Beatriz Diuk
2013-08-01
The aim of this study was to examine the relationships among vocabulary knowledge, phonological representations and phonological sensitivity in 80 Spanish-speaking preschool children from middle- and low-SES families. Significant social class differences were obtained on all tasks except syllable matching. Regression analyses were carried out to test the predictive power of vocabulary knowledge and accuracy of phonological representations on the phonological sensitivity measures. Receptive vocabulary predicted rhyme identification. Syllable matching was predicted by a task tapping the accuracy of phonological representations. The fact that rhyme identification was predicted by vocabulary knowledge but syllable matching was predicted by a measure tapping the accuracy of phonological representations in both groups suggests that early lexical development sets the stage for the development of the lower levels of phonological sensitivity, but identification of smaller units requires more accurate and segmented phonological representations.
Wavelet representation of the nuclear dynamics
Jouault, B.; Sebille, F.; De La Mota, V.
1997-01-01
The study of transport phenomena in nuclear matter is addressed in a new approach based on wavelet theory and the projection methods of statistical physics. The advantage of this framework is to optimize the representation spaces and the numerical treatment, which makes it possible to enlarge the spectrum of physical processes taken into account in order to preserve some important quantum information. At the same time, this approach is more efficient than the usual solution schemes and mathematical formulations of the equations based on usual concepts. The application of this methodology to the study of physical phenomena related to heavy-ion collisions at intermediate energies has resulted in a model named DYWAN (DYnamical WAvelets in Nuclei). The results obtained with DYWAN for central collisions in the Ca + Ca system at three different beam energies are reported. These are in agreement with experimental results, since a fusion process is observed at 30 MeV, a binary reaction at 50 MeV, and a kind of explosion of the system at 90 MeV.
Furman, Wyndol; Collibee, Charlene
2018-01-01
This study examined how representations of parent-child relationships, friendships, and past romantic relationships are related to subsequent romantic representations. Two-hundred 10th graders (100 female; M age = 15.87 years) from diverse neighborhoods in a Western U.S. city were administered questionnaires and were interviewed to assess avoidant and anxious representations of their relationships with parents, friends, and romantic partners. Participants then completed similar questionnaires and interviews about their romantic representations six more times over the next 7.5 years. Growth curve analyses revealed that representations of relationships with parents, friends, and romantic partners each uniquely predicted subsequent romantic representations across development. Consistent with attachment and behavioral systems theory, representations of romantic relationships are revised by representations and experiences in other relationships. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.
Sergeev, Alexey; Herman, Michael F.
2006-01-01
The behavior of an initial value representation surface hopping wave function is examined. Since this method is an initial value representation for the semiclassical solution of the time independent Schroedinger equation for nonadiabatic problems, it has computational advantages over the primitive surface hopping wave function. The primitive wave function has been shown to provide transition probabilities that accurately compare with quantum results for model problems. The analysis presented in this work shows that the multistate initial value representation surface hopping wave function should approach the primitive result in asymptotic regions and provide transition probabilities with the same level of accuracy for scattering problems as the primitive method
LGBT Representations on Facebook : Representations of the Self and the Content
Chu, Yawen
2017-01-01
The topic of LGBT rights has been increasingly discussed and debated over recent years. More and more scholars show interest in the field of LGBT representations in media. However, not many studies have involved LGBT representations in social media. This paper explores LGBT representations on Facebook by analysing posts on an open page and in a private group, including both representations of the self as the identity of sexual minorities, content that is displayed on Facebook and the simila...
Robust and accurate vectorization of line drawings.
Hilaire, Xavier; Tombre, Karl
2006-06-01
This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
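Segmentation by random sampling can be sketched with a RANSAC-style line fit: repeatedly draw two points, count how many points lie within a tolerance of the induced line, and keep the largest consensus set. This mirrors the spirit of the random-sampling step only; the paper's actual segmentation operates on skeleton pixels and computes feasibility domains analytically:

```python
import numpy as np

def ransac_line(points, iters=500, tol=0.05, seed=0):
    """Fit a line to 2-D points by random sampling, tolerating outliers.

    points: (N, 2) array.  Returns (slope, intercept, inlier_mask).
    """
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(*d)
        if norm == 0:
            continue
        # perpendicular distance of every point to the line through p, q
        normal = np.array([-d[1], d[0]]) / norm
        dist = np.abs((points - p) @ normal)
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # least-squares refit on the winning consensus set
    x, y = points[best_inliers].T
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept, best_inliers
```

The random-sampling structure is what gives the paper's method its robustness guarantee: a correct primitive is found as soon as one sample avoids the noise points.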
The first accurate description of an aurora
Schröder, Wilfried
2006-12-01
As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting glimpse into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.
Accurate Charge Densities from Powder Diffraction
Bindzus, Niels; Wahlberg, Nanna; Becker, Jacob
Synchrotron powder X-ray diffraction has in recent years advanced to a level where it has become realistic to probe extremely subtle electronic features. Compared to single-crystal diffraction, it may be superior for simple, high-symmetry crystals owing to negligible extinction effects and minimal...... peak overlap. Additionally, it offers the opportunity for collecting data on a single scale. For charge density studies, the critical task is to recover accurate and bias-free structure factors from the diffraction pattern. This is the focal point of the present study, scrutinizing the performance
Arbitrarily accurate twin composite π-pulse sequences
Torosov, Boyan T.; Vitanov, Nikolay V.
2018-04-01
We present three classes of symmetric broadband composite pulse sequences. The composite phases are given by analytic formulas (rational fractions of π) valid for any number of constituent pulses. The transition probability is expressed by simple analytic formulas, and the order of pulse-area error compensation grows linearly with the number of pulses. Therefore, any desired compensation order can be produced by an appropriate composite sequence; in this sense, they are arbitrarily accurate. These composite pulses perform as well as or better than previously published ones. Moreover, the current sequences are more flexible, as they allow total pulse areas of arbitrary integer multiples of π.
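The machinery behind such sequences is compact: each constituent pulse contributes an SU(2) propagator with an adjustable phase, and the sequence's transition probability is read off the product matrix. The sketch below demonstrates error compensation on the classic 90x-180y-90x composite π pulse (a well-known broadband sequence used here purely as an illustration; it is not one of the twin sequences introduced in the paper):

```python
import numpy as np

def pulse(area, phase):
    """SU(2) propagator of a resonant pulse with given area and phase."""
    c, s = np.cos(area / 2), np.sin(area / 2)
    return np.array([[c, -1j * np.exp(1j * phase) * s],
                     [-1j * np.exp(-1j * phase) * s, c]])

def transition_probability(areas, phases):
    """|0> -> |1> probability after applying the pulses in order."""
    u = np.eye(2, dtype=complex)
    for a, p in zip(areas, phases):
        u = pulse(a, p) @ u
    return abs(u[1, 0]) ** 2
```

For a 10% pulse-area error the single π pulse loses about 2.4% of the transition probability, while the 90x-180y-90x composite loses only about 0.06%: the quadratic error term has been promoted to quartic, which is exactly the kind of compensation-order gain the composite phases buy.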
Systematization of Accurate Discrete Optimization Methods
V. A. Ovchinnikov
2015-01-01
The object of study of this paper is accurate methods for solving combinatorial optimization problems of structural synthesis. The aim of the work is to systematize the exact methods of discrete optimization and define their applicability to solving practical problems. The article presents the analysis, generalization and systematization of classical methods and algorithms described in the educational and scientific literature. As a result of this research, a systematic presentation of combinatorial methods for discrete optimization described in various sources is given, their capabilities are described, and the properties of the tasks to be solved using the appropriate methods are specified.
Improving Representational Competence with Concrete Models
Stieff, Mike; Scopelitis, Stephanie; Lira, Matthew E.; DeSutter, Dane
2016-01-01
Representational competence is a primary contributor to student learning in science, technology, engineering, and math (STEM) disciplines and an optimal target for instruction at all educational levels. We describe the design and implementation of a learning activity that uses concrete models to improve students' representational competence and…
Numerical Magnitude Representations Influence Arithmetic Learning
Booth, Julie L.; Siegler, Robert S.
2008-01-01
This study examined whether the quality of first graders' (mean age = 7.2 years) numerical magnitude representations is correlated with, predictive of, and causally related to their arithmetic learning. The children's pretest numerical magnitude representations were found to be correlated with their pretest arithmetic knowledge and to be…
Representations of the Magnitudes of Fractions
Schneider, Michael; Siegler, Robert S.
2010-01-01
We tested whether adults can use integrated, analog, magnitude representations to compare the values of fractions. The only previous study on this question concluded that even college students cannot form such representations and instead compare fraction magnitudes by representing numerators and denominators as separate whole numbers. However,…
Drawings as Representations of Children's Conceptions
Ehrlen, Karin
2009-01-01
Drawings are often used to obtain an idea of children's conceptions. Doing so takes for granted an unambiguous relation between conceptions and their representations in drawings. This study was undertaken to gain knowledge of the relation between children's conceptions and their representation of these conceptions in drawings. A theory of…
Dynamic representations on the interactive whiteboard
van der Meij, Hans; van der Meij, Jan; de Vries, Erica; Scheiter, Katharina
2012-01-01
In this study we assessed whether presenting dynamic representations on an IWB would lead to better learning gains compared to presenting static representations. Participants were 7-8 year old primary school children learning about views (N = 151) and the water cycle (N = 182). The results showed
An Axiomatic Representation of System Dynamics
Baianu, I
2004-01-01
An axiomatic representation of system dynamics is introduced in terms of categories, functors, organismal supercategories, limits and colimits of diagrams. Specific examples are considered in Complex Systems Biology, such as ribosome biogenesis and Hormonal Control in human subjects. "Fuzzy" Relational Structures are also proposed for flexible representations of biological system dynamics and organization.
Connectivity in the regular polytope representation
Thompson, R.J.; Van Oosterom, P.J.M.
2009-01-01
In order to be able to draw inferences about real world phenomena from a representation expressed in a digital computer, it is essential that the representation should have a rigorously correct algebraic structure. It is also desirable that the underlying algebra be familiar, and provide a close
A Distributional Representation Model For Collaborative Filtering
Junlin, Zhang; Heng, Cai; Tongwen, Huang; Huiping, Xue
2015-01-01
In this paper, we propose a very concise deep learning approach to collaborative filtering that jointly models distributional representations for users and items. The proposed framework obtains better performance than current state-of-the-art algorithms, making the distributional representation model a promising direction for further research in collaborative filtering.
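The idea of jointly learned distributional representations for users and items can be conveyed with a minimal latent-factor sketch (plain SGD matrix factorization; a simplification of ours, as the paper proposes a deeper model, and all parameter values here are illustrative):

```python
import numpy as np

def factorize(ratings, n_users, n_items, dim=8, lr=0.02, reg=0.01,
              epochs=2000, seed=0):
    """Learn dense user and item vectors from (user, item, rating)
    triples by SGD on squared error with L2 regularization."""
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(n_users, dim))   # user vectors
    Q = rng.normal(scale=0.1, size=(n_items, dim))   # item vectors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q
```

After training, the dot product P[u] @ Q[i] predicts user u's rating of item i, and the learned vectors play the role of distributional representations: similar users and similar items end up close in the latent space.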
Freeform surface descriptions. Part I: Mathematical representations
Broemel, Anika; Lippmann, Uwe; Gross, Herbert
2017-10-01
Optical systems can benefit strongly from freeform surfaces; however, the choice of the right surface representation is not trivial and many aspects must be considered. In this work, we discuss the general approach of classical, globally defined representations, as well as the basic mathematics and properties of the most commonly used descriptions, and present a new description developed by us for describing freeform surfaces.
Bridge: Intelligent Tutoring with Intermediate Representations
1988-05-01
Research and Development Center and Psychology Department, University of Pittsburgh, Pittsburgh, PA 15260. The Artificial Intelligence and Psychology... problem never introduces more than one unfamiliar plan. Intelligent Tutoring with Intermediate Representations - Bonar and Cunningham... You must have a... The requirements are specified at four different levels, corresponding to
Phase space representations for spin 3/2
Polubarinov, I.V.
1991-01-01
General properties of spin matrices and density matrices are considered for any spin s. For spin 3/2, phase space representations are constructed. Representations, similar to the Bell one, for the correlator of projections of two spins 3/2 in the singlet state are found. Quantum analogs of the Bell inequality are obtained. 14 refs.
Geometric Representations for Discrete Fourier Transforms
Cambell, C. W.
1986-01-01
Simple geometric representations show the symmetry and periodicity of discrete Fourier transforms (DFT's). They help in visualizing requirements for storing and manipulating transform values in computations. The representations are useful in any number of dimensions, but particularly in the one-, two-, and three-dimensional cases often encountered in practice.
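The symmetry and periodicity the abstract refers to are easy to verify numerically; the following sketch is our own illustration (not from the article) using NumPy's FFT:

```python
import numpy as np

# For a real-valued sequence x of length N, the DFT X satisfies
# conjugate symmetry X[N - k] == conj(X[k]), and the DFT sum is
# periodic in the index: evaluating at k + N reproduces X[k].
rng = np.random.default_rng(0)
N = 8
x = rng.standard_normal(N)          # real input
X = np.fft.fft(x)

for k in range(1, N):
    # conjugate symmetry of the DFT of a real sequence
    assert np.allclose(X[N - k], np.conj(X[k]))

def dft_at(x, k):
    """Evaluate the DFT sum directly at an arbitrary (integer) index k."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * k * n / len(x)))

# Periodicity: index k + N gives the same value as index k
assert np.allclose(dft_at(x, 3), dft_at(x, 3 + N))
```

Conjugate symmetry is what lets real-input transforms store only about half of the output values, the storage consideration the abstract alludes to.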
Klein Topological Field Theories from Group Representations
Sergey A. Loktev
2011-07-01
We show that any complex (respectively, real) representation of a finite group naturally generates an open-closed (respectively, Klein) topological field theory over the complex numbers. We relate the 1-point correlator for the projective plane in this theory to the Frobenius-Schur indicator of the representation. We relate any complex simple Klein TFT to a real division ring.
Minimal representations and Freudenthal triple systems
Olive, D.
2004-01-01
Unitary representations of noncompact Lie groups have long been sought in physics. The first nice concrete construction was found by Dirac in connection with the anti-de Sitter group. Some subsequent generalizations will be described, in particular the minimal representation thought to be relevant to realising duality in supergravity superstring theories. A relation to Freudenthal triple systems will be described. (author)
Parts, Cavities, and Object Representation in Infancy
Hayden, Angela; Bhatt, Ramesh S.; Kangas, Ashley; Zieber, Nicole
2011-01-01
Part representation is not only critical to object perception but also plays a key role in a number of basic visual cognition functions, such as figure-ground segregation, allocation of attention, and memory for shapes. Yet, virtually nothing is known about the development of part representation. If parts are fundamental components of object shape…
Constitutionalising the Right to Legal Representation at CCMA ...
Recently, the issue of legal representation at internal disciplinary hearings and CCMA arbitrations has been a fervent topic of labour law discourse in South Africa. While the courts have consistently accepted the common law principle that there is no absolute right to legal representation at tribunals other than courts of law, ...
Student Teachers' Knowledge about Chemical Representations
Taskin, Vahide; Bernholt, Sascha; Parchmann, Ilka
2017-01-01
Chemical representations serve as a communication tool not only in exchanges between scientists but also in chemistry lessons. The goals of the present study were to measure the extent of student teachers' knowledge about chemical representations, focusing on chemical formulae and structures in particular, and to explore which factors related to…
Algebraic and analytic methods in representation theory
Schlichtkrull, Henrik
1996-01-01
This book is a compilation of several works from well-recognized figures in the field of Representation Theory. The presentation of the topic is unique in offering several different points of view, which should make the book very useful to students and experts alike. It presents several different points of view on key topics in representation theory, from internationally known experts in the field.
On Regression Representations of Stochastic Processes
RUSCHENDORF, L; DEVALK
We construct a.s. nonlinear regression representations of general stochastic processes (X(n))n is-an-element-of N. As a consequence we obtain in particular special regression representations of Markov chains and of certain m-dependent sequences. For m-dependent sequences we obtain a constructive
Accurate shear measurement with faint sources
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work in this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of the galaxy and the PSF. The remaining major source of error is source Poisson noise, due to the finiteness of the source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.
How Accurately can we Calculate Thermal Systems?
Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A
2004-01-01
I would like to determine how accurately a variety of neutron transport code packages (codes and cross section libraries) can calculate simple integral parameters, such as k-eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore, rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, this will eventually lead to improvements in both our codes and the thermal scattering models that they use. In order to accomplish this, I propose a number of extremely simple systems involving thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.
Chocholoušová, Jana; Feig, M.
2006-01-01
Vol. 27, No. 6 (2006), pp. 719-729. ISSN 0192-8651. Keywords: molecular surface; generalized Born formalisms; molecular dynamics simulations. Subject RIV: CC - Organic Chemistry. Impact factor: 4.893, year: 2006
Fan Yang
2015-07-01
Normally, polarimetric SAR classification is a high-dimensional nonlinear mapping problem. In the realm of pattern recognition, sparse representation is a very efficacious and powerful approach. As classical descriptors of polarimetric SAR, covariance and coherency matrices are Hermitian semidefinite and form a Riemannian manifold. Conventional Euclidean metrics are not suitable for a Riemannian manifold, and hence, normal sparse representation classification cannot be applied to polarimetric SAR directly. This paper proposes a new land cover classification approach for polarimetric SAR. There are two principal novelties in this paper. First, a Stein kernel on a Riemannian manifold, instead of Euclidean metrics, combined with sparse representation, is employed for polarimetric SAR land cover classification. This approach is named Stein-sparse representation-based classification (Stein-SRC). Second, using simultaneous sparse representation and reasonable assumptions about the correlation of representations among different frequency bands, Stein-SRC is generalized to simultaneous Stein-SRC for multi-frequency polarimetric SAR classification. These classifiers are assessed using polarimetric SAR images from the Airborne Synthetic Aperture Radar (AIRSAR) sensor of the Jet Propulsion Laboratory (JPL) and the Electromagnetics Institute Synthetic Aperture Radar (EMISAR) sensor of the Technical University of Denmark (DTU). Experiments on single-band and multi-band data both show that these approaches acquire more accurate classification results in comparison to many conventional and advanced classifiers.
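The Stein kernel mentioned in this abstract is built on the symmetric Stein (S-) divergence between symmetric positive-definite matrices; the sketch below follows the standard textbook definition (function names are ours, not the paper's):

```python
import numpy as np

def stein_divergence(A, B):
    """Symmetric Stein (S-) divergence between SPD matrices A and B:
    S(A, B) = log det((A + B)/2) - 0.5 * (log det A + log det B)."""
    _, ld_mid = np.linalg.slogdet((A + B) / 2.0)
    _, ld_a = np.linalg.slogdet(A)
    _, ld_b = np.linalg.slogdet(B)
    return ld_mid - 0.5 * (ld_a + ld_b)

def stein_kernel(A, B, sigma=1.0):
    """Stein kernel k(A, B) = exp(-sigma * S(A, B))."""
    return np.exp(-sigma * stein_divergence(A, B))

# Identical matrices have zero divergence and kernel value 1.
C = np.array([[2.0, 0.5], [0.5, 1.0]])
assert np.isclose(stein_divergence(C, C), 0.0)
assert np.isclose(stein_kernel(C, C), 1.0)
```

Because the divergence respects the manifold geometry of covariance/coherency matrices, the resulting kernel can replace a Euclidean distance in sparse-coding classifiers of the kind the paper describes.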
Exploration of solids based on representation systems
Publio Suárez Sotomonte
2011-01-01
This article reports some of the findings of a research project implemented as a teaching strategy to generate environments for the learning of Platonic and Archimedean solids, with a group of eighth-grade students. The strategy was based on the meaningful learning approach and on the use of representation systems, using the ontosemiotic approach in mathematical education as a framework for the construction of mathematical concepts. This geometry teaching strategy adopts the stages of exploration, representation-modeling, formal construction, and study of applications. It uses concrete, physical, and tangible materials for origami, die making, and structures for the construction of three-dimensional solids, considered external tangible solid representation systems, as well as computer-based educational tools to design dynamic geometry environments as intangible external representation systems. These strategies support both the imagination and internal systems of representation, fundamental to the comprehension of geometry concepts.
Operator representation for effective realistic interactions
Weber, Dennis; Feldmeier, Hans; Neff, Thomas [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany)
2013-07-01
We present a method to derive an operator representation from the partial wave matrix elements of effective realistic nucleon-nucleon potentials. This method makes it possible to employ modern effective interactions, which are mostly given in matrix element representation, also in nuclear many-body methods that explicitly require the operator representation, for example ''Fermionic Molecular Dynamics'' (FMD). We present results for the operator representation of effective interactions obtained from the Argonne V18 potential with the ''Unitary Correlation Operator Method'' (UCOM) and the ''Similarity Renormalization Group'' (SRG). Moreover, the operator representation gives better insight into the nonlocal structure of the potential: while the UCOM-transformed potential only shows a quadratic momentum dependence, the momentum dependence of SRG-transformed potentials is beyond such a simple polynomial form.
ABJM Wilson loops in arbitrary representations
Hatsuda, Yasuyuki [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Tokyo Institute of Technology (Japan). Dept. of Physics; Honda, Masazumi [High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki (Japan); Moriyama, Sanefumi [Nagoya Univ. (Japan). Kobayashi Maskawa Inst. and Graduate School of Mathematics; Okuyama, Kazumi [Shinshu Univ., Matsumoto, Nagano (Japan). Dept. of Physics
2013-06-15
We study vacuum expectation values (VEVs) of circular half BPS Wilson loops in arbitrary representations in ABJM theory. We find that those in hook representations are reduced to elementary integrations thanks to the Fermi gas formalism, which are accessible from the numerical studies similar to the partition function in the previous studies. For non-hook representations, we show that the VEVs in the grand canonical formalism can be exactly expressed as determinants of those in the hook representations. Using these facts, we can study the instanton effects of the VEVs in various representations. Our results are consistent with the worldsheet instanton effects studied from the topological string and a prescription to include the membrane instanton effects by shifting the chemical potential, which has been successful for the partition function.
Locally analytic vectors in representations of locally p-adic analytic groups
Emerton, Matthew J
2017-01-01
The goal of this memoir is to provide the foundations for the locally analytic representation theory that is required in three of the author's other papers on this topic. In the course of writing those papers the author found it useful to adopt a particular point of view on locally analytic representation theory: namely, regarding a locally analytic representation as being the inductive limit of its subspaces of analytic vectors (of various "radii of analyticity"). The author uses the analysis of these subspaces as one of the basic tools in his study of such representations. Thus in this memoir he presents a development of locally analytic representation theory built around this point of view. The author has made a deliberate effort to keep the exposition reasonably self-contained and hopes that this will be of some benefit to the reader.
When data representation compromises data security
Simonsen, Eivind Ortind; Dahl, Mads Ronald
The workflow of transforming data into informative representations makes extensive use of computers and software. Scientists have a long tradition of producing publications that include tables and graphs as data representations.... These representations can be used for multiple purposes, such as publications in journals, teaching, and conference material. But when created, stored, and distributed in digital form, there is a risk of compromising data security. Data beyond those used specifically to create the representation can be included... on the internet over many years? A new legislation proposed in 2012 by the European Commission on the protection of personal data will be implemented from 2015. The new law will impose sanction options ranging from a warning to a fine of up to 100,000,000 EUR. We argue that this new law will lead to especially...
Haverd, Vanessa; Cuntz, Matthias; Nieradzik, Lars P.; Harman, Ian N.
2016-09-01
CABLE is a global land surface model, which has been used extensively in offline and coupled simulations. While CABLE performs well in comparison with other land surface models, results are impacted by decoupling of transpiration and photosynthesis fluxes under drying soil conditions, often leading to implausibly high water use efficiencies. Here, we present a solution to this problem, ensuring that modelled transpiration is always consistent with modelled photosynthesis, while introducing a parsimonious single-parameter drought response function which is coupled to root water uptake. We further improve CABLE's simulation of coupled soil-canopy processes by introducing an alternative hydrology model with a physically accurate representation of coupled energy and water fluxes at the soil-air interface, including a more realistic formulation of transfer under atmospherically stable conditions within the canopy and in the presence of leaf litter. The effects of these model developments are assessed using data from 18 stations from the global eddy covariance FLUXNET database, selected to span a large climatic range. Marked improvements are demonstrated, with root mean squared errors for monthly latent heat fluxes and water use efficiencies being reduced by 40 %. Results highlight the important roles of deep soil moisture in mediating drought response and litter in dampening soil evaporation.
Identification of DNA-Binding Proteins Using Mixed Feature Representation Methods.
Qu, Kaiyang; Han, Ke; Wu, Song; Wang, Guohua; Wei, Leyi
2017-09-22
DNA-binding proteins play vital roles in cellular processes, such as DNA packaging, replication, transcription, regulation, and other DNA-associated activities. The current main prediction method is based on machine learning, and its accuracy mainly depends on the feature extraction method. Therefore, using an efficient feature representation method is important to enhance classification accuracy. However, existing feature representation methods cannot efficiently distinguish DNA-binding proteins from non-DNA-binding proteins. In this paper, a multi-feature representation method, which combines three feature representation methods, namely, K-Skip-N-Grams, information theory, and sequential and structural features (SSF), is used to represent the protein sequences and improve feature representation ability. In addition, the classifier is a support vector machine. The mixed-feature representation method is evaluated using 10-fold cross-validation and a test set. Feature vectors obtained from a combination of the three feature extractions show the best performance in 10-fold cross-validation, both without dimensionality reduction and with dimensionality reduction by max-relevance-max-distance. Moreover, the reduced mixed-feature method performs better than the non-reduced mixed-feature technique. The feature vectors that combine SSF and K-Skip-N-Grams show the best performance on the test set. Among these methods, mixed features exhibit superiority over the single features.
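As an illustration of one of the feature families named above, a k-skip bigram extractor (the n = 2 case) might look as follows; this is our illustrative reading of K-Skip-N-Grams, not the authors' code:

```python
from collections import Counter

def k_skip_bigrams(seq, k):
    """Count skip-bigrams: ordered residue pairs (seq[i], seq[j])
    with at most k positions skipped between them (j - i - 1 <= k)."""
    counts = Counter()
    for i in range(len(seq)):
        for j in range(i + 1, min(i + 2 + k, len(seq))):
            counts[seq[i] + seq[j]] += 1
    return counts

feats = k_skip_bigrams("MKVL", k=1)
# Adjacent pairs MK, KV, VL plus the one-skip pairs MV, KL
assert feats == Counter({"MK": 1, "KV": 1, "VL": 1, "MV": 1, "KL": 1})
```

Counts over a fixed alphabet of residue pairs can then be flattened into a fixed-length vector and fed to a support vector machine, as in the paper's classification setup.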
Multi-representation based on scientific investigation for enhancing students’ representation skills
Siswanto, J.; Susantini, E.; Jatmiko, B.
2018-03-01
This research aims to implement physics learning with multi-representation based on scientific investigation to enhance students' representation skills, especially on the magnetic field subject. The research design is one-group pretest-posttest. The research was conducted in the Department of Mathematics Education, Universitas PGRI Semarang; the sample consists of students of class 2F who take basic physics courses. The data were obtained through a representation skills test and documentation of multi-representation worksheets. The results show a normalized gain of .64, which indicates a medium improvement. The t-test (α = .05) yields p-value = .001. The learning significantly improves students' representation skills.
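The reported gain of .64 matches the conventional normalized-gain (Hake) calculation; a small sketch, where the pre/post scores are invented for illustration only:

```python
def normalized_gain(pre, post, maximum=100.0):
    """Hake's normalized gain <g> = (post - pre) / (maximum - pre)."""
    return (post - pre) / (maximum - pre)

# Hypothetical scores chosen so that g comes out to .64, the value
# reported in the abstract; .64 sits in the customary "medium"
# band (0.3 <= g < 0.7).
g = normalized_gain(pre=30.0, post=74.8)
assert abs(g - 0.64) < 1e-9
assert 0.3 <= g < 0.7
```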
Mental Representations in Art Discourse
Katja Sudec
2014-03-01
The paper starts by examining the content included in the museum environment, where I write about the types of relations that emerge in a museum or artistic setting. This is followed by an observation of a social act (socialising) and a chapter on the use of food in an artistic venue. At the end, I address art education via the format that I developed at the 6th Berlin Biennale. This is followed by an overview of the cognitive model of the fort-da game based on Freud's theory, via two discourse models. Here, I address discourse on art works in the form of a lecture or reading, where the art space is fictitiously present, and then move on to discuss discourse on art works in real, "present" art space. This is followed by a section on actions (Handlungen in German) and methods supporting the fort-da model. The last part of the article examines the issue of "mental representations", defining and explaining the function of mental representations with regard to the target audience of the blind and visually impaired.
Chemical thermodynamic representations of and
Besmann, T.M.; Lindemer, T.B.
1985-01-01
All available oxygen potential-temperature-composition data for the calcium fluorite-structure phase were retrieved from the literature and utilized in the development of a binary solid solution representation of the phase. The data and phase relations are found to be best described by a solution of [Pu4/3O2] and [PuO2] with a temperature-dependent interaction energy. The fluorite-structure phase is assumed to be represented by a combination of the binaries, and thus treated as a solution of [Pu4/3O2], [PuO2], [UO2], and either [U2O4.5] or [U3O7]. The resulting equations reproduce well the large amount of oxygen potential-temperature-composition data for the mixed oxide system, all of which were also retrieved from the literature. These models are the first that appear to display the appropriate oxygen potential-temperature-composition and phase relation behavior over the entire range of existence of the phases. (orig.)
The SPECIES and ORGANISMS Resources for Fast and Accurate Identification of Taxonomic Names in Text
Pafilis, Evangelos; Pletscher-Frankild, Sune; Fanini, Lucia
2013-01-01
The exponential growth of the biomedical literature is making the need for efficient, accurate text-mining tools increasingly clear. The identification of named biological entities in text is a central and difficult task. We have developed an efficient algorithm and implementation of a dictionary-based approach to named entity recognition, which we here use to identify names of species and other taxa in text. The tool, SPECIES, is more than an order of magnitude faster and as accurate as existing tools. The precision and recall were assessed both on an existing gold-standard corpus and on a new corpus...
Accurate frequency measurements on gyrotrons using a ''gyro-radiometer''
Rebuffi, L.
1986-08-01
Using a heterodyne system, called the ''Gyro-radiometer'', accurate frequency measurements have been carried out on VARIAN 60 GHz gyrotrons. Changing the principal tuning parameters of a gyrotron, we have detected frequency variations up to 100 MHz, ∼40 MHz frequency jumps, and smaller jumps (∼10 MHz) when mismatches in the transmission line were present. An FWHM bandwidth of 300 kHz, parasitic frequencies, and frequency drift during 100 ms pulses have also been observed. An efficient method to find a stable, high-power, long-pulse working point of a gyrotron loaded by a transmission line has been derived. In general, for any power value it is possible to find stable working conditions by tuning the principal parameters of the tube at a maximum of the emitted frequency.
An Integrative Approach to Accurate Vehicle Logo Detection
Hao Pan
2013-01-01
required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small target of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed following the visual attention mechanism of human vision. Two pre-logo detection steps, that is, vehicle region detection and small RoI segmentation, rapidly focalize a small logo target. An enhanced Adaboost algorithm, together with two types of features, Haar and HOG, is proposed to detect vehicles. An RoI that covers logos is segmented based on our prior knowledge about the logos' position relative to license plates, which can be accurately localized from frontal vehicle images. A two-stage cascade classifier proceeds with the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.
2010-01-01
After a speech by the general administrator of the CEA (Commissariat à l'Énergie Atomique) about energy efficiency as a first-rank challenge for the planet and for France, this publication proposes several contributions: a discussion of the efficiency of nuclear energy, an economic analysis of the value of R&D in the field of fourth-generation fast reactors, discussions about biofuels and the relationship between energy efficiency and economic competitiveness, and a discussion of solar photovoltaic efficiency.
The representations of Lie groups and geometric quantizations
Zhao Qiang
1998-01-01
In this paper we discuss the relation between representations of Lie groups and geometric quantizations. A series of representations of Lie groups are constructed by geometric quantization of coadjoint orbits. Particularly, all representations of compact Lie groups, holomorphic discrete series of representations and spherical representations of reductive Lie groups are constructed by geometric quantizations of elliptic and hyperbolic coadjoint orbits. (orig.)
Funnel metadynamics as accurate binding free-energy method
Limongelli, Vittorio; Bonomi, Massimiliano; Parrinello, Michele
2013-01-01
A detailed description of the events ruling ligand/protein interaction and an accurate estimation of the drug affinity to its target are of great help in speeding up drug discovery strategies. We have developed a metadynamics-based approach, named funnel metadynamics, that allows the ligand to enhance the sampling of the target binding sites and its solvated states. This method leads to an efficient characterization of the binding free-energy surface and an accurate calculation of the absolute protein–ligand binding free energy. We illustrate our protocol on two systems, benzamidine/trypsin and SC-558/cyclooxygenase 2. In both cases, the X-ray conformation has been found to be the lowest free-energy pose, and the computed protein–ligand binding free energy is in good agreement with experiments. Furthermore, funnel metadynamics unveils important information about the binding process, such as the presence of alternative binding modes and the role of waters. The results, achieved at an affordable computational cost, make funnel metadynamics a valuable method for drug discovery and for dealing with a variety of problems in chemistry, physics, and materials science. PMID:23553839
Ferrie, Christopher; Emerson, Joseph
2008-01-01
Several finite-dimensional quasi-probability representations of quantum states have been proposed to study various problems in quantum information theory and quantum foundations. These representations are often defined only on restricted dimensions and their physical significance in contexts such as drawing quantum-classical comparisons is limited by the non-uniqueness of the particular representation. Here we show how the mathematical theory of frames provides a unified formalism which accommodates all known quasi-probability representations of finite-dimensional quantum systems. Moreover, we show that any quasi-probability representation is equivalent to a frame representation and then prove that any such representation of quantum mechanics must exhibit either negativity or a deformed probability calculus. (fast track communication)
An accurate and portable solid state neutron rem meter
Oakes, T.M. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Bellinger, S.L. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Miller, W.H. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Missouri University Research Reactor, Columbia, MO (United States); Myers, E.R. [Department of Physics, University of Missouri, Kansas City, MO (United States); Fronk, R.G.; Cooper, B.W [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Sobering, T.J. [Electronics Design Laboratory, Kansas State University, KS (United States); Scott, P.R. [Department of Physics, University of Missouri, Kansas City, MO (United States); Ugorowski, P.; McGregor, D.S; Shultis, J.K. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Caruso, A.N., E-mail: carusoan@umkc.edu [Department of Physics, University of Missouri, Kansas City, MO (United States)
2013-08-11
Accurately resolving the ambient neutron dose equivalent spanning the thermal to 15 MeV energy range with a single-configuration, lightweight instrument is desirable. This paper presents the design of a portable, high-intrinsic-efficiency, and accurate neutron rem meter whose energy-dependent response is electronically adjusted to a chosen neutron dose equivalent standard. The instrument may be classified as a moderating-type neutron spectrometer, based on an adaptation of the classical Bonner sphere and position-sensitive long counter, which simultaneously counts thermalized neutrons with high-thermal-efficiency solid state neutron detectors. The use of multiple detectors and moderator arranged along an axis of symmetry (e.g., the long axis of a cylinder) with known neutron-slowing properties allows for the construction of a linear combination of responses that approximates the ambient neutron dose equivalent. Variations on the detector configuration are investigated via Monte Carlo N-Particle simulations to minimize the total instrument mass while maintaining acceptable response accuracy: a dose error of less than 15% for bare 252Cf, bare AmBe, and epi-thermal and mixed monoenergetic sources is found at less than 4.5 kg moderator mass in all studied cases. A comparison of the energy-dependent dose equivalent response and the resultant energy-dependent dose equivalent error of the present dosimeter to commercially available portable rem meters and the prior art is presented. Finally, the present design is assessed by comparing the simulated output resulting from applications of several known neutron sources and dose rates.
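The linear combination of detector responses described above can be sketched as an ordinary least-squares fit of per-detector weights to a target dose-equivalent curve; everything below (the response matrix, weights, and energy binning) is synthetic and for illustration only:

```python
import numpy as np

# Hypothetical setup: R[i, e] is the response of detector i in energy
# bin e; h[e] is the target dose-equivalent conversion curve. We seek
# weights w such that w @ R approximates h in the least-squares sense.
rng = np.random.default_rng(1)
n_det, n_e = 4, 12
R = rng.random((n_det, n_e))

# Build a target curve from known weights so the fit can be checked.
w_true = np.array([0.5, 1.0, 0.2, 0.8])
h = w_true @ R

# Solve the overdetermined system R.T @ w = h for the weights.
w, *_ = np.linalg.lstsq(R.T, h, rcond=None)
assert np.allclose(w, w_true)
```

In a real instrument the fit would be against a standard dose-equivalent conversion (not a curve manufactured from known weights), and the residual of the fit bounds the dose error of the weighted-sum readout.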
Bayesian calibration of power plant models for accurate performance prediction
Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der
2014-01-01
Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard for uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed not only to quantify various types of uncertainties in measurements and plant model parameters using measured data, but also to assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.
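A minimal sketch of the calibration idea, reduced to a single uncertain model parameter with a toy plant model and synthetic measurements. The actual study uses the full Kennedy–O'Hagan framework, including a model-discrepancy term, which this illustration omits:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy plant model (illustrative): predicted efficiency as a function of
# ambient temperature T and an uncertain calibration parameter theta.
def model(T, theta):
    return 0.55 - 0.001 * (T - 15.0) + theta

# Synthetic "measurements" from a true theta = 0.02 with sensor noise.
T_obs = np.linspace(-5, 35, 30)
theta_true, sigma = 0.02, 0.005
y_obs = model(T_obs, theta_true) + rng.normal(0, sigma, T_obs.size)

# Gaussian likelihood + flat prior -> random-walk Metropolis over theta.
def log_post(theta):
    r = y_obs - model(T_obs, theta)
    return -0.5 * np.sum((r / sigma) ** 2)

theta, lp = 0.0, log_post(0.0)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.003)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5000:])          # discard burn-in
print(f"posterior mean {post.mean():.4f} +/- {post.std():.4f}")
```

The posterior spread, rather than a single point estimate, is what propagates into uncertainty-aware performance predictions and operational decisions.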
Accurate predictions for the LHC made easy
CERN. Geneva
2014-01-01
The data recorded by the LHC experiments are of very high quality. To get the most out of the data, precise theory predictions, including uncertainty estimates, are needed to reduce theoretical bias in the experimental analyses as much as possible. Recently, significant progress has been made in Next-to-Leading Order (NLO) computations, including matching to the parton shower, that allow for these accurate, hadron-level predictions. I shall discuss one of these efforts, the MadGraph5_aMC@NLO program, which aims at the complete automation of predictions at NLO accuracy within the SM as well as New Physics theories. I’ll illustrate some of the theoretical ideas behind this program, show some selected applications to LHC physics, and describe the future plans.
Apparatus for accurately measuring high temperatures
Smith, D.D.
The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800 °C to 2700 °C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high-pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of airborne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
Naserpour, Mahin; Zapata-Rodríguez, Carlos J.
2018-01-01
Highlights: • Paraxial beams are represented in a series expansion in terms of Bessel wave functions. • The coefficients of the series expansion can be analytically determined by using the pattern in the focal plane. • In particular, Gaussian beams and apertured wave fields have been critically examined. • This representation of the wave field is adequate for scattering problems with shaped beams. - Abstract: The evaluation of vector wave fields can be accurately performed by means of diffraction integrals, differential equations and also series expansions. In this paper, a Bessel series expansion whose basis relies on the exact solution of the Helmholtz equation in cylindrical coordinates is theoretically developed for the straightforward yet accurate description of low-numerical-aperture focal waves. The validity of this approach is confirmed by explicit application to Gaussian beams and apertured focused fields in the paraxial regime. Finally, we discuss how our procedure can be favorably implemented in scattering problems.
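As a hedged illustration of this kind of expansion, using the standard Fourier–Bessel basis on a disk (which may differ in detail from the paper's exact basis), the focal-plane pattern of a Gaussian beam can be decomposed and reconstructed numerically:

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros
from scipy.integrate import trapezoid

# Fourier-Bessel expansion of a radially symmetric field on a disk of
# radius R:
#   f(r) = sum_n c_n J0(alpha_n r / R),  alpha_n = n-th zero of J0,
#   c_n  = 2 / (R^2 J1(alpha_n)^2) * int_0^R f(r) J0(alpha_n r / R) r dr
R, N = 5.0, 40
alpha = jn_zeros(0, N)
r = np.linspace(0.0, R, 2000)

# Gaussian beam profile at the focal plane, waist w0 << R so the field
# is effectively zero at the disk boundary.
w0 = 1.0
f = np.exp(-(r / w0) ** 2)

c = np.array([
    2.0 / (R**2 * j1(a) ** 2) * trapezoid(f * j0(a * r / R) * r, r)
    for a in alpha
])

# Reconstruct from the coefficients and check the expansion converges.
f_rec = sum(cn * j0(a * r / R) for cn, a in zip(c, alpha))
err = np.max(np.abs(f_rec - f))
print("max reconstruction error:", err)
```

Because the Gaussian is smooth and well confined inside the disk, a modest number of terms already reproduces the profile to high accuracy, which is the practical appeal of a series representation for shaped-beam scattering problems.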
Spatially variant morphological restoration and skeleton representation.
Bouaynaya, Nidhal; Charif-Chefchaouni, Mohammed; Schonfeld, Dan
2006-11-01
The theory of spatially variant (SV) mathematical morphology is used to extend and analyze two important image processing applications: morphological image restoration and skeleton representation of binary images. For morphological image restoration, we propose the SV alternating sequential filters and SV median filters. We establish the relation of SV median filters to the basic SV morphological operators (i.e., SV erosions and SV dilations). For skeleton representation, we present a general framework for the SV morphological skeleton representation of binary images. We study the properties of the SV morphological skeleton representation and derive conditions for its invertibility. We also develop an algorithm for the implementation of the SV morphological skeleton representation of binary images. The latter algorithm is based on the optimal construction of the SV structuring element mapping designed to minimize the cardinality of the SV morphological skeleton representation. Experimental results show the dramatic improvement in the performance of the SV morphological restoration and SV morphological skeleton representation algorithms in comparison to their translation-invariant counterparts.
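A minimal sketch of spatially variant binary morphology, assuming a square structuring element whose half-width varies per pixel. This is a simplified illustration only: the paper's adjoint dilation formally uses the transposed structuring element mapping, which this demo glosses over:

```python
import numpy as np

def sv_erode(img, radius_map):
    """Spatially variant binary erosion: at each pixel the structuring
    element is a square window whose half-width is radius_map[y, x]."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            r = int(radius_map[y, x])
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = img[y0:y1, x0:x1].min()
    return out

def sv_dilate(img, radius_map):
    """Maximum over the same spatially varying window (simplified)."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            r = int(radius_map[y, x])
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = img[y0:y1, x0:x1].max()
    return out

# Demo: a filled square, processed more aggressively on the right half.
img = np.zeros((20, 20), dtype=np.uint8)
img[4:16, 4:16] = 1
radius = np.zeros((20, 20), dtype=int)
radius[:, 10:] = 2                      # larger SE on the right side
opened = sv_dilate(sv_erode(img, radius), radius)
print("pixels before/after opening:", img.sum(), opened.sum())
```

The per-pixel window is what distinguishes the SV operators from their translation-invariant counterparts, where a single fixed structuring element is slid over the whole image.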
Interactions between visual working memory representations.
Bae, Gi-Yeul; Luck, Steven J
2017-11-01
We investigated whether the representations of different objects are maintained independently in working memory or interact with each other. Observers were shown two sequentially presented orientations and required to reproduce each orientation after a delay. The sequential presentation minimized perceptual interactions so that we could isolate interactions between memory representations per se. We found that similar orientations were repelled from each other whereas dissimilar orientations were attracted to each other. In addition, when one of the items was given greater attentional priority by means of a cue, the representation of the high-priority item was not influenced very much by the orientation of the low-priority item, but the representation of the low-priority item was strongly influenced by the orientation of the high-priority item. This indicates that attention modulates the interactions between working memory representations. In addition, errors in the reported orientations of the two objects were positively correlated under some conditions, suggesting that representations of distinct objects may become grouped together in memory. Together, these results demonstrate that working-memory representations are not independent but instead interact with each other in a manner that depends on attentional priority.
General regression and representation model for classification.
Jianjun Qian
Recently, regularized coding-based classification methods (e.g., SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of prior information (e.g., the correlations between representation residuals and representation coefficients) and specific information (the weight matrix of image pixels) to enhance classification performance. GRR uses generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.
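GRR itself is not fully specified in this abstract, but the CRC baseline it generalizes can be sketched: code the test sample over all training samples with Tikhonov (ridge) regularization, then assign the class with the smallest class-wise reconstruction residual. The data and parameter values below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def crc_classify(X_train, y_train, x, lam=0.1):
    """Collaborative-representation classification: ridge-regularized
    coding over all training samples, then class-wise residuals."""
    n = X_train.shape[1]
    # rho = argmin ||x - X rho||^2 + lam ||rho||^2  (closed form)
    rho = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n),
                          X_train.T @ x)
    residuals = {}
    for c in np.unique(y_train):
        mask = (y_train == c)
        residuals[c] = np.linalg.norm(x - X_train[:, mask] @ rho[mask])
    return min(residuals, key=residuals.get)

# Toy data: two classes living near different directions in R^20.
d, per_class = 20, 15
mu0, mu1 = np.zeros(d), np.zeros(d)
mu0[0], mu1[1] = 3.0, 3.0
X = np.column_stack([mu0[:, None] + 0.3 * rng.normal(size=(d, per_class)),
                     mu1[:, None] + 0.3 * rng.normal(size=(d, per_class))])
y = np.array([0] * per_class + [1] * per_class)

test_point = mu1 + 0.3 * rng.normal(size=d)
print("predicted class:", crc_classify(X, y, test_point))
```

GRR, as described in the abstract, would replace the isotropic ridge penalty with a generalized Tikhonov term learned from the residual correlations and add per-pixel weighting of the test sample.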
W. Dong
2016-06-01
Despite the now-ubiquitous two-dimensional (2D) maps, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation. Whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload compared to traditional symbolic 2D maps remains unknown. This study aims to explore whether photorealistic 3D representation can facilitate the processes of map reading and navigation in digital environments, using a lab-based eye tracking approach. Here we show the differences between symbolic 2D maps and photorealistic 3D representations based on users’ eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective, less efficient and required a higher cognitive workload than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. The empirical results can be helpful for improving the usability of pedestrian navigation maps in future designs.
Artistic Representation with Pulsed Holography
Ishii, S
2013-01-01
This thesis describes artistic representation through pulsed holography. One of the prevalent practical problems in making holograms is object movement. Any movement of the object or film, including movement caused by acoustic vibration, has the same fatal results. One way of reducing the chance of movement is by ensuring that the exposure is very quick; using a pulsed laser can fulfill this objective. The attractiveness of using a pulsed laser is based on the variety of materials or objects that can be recorded (e.g., liquid material or an instantaneous scene of a moving object). One of the most interesting points about pulsed holograms is that some reconstructed images present us with completely different views of the real world. For example, the holographic image of liquid material does not appear fluid; it looks like a piece of hard glass that would produce a sharp sound upon tapping. In everyday life, we are unfamiliar with such an instantaneous scene. On the other hand, soft-textured materials such as a feather or wool differ from liquids when observed through holography. Using a pulsed hologram, we can sense the soft touch of the object or material with the help of realistic three-dimensional (3-D) images. The images allow us to realize the sense of touch in a way that resembles touching real objects. I had the opportunity to use a pulsed ruby laser soon after I started to work in the field of holography in 1979. Since then, I have made pulsed holograms of activities, including pouring water, breaking eggs, blowing soap bubbles, and scattering feathers and popcorn. I have also created holographic art with materials and objects, such as silk fiber, fabric, balloons, glass, flowers, and even the human body. Whenever I create art, I like to present the spectator with a new experience in perception. Therefore, I would like to introduce my experimental artwork through those pulsed holograms.
Artificial limb representation in amputees.
van den Heiligenberg, Fiona M Z; Orlov, Tanya; Macdonald, Scott N; Duff, Eugene P; Henderson Slater, David; Beckmann, Christian F; Johansen-Berg, Heidi; Culham, Jody C; Makin, Tamar R
2018-05-01
The human brain contains multiple hand-selective areas, in both the sensorimotor and visual systems. Could our brain repurpose neural resources, originally developed for supporting hand function, to represent and control artificial limbs? We studied individuals with congenital or acquired hand-loss (hereafter one-handers) using functional MRI. We show that the more one-handers use an artificial limb (prosthesis) in their everyday life, the stronger visual hand-selective areas in the lateral occipitotemporal cortex respond to prosthesis images. This was found even when one-handers were presented with images of active prostheses that share the functionality of the hand but not necessarily its visual features (e.g. a 'hook' prosthesis). Further, we show that daily prosthesis usage determines large-scale inter-network communication across hand-selective areas. This was demonstrated by increased resting state functional connectivity between visual and sensorimotor hand-selective areas, proportional to the intensiveness of everyday prosthesis usage. Further analysis revealed a 3-fold coupling between prosthesis activity, visuomotor connectivity and usage, suggesting a possible role for the motor system in shaping use-dependent representation in visual hand-selective areas, and/or vice versa. Moreover, able-bodied control participants who routinely observe prosthesis usage (albeit less intensively than the prosthesis users) showed significantly weaker associations between degree of prosthesis observation and visual cortex activity or connectivity. Together, our findings suggest that altered daily motor behaviour facilitates prosthesis-related visual processing and shapes communication across hand-selective areas. This neurophysiological substrate for prosthesis embodiment may inspire rehabilitation approaches to improve usage of existing substitutionary devices and aid implementation of future assistive and augmentative technologies.
Charlet, J; Darmoni, S J
2015-08-13
To summarize the best papers in the field of Knowledge Representation and Management (KRM). A comprehensive review of the medical informatics literature was performed to select some of the most interesting papers on KRM published in 2014. Four articles were selected: two focused on annotation and information retrieval using an ontology. The other two focused mainly on ontologies, one dealing with the use of a temporal ontology to analyze the content of narrative documents, the other describing a methodology for building multilingual ontologies. Semantic models have begun to show their efficiency, coupled with annotation tools.
Understanding as Integration of Heterogeneous Representations
Martínez, Sergio F.
2014-03-01
The search for understanding is a major aim of science. Traditionally, understanding has been undervalued in the philosophy of science because of its psychological underpinnings; nowadays, however, it is widely recognized that epistemology cannot be divorced from psychology as sharply as traditional epistemology required. This eliminates the main obstacle to giving scientific understanding due attention in the philosophy of science. My aim in this paper is to describe an account of scientific understanding as an emergent feature of our mastery of different (causal) explanatory frameworks, which takes place through the mastering of scientific practices. Different practices lead to different kinds of representations. Such representations are often heterogeneous. The integration of such representations constitutes understanding.