WorldWideScience

Sample records for Lloyd-Max Quantiser Shannon Limit Source Coding Uniform Quantiser

  1. Uniform and Non-Uniform Optimum Scalar Quantizers Performances: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Fendy Santoso

    2008-05-01

    Full Text Available The aim of this research is to investigate source coding, the representation of information source output by finite R bits/symbol. The performance of optimum quantisers subject to an entropy constraint has been studied. The definitive work in this area is best summarised by Shannon’s source coding theorem, that is, a source with entropy H can be encoded with arbitrarily small error probability at any rate R (bits/source output) as long as R > H. Conversely, if R < H, the error probability will be driven away from zero, independently of the complexity of the encoder and the decoder employed. In this context, the main objective of engineers, however, is to design the optimum code. Unfortunately, the rate-distortion theorem does not provide the recipe for such a design. The theorem does, however, provide the theoretical limit so that we know how close we are to the optimum. A full understanding of the theorem also helps in setting the direction to achieve such an optimum. In this research, we have investigated the performances of two practical scalar quantisers, i.e., a Lloyd-Max quantiser and a uniformly defined one, and also a well-known entropy coding scheme, i.e., Huffman coding, against the theoretically attainable optimum performance given by Shannon’s limit. It has been shown that our uniformly defined quantiser could demonstrate superior performance. The performance improvements, in fact, are more noticeable at higher bit rates.
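
    For orientation, a minimal numerical sketch (not the authors' implementation; the unit-variance Gaussian source, rate R = 3 and step-size grid are assumptions) that pits a Lloyd-Max quantiser against a step-size-optimised uniform quantiser, with Shannon's rate-distortion bound D(R) = 2^(-2R) as the fixed-rate reference. Note the paper's comparison is entropy-constrained (uniform quantiser plus Huffman coding), which this fixed-rate sketch does not reproduce:

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.standard_normal(200_000)   # unit-variance Gaussian source (assumed)
      R = 3                              # rate in bits/symbol
      L = 2 ** R                         # number of reproduction levels

      def distortion(samples, levels):
          # MSE of a scalar quantiser whose decision boundaries are level midpoints.
          edges = 0.5 * (levels[1:] + levels[:-1])
          return np.mean((samples - levels[np.digitize(samples, edges)]) ** 2)

      # Lloyd-Max iteration: alternate midpoint boundaries and centroid updates.
      levels = np.linspace(-3.0, 3.0, L)
      for _ in range(200):
          edges = 0.5 * (levels[1:] + levels[:-1])
          idx = np.digitize(x, edges)
          levels = np.array([x[idx == k].mean() for k in range(L)])

      # Uniform quantiser: equally spaced levels, step size found by grid search.
      def uniform_levels(step):
          return (np.arange(L) - (L - 1) / 2.0) * step

      mse_by_step = {d: distortion(x, uniform_levels(d))
                     for d in np.linspace(0.1, 1.5, 300)}
      best = min(mse_by_step, key=mse_by_step.get)

      print(f"Lloyd-Max SQNR : {10 * np.log10(1 / distortion(x, levels)):.2f} dB")
      print(f"uniform   SQNR : {10 * np.log10(1 / mse_by_step[best]):.2f} dB")
      print(f"Shannon bound  : {6.02 * R:.2f} dB")  # from D(R) = 2**(-2R)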

  2. Hybrid 3D Fractal Coding with Neighbourhood Vector Quantisation

    Directory of Open Access Journals (Sweden)

    Zhen Yao

    2004-12-01

    Full Text Available A hybrid 3D compression scheme which combines fractal coding with neighbourhood vector quantisation for video and volume data is reported. While fractal coding exploits the redundancy present in different scales, neighbourhood vector quantisation, as a generalisation of translational motion compensation, is a useful method for removing both intra- and inter-frame coherences. The hybrid coder outperforms most of the fractal coders published to date while the algorithm complexity is kept relatively low.

  3. Stochastic quantisation: theme and variation

    International Nuclear Information System (INIS)

    Klauder, J.R.; Kyoto Univ.

    1987-01-01

    The paper on stochastic quantisation is a contribution to the book commemorating the sixtieth birthday of E.S. Fradkin. Stochastic quantisation reformulates Euclidean quantum field theory in the language of Langevin equations. The generalised free field is discussed from the viewpoint of stochastic quantisation. An artificial family of highly singular model theories wherein the space-time derivatives are dropped altogether is also examined. Finally a modified form of stochastic quantisation is considered. (U.K.)

  4. Are the gravitational waves quantised?

    International Nuclear Information System (INIS)

    Lovas, Istvan

    1997-01-01

    If the gravitational waves are classical objects then the value of their correlation function is 1. If they are quantised, then there exist two possibilities: the gravitational waves are either completely coherent, in which case their correlation function is again 1, or they are only partially coherent, in which case their correlation function is expected to deviate from 1. Unfortunately such a deviation is not a sufficient proof for the quantised character of the gravitational waves. If the gravitational waves are quantised and generated by the change of the background metric then they can be in a squeezed state. In a squeezed state there is a chance for a correlation between the phase of the wave and the quantum fluctuations. The observation of such a correlation would be a genuine proof of the quantised character of the gravitational waves.

  5. BRST Quantisation of Histories Electrodynamics

    OpenAIRE

    Noltingk, D.

    2001-01-01

    This paper is a continuation of earlier work where a classical history theory of pure electrodynamics was developed in which the history fields have five components. The extra component is associated with an extra constraint, thus enlarging the gauge group of histories electrodynamics. In this paper we quantise the classical theory developed previously by two methods. Firstly we quantise the reduced classical history space, to obtain a reduced quantum history theory. Secondly we qu...

  6. Understanding the Quantum Computational Speed-up via De-quantisation

    Directory of Open Access Journals (Sweden)

    Cristian S. Calude

    2010-06-01

    Full Text Available While it seems possible that quantum computers may allow for algorithms offering a computational speed-up over classical algorithms for some problems, the issue is poorly understood. We explore this computational speed-up by investigating the ability to de-quantise quantum algorithms into classical simulations of the algorithms which are as efficient in both time and space as the original quantum algorithms. The process of de-quantisation helps formulate conditions to determine if a quantum algorithm provides a real speed-up over classical algorithms. These conditions can be used to develop new quantum algorithms more effectively (by avoiding features that could allow the algorithm to be efficiently classically simulated), as well as to create new classical algorithms (by using features which have proved valuable for quantum algorithms). Results on many different methods of de-quantisation are presented, as well as a general formal definition of de-quantisation. De-quantisations employing higher-dimensional classical bits, as well as those using matrix simulations, put emphasis on entanglement in quantum algorithms; a key result is that any algorithm in which the entanglement is bounded is de-quantisable. These methods are contrasted with the stabiliser formalism de-quantisations due to the Gottesman-Knill Theorem, as well as those which take advantage of the topology of the circuit for a quantum algorithm. The benefits of the different methods are contrasted, and the importance of a range of techniques is emphasised. We further discuss some features of quantum algorithms which current de-quantisation methods do not cover.

  7. The quantisation and measurement of momentum observables

    International Nuclear Information System (INIS)

    Wan, K.K.; McFarlane, K.

    1980-01-01

    Mackey's scheme for the quantisation of classical momenta generating complete vector fields (complete momenta) is introduced, the differential operators corresponding to these momenta are introduced and discussed, and an isomorphism is shown to exist between the subclass of first-order self-adjoint differential operators, whose symmetric restrictions are essentially self-adjoint, and the complete classical momenta. Difficulties in the quantisation of incomplete momenta are discussed, and a critique given. Finally, in an attempt to relate the concept of completeness to measurability, concepts of classical and quantum global measurability are introduced and shown to require completeness. These results afford strong physical insight into the nature of complete momenta, and lead us to suggest a quantisability condition based upon global measurability. (author)

  8. Testing quantised inertia on emdrives with dielectrics

    Science.gov (United States)

    McCulloch, M. E.

    2017-05-01

    Truncated-cone-shaped cavities with microwaves resonating within them (emdrives) move slightly towards their narrow ends, in contradiction to standard physics. This effect has been predicted by a model called quantised inertia (MiHsC) which assumes that the inertia of the microwaves is caused by Unruh radiation, more of which is allowed at the wide end. Therefore, photons going towards the wide end gain inertia, and to conserve momentum the cavity must move towards its narrow end, as observed. A previous analysis with quantised inertia predicted a controversial photon acceleration, which is shown here to be unnecessary. The previous analysis also mispredicted the thrust in those emdrives with dielectrics. It is shown here that having a dielectric at one end of the cavity is equivalent to widening the cavity at that end, and when dielectrics are considered, then quantised inertia predicts these results as well as the others, except for Shawyer's first test where the thrust is predicted to be the right size but in the wrong direction. As a further test, quantised inertia predicts that an emdrive's thrust can be enhanced by using a dielectric at the wide end.

  9. Projective flatness in the quantisation of bosons and fermions

    Science.gov (United States)

    Wu, Siye

    2015-07-01

    We compare the quantisation of linear systems of bosons and fermions. We recall the appearance of a projectively flat connection and results on parallel transport in the quantisation of bosons. We then discuss pre-quantisation and quantisation of fermions using the calculus of fermionic variables. We define a natural connection on the bundle of Hilbert spaces and show that it is projectively flat. This identifies, up to a phase, equivalent spinor representations constructed by various polarisations. We introduce the concept of metaplectic correction for fermions and show that the bundle of corrected Hilbert spaces is naturally flat. We then show that the parallel transport in the bundle of Hilbert spaces along a geodesic is a rescaled projection provided that the geodesic lies within the complement of a cut locus. Finally, we study the bundle of Hilbert spaces when there is a symmetry.

  10. Are the gravitational waves quantised?

    International Nuclear Information System (INIS)

    Lovas, I.

    1998-01-01

    The question whether gravitational waves are quantised or not can be investigated with the help of correlation measurements. If the gravitational waves are classical objects then the value of their correlation function is 1. However, if they are quantised, then there exist two possibilities: the gravitational waves are either completely coherent, in which case the correlation function is again 1, or they are partially coherent, in which case the correlation function is expected to deviate from 1. If the gravitational waves are generated by the change of the background metric then they can be in a squeezed state. In a squeezed state there is a chance for a correlation between the phase of the wave and the quantum fluctuations. (author)

  11. Alternative to dead reckoning for model state quantisation when migrating to a quantised discrete

    CSIR Research Space (South Africa)

    Duvenhage, A

    2008-06-01

    Full Text Available Some progress has recently been made on migrating an existing distributed parallel discrete time simulator to a quantised discrete event architecture. The migration is done to increase the scale of the real-time simulations supported...

  12. Self-organised fractional quantisation in a hole quantum wire

    Science.gov (United States)

    Gul, Y.; Holmes, S. N.; Myronov, M.; Kumar, S.; Pepper, M.

    2018-03-01

    We have investigated hole transport in quantum wires formed by electrostatic confinement in strained germanium two-dimensional layers. The ballistic conductance characteristics show the regular staircase of quantum levels with plateaux at n(2e²/h), where n is an integer, e is the fundamental unit of charge and h is Planck’s constant. However, as the carrier concentration is reduced, the quantised levels show a behaviour that is indicative of the formation of a zig-zag structure, and new quantised plateaux appear at low temperatures. In units of 2e²/h the new quantised levels correspond to values of n = 1/4, reducing to 1/8 in the presence of a strong parallel magnetic field which lifts the spin degeneracy but does not quantise the wavefunction. A further plateau is observed corresponding to n = 1/32 which does not change in the presence of a parallel magnetic field. These values indicate that the system is behaving as if charge were fractionalised with values e/2 and e/4; possible mechanisms are discussed.

  13. Global stabilisation of large-scale hydraulic networks with quantised and positive proportional controls

    DEFF Research Database (Denmark)

    Jensen, Tom Nørgaard; Wisniewski, Rafal

    2013-01-01

    a set of decentralised, logarithmic quantised and constrained control actions with properly designed quantisation parameters. That is, an attractor set with a compact basin of attraction exists. Subsequently, the basin can be increased by increasing the control gains. In our work, this result...... is extended by showing that an attractor set with a global basin of attraction exists for arbitrary values of positive control gains, given that the upper level of the quantiser is properly designed. Furthermore, the proof is given for general monotone quantisation maps. Since the basin of attraction...

  14. Gauge symmetries, topology, and quantisation

    International Nuclear Information System (INIS)

    Balachandran, A.P.

    1994-01-01

    The following two loosely connected sets of topics are reviewed in these lecture notes: (1) Gauge invariance, its treatment in field theories and its implications for internal symmetries and edge states such as those in the quantum Hall effect. (2) Quantisation on multiply connected spaces and a topological proof of the spin-statistics theorem which avoids quantum field theory and relativity. Under (1), after explaining the meaning of gauge invariance and the theory of constraints, we discuss boundary conditions on gauge transformations and the definition of internal symmetries in gauge field theories. We then show how the edge states in the quantum Hall effect can be derived from the Chern-Simons action using the preceding ideas. Under (2), after explaining the significance of fibre bundles for quantum physics, we review quantisation on multiply connected spaces in detail, explaining also mathematical ideas such as those of the universal covering space and the fundamental group. These ideas are then used to prove the aforementioned topological spin-statistics theorem.

  15. Do the SuperKamiokande atmospheric neutrino results explain electric charge quantisation?

    International Nuclear Information System (INIS)

    Foot, R.; Volkas, R.R.

    1998-08-01

    It is shown that the SuperKamiokande atmospheric neutrino results explain electric charge quantisation, provided that the oscillation mode is ν_μ → ν_τ and that the neutrino mass is of the Majorana type. It is emphasised that neutrino oscillation and neutrinoless double beta decay experiments provide important information regarding the seemingly unrelated issue of electric charge quantisation.

  16. Quantisation deforms w∞ to W∞ gravity

    NARCIS (Netherlands)

    Bergshoeff, E.; Howe, P.S.; Pope, C.N.; Sezgin, E.; Shen, X.; Stelle, K.S.

    1991-01-01

    Quantising a classical theory of w∞ gravity requires the introduction of an infinite number of counterterms in order to remove matter-dependent anomalies. We show that these counterterms correspond precisely to a renormalisation of the classical w∞ currents to quantum W∞ currents.

  17. Quantisation deforms w∞ to W∞ gravity

    International Nuclear Information System (INIS)

    Bergshoeff, E.; Howe, P.S.; State Univ. of New York, Stony Brook, NY; Pope, C.N.; Sezgin, E.; Shen, X.; Stelle, K.S.

    1991-01-01

    Quantising a classical theory of w∞ gravity requires the introduction of an infinite number of counterterms in order to remove matter-dependent anomalies. We show that these counterterms correspond precisely to a renormalisation of the classical w∞ currents to quantum W∞ currents. (orig.)

  18. Twistor quantisation of open strings in three dimensions

    International Nuclear Information System (INIS)

    Shaw, W.T.

    1987-01-01

    The paper treats the first quantisation of loops in real four-dimensional twistor space. Such loops correspond to open strings in three-dimensional spacetime. The geometry and reality structures pertaining to twistors in three dimensions are reviewed and the twistor description of null geodesics is presented as a prototype for the discussion of null curves. The classical twistor structure of null curves is then described. The symplectic structure is exhibited and used to investigate the constraint algebra. Expressions for the momentum operators are found. The symplectic structure defines natural canonical variables for covariant twistor quantisation and the consequences of carrying this out are described. A twistor representation of the Virasoro algebra with central charge 2 is found and some solutions of the quantum constraints are exhibited. (author)

  19. Stochastic Automata for Outdoor Semantic Mapping using Optimised Signal Quantisation

    DEFF Research Database (Denmark)

    Caponetti, Fabio; Blas, Morten Rufus; Blanke, Mogens

    2011-01-01

    Autonomous robots require many types of information to obtain intelligent and safe behaviours. For outdoor operations, semantic mapping is essential and this paper proposes a stochastic automaton to localise the robot within the semantic map. For correct modelling and classification under ... uncertainty, this paper suggests quantising robotic perceptual features according to a probabilistic description, and then optimising the quantisation. The proposed method is compared with other state-of-the-art techniques that can assess the confidence of their classification. Data recorded on an autonomous...

  20. Exact quantisation of the relativistic Hopfield model

    Energy Technology Data Exchange (ETDEWEB)

    Belgiorno, F., E-mail: francesco.belgiorno@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo 32, IT-20133 Milano (Italy); INdAM-GNFM (Italy); Cacciatori, S.L., E-mail: sergio.cacciatori@uninsubria.it [Department of Science and High Technology, Università dell’Insubria, Via Valleggio 11, IT-22100 Como (Italy); INFN sezione di Milano, via Celoria 16, IT-20133 Milano (Italy); Dalla Piazza, F., E-mail: f.dallapiazza@gmail.com [Università “La Sapienza”, Dipartimento di Matematica, Piazzale A. Moro 2, I-00185, Roma (Italy); Doronzo, M., E-mail: m.doronzo@uninsubria.it [Department of Science and High Technology, Università dell’Insubria, Via Valleggio 11, IT-22100 Como (Italy)

    2016-11-15

    We investigate the quantisation in the Heisenberg representation of a relativistically covariant version of the Hopfield model for dielectric media, which entails the interaction of the quantum electromagnetic field with the matter dipole fields, represented by a mesoscopic polarisation field. A full quantisation of the model is provided in a covariant gauge, with the aim of maintaining explicit relativistic covariance. Breaking of the Lorentz invariance due to the intrinsic presence in the model of a preferred reference frame is also taken into account. Relativistic covariance forces us to deal with the unphysical (scalar and longitudinal) components of the fields; furthermore, it introduces, in a trickier form, the well-known dipole ghost of standard QED in a covariant gauge. In order to correctly dispose of this contribution, we implement a generalised Lautrup trick. Furthermore, causality and the relation of the model to the Wightman axioms are also discussed.

  1. The Dirac quantisation condition for fluxes on four-manifolds

    International Nuclear Information System (INIS)

    Alvarez, M.; Olive, D.I.

    2000-01-01

    A systematic treatment is given of the Dirac quantisation condition for electromagnetic fluxes through two-cycles on a four-manifold space-time which can be very complicated topologically, provided only that it is connected, compact, oriented and smooth. This is sufficient for the quantised Maxwell theory on it to satisfy electromagnetic duality properties. The results depend upon whether the complex wave function needed for the argument is scalar or spinorial in nature. An essential step is the derivation of a 'quantum Stokes theorem' for the integral of the gauge potential around a closed loop on the manifold. This can only be done for an exponentiated version of the line integral (the 'Wilson loop') and the result again depends on the nature of the complex wave functions, through the appearance of what is known as a Stiefel-Whitney cohomology class in the spinor case. A nice picture emerges providing a physical interpretation, in terms of quantised fluxes and wave functions, of mathematical concepts such as spin structures, spin-C structures, the Stiefel-Whitney class and Wu's formula. Relations appear between these, electromagnetic duality and the Atiyah-Singer index theorem. Possible generalisations to higher dimensions of space-time in the presence of branes are mentioned. (orig.)

  2. Factors Influencing Energy Quantisation | Adelabu | Global Journal ...

    African Journals Online (AJOL)

    Department of Physics, College of Science & Agriculture, University of Abuja, P. M. B. 117, Abuja FCT, Nigeria. Investigations of energy quantisation in a range of multiple quantum well (MQW) systems using effective mass band structure calculations including non-parabolicity in both the well and barrier layers are reported.

  3. Quantisation of super Teichmueller theory

    International Nuclear Information System (INIS)

    Aghaei, Nezhla; Hamburg Univ.; Pawelkiewicz, Michal; Teschner, Joerg

    2015-12-01

    We construct a quantisation of the Teichmueller spaces of super Riemann surfaces using coordinates associated to ideal triangulations of super Riemann surfaces. A new feature is the non-trivial dependence on the choice of a spin structure which can be encoded combinatorially in a certain refinement of the ideal triangulation. By constructing a projective unitary representation of the groupoid of changes of refined ideal triangulations we demonstrate that the dependence of the resulting quantum theory on the choice of a triangulation is inessential.

  4. On the quantisation of one-dimensional bags

    International Nuclear Information System (INIS)

    Fairley, G.T.; Squires, E.J.

    1976-01-01

    The quantisation of one-dimensional MIT bags by expanding the fields as a sum of classical modes and truncating the series after the first term is discussed. The lowest states of a bag in a world containing two scalar quark fields are obtained. Problems associated with the zero-point oscillations of the field are discussed. (Auth.)

  5. Refined algebraic quantisation in a system with nonconstant gauge invariant structure functions

    International Nuclear Information System (INIS)

    Martínez-Pascual, Eric

    2013-01-01

    In a previous work [J. Louko and E. Martínez-Pascual, “Constraint rescaling in refined algebraic quantisation: Momentum constraint,” J. Math. Phys. 52, 123504 (2011)], refined algebraic quantisation (RAQ) within a family of classically equivalent constrained Hamiltonian systems that are related to each other by rescaling one momentum-type constraint was investigated. In the present work, the first steps to generalise this analysis to cases where more constraints occur are developed. The system under consideration contains two momentum-type constraints, originally abelian, where rescalings of these constraints by a non-vanishing function of the coordinates are allowed. These rescalings induce structure functions at the level of the gauge algebra. Providing a specific parametrised family of real-valued scaling functions, the implementation of the corresponding rescaled quantum momentum-type constraints is performed using RAQ when the gauge algebra: (i) remains abelian and (ii) turns into an algebra of a nonunimodular group with nonconstant gauge invariant structure functions. Case (ii) becomes the first example known to the author where an open algebra is handled in refined algebraic quantisation. Challenging issues that arise in the presence of non-gauge invariant structure functions are also addressed.

  6. 3D Model Retrieval Based on Vector Quantisation Index Histograms

    International Nuclear Information System (INIS)

    Lu, Z M; Luo, H; Pan, J S

    2006-01-01

    This paper proposes a novel technique for retrieving 3D mesh models using vector quantisation index histograms. Firstly, points are sampled uniformly on the mesh surface. Secondly, for each point five features representing global and local properties are extracted, yielding the feature vectors of the points. Thirdly, we select several models from each class and employ their feature vectors as a training set. After training with the LBG algorithm, a public codebook is constructed. Next, codeword index histograms of the query model and those in the database are computed. The last step is to compute the distance between the histogram of the query and those of the models in the database. Experimental results show the effectiveness of our method.
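
    As a rough illustration of the pipeline (feature extraction aside), the following sketch trains an LBG codebook and compares codeword index histograms. The feature dimension, codebook size and L1 histogram distance are assumptions, and the classical splitting initialisation of LBG is replaced by a random one for brevity:

      import numpy as np

      def lbg_codebook(train, n_codewords, iters=50, seed=0):
          # Generalised Lloyd (LBG) iteration: nearest-codeword assignment,
          # then centroid update, repeated a fixed number of times.
          rng = np.random.default_rng(seed)
          code = train[rng.choice(len(train), n_codewords, replace=False)].copy()
          for _ in range(iters):
              d = np.linalg.norm(train[:, None, :] - code[None, :, :], axis=2)
              idx = d.argmin(axis=1)
              for k in range(n_codewords):
                  if np.any(idx == k):
                      code[k] = train[idx == k].mean(axis=0)
          return code

      def index_histogram(features, code):
          # Map each feature vector to its nearest codeword index, then normalise.
          d = np.linalg.norm(features[:, None, :] - code[None, :, :], axis=2)
          hist = np.bincount(d.argmin(axis=1), minlength=len(code)).astype(float)
          return hist / hist.sum()

      rng = np.random.default_rng(1)
      train = rng.normal(size=(2000, 5))          # stand-in for 5-dim point features
      code = lbg_codebook(train, 16)
      h_query = index_histogram(rng.normal(size=(500, 5)), code)
      h_model = index_histogram(rng.normal(0.2, 1.0, size=(500, 5)), code)
      print("L1 histogram distance:", np.abs(h_query - h_model).sum())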

  7. Formal Series of Generalised Functions and Their Application to Deformation Quantisation

    OpenAIRE

    Tosiek, Jaromir

    2016-01-01

    Foundations of the formal series *-calculus in deformation quantisation are discussed. Several classes of continuous linear functionals over algebras applied in classical and quantum physics are introduced. The notion of positivity in the formal series calculus is proposed. Problems with defining quantum states over the set of formal series are analysed.

  8. Sp(2) covariant quantisation of general gauge theories

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez-Bello, J L

    1994-11-01

    The Sp(2) covariant quantisation of gauge theories is studied. The geometrical interpretation of gauge theories in terms of quasi principal fibre bundles Q(M_s, G_s) is reviewed. The Sp(2) algebra of ordinary Yang-Mills theory is then described. A consistent formulation of covariant Lagrangian quantisation for general gauge theories based on Sp(2) BRST symmetry is established. The original N = 1, ten-dimensional superparticle is considered as an example of infinitely reducible gauge algebras, and its Sp(2) BRST invariant action is given explicitly. (author). 18 refs.

  9. Sp(2) covariant quantisation of general gauge theories

    International Nuclear Information System (INIS)

    Vazquez-Bello, J.L.

    1994-11-01

    The Sp(2) covariant quantisation of gauge theories is studied. The geometrical interpretation of gauge theories in terms of quasi principal fibre bundles Q(M_s, G_s) is reviewed. The Sp(2) algebra of ordinary Yang-Mills theory is then described. A consistent formulation of covariant Lagrangian quantisation for general gauge theories based on Sp(2) BRST symmetry is established. The original N = 1, ten-dimensional superparticle is considered as an example of infinitely reducible gauge algebras, and its Sp(2) BRST invariant action is given explicitly. (author). 18 refs

  10. Time-space noncommutativity: quantised evolutions

    International Nuclear Information System (INIS)

    Balachandran, Aiyalam P.; Govindarajan, Thupil R.; Teotonio-Sobrinho, Paulo; Martins, Andrey Gomes

    2004-01-01

    In previous work, we developed quantum physics on the Moyal plane with time-space noncommutativity, basing ourselves on the work of Doplicher et al. Here we extend it to certain noncommutative versions of the cylinder, R³ and R × S³. In all these models, only discrete time translations are possible, a result known before in the first two cases. One striking consequence of quantised time translations is that even though a time-independent Hamiltonian is an observable, in scattering processes it is conserved only modulo 2π/θ, where θ is the noncommutative parameter. (In contrast, on a one-dimensional periodic lattice of lattice spacing a and length L = Na, only momentum mod 2π/L is observable (and can be conserved).) Suggestions for further study of this effect are made. Scattering theory is formulated and an approach to quantum field theory is outlined. (author)

  11. On the relation between reduced quantisation and quantum reduction for spherical symmetry in loop quantum gravity

    International Nuclear Information System (INIS)

    Bodendorfer, N; Zipfel, A

    2016-01-01

    Building on a recent proposal for a quantum reduction to spherical symmetry from full loop quantum gravity, we investigate the relation between a quantisation of spherically symmetric general relativity and a reduction at the quantum level. To this end, we generalise the previously proposed quantum reduction by dropping the gauge fixing condition on the radial diffeomorphisms, thus allowing us to make direct contact with previous work on reduced quantisation. A dictionary between spherically symmetric variables and observables with respect to the reduction constraints in the full theory is discussed, as well as an embedding of reduced quantum states to a subsector of the quantum symmetry reduced full theory states. On this full theory subsector, the quantum algebra of the mentioned observables is computed and shown to qualitatively reproduce the quantum algebra of the reduced variables in the large quantum number limit for a specific choice of regularisation. Insufficiencies in recovering the reduced algebra quantitatively from the full theory are attributed to the oversimplified full theory quantum states we use. (paper)

  12. Quantisation of the holographic Ricci dark energy model

    Energy Technology Data Exchange (ETDEWEB)

    Albarran, Imanol; Bouhmadi-López, Mariam, E-mail: imanol@ubi.pt, E-mail: mbl@ubi.pt [Departamento de Física, Universidade da Beira Interior, 6200 Covilhã (Portugal)

    2015-08-01

    While general relativity is an extremely robust theory to describe the gravitational interaction in our Universe, it is expected to fail close to singularities like the cosmological ones. On the other hand, it is well known that some dark energy models might induce future singularities; this can be the case, for example, within the setup of the Holographic Ricci Dark Energy (HRDE) model. In this work, we perform a cosmological quantisation of the HRDE model and obtain under which conditions a cosmic doomsday can be avoided within the quantum realm. We show as well that this quantum model avoids not only future singularities but also the past Big Bang.

  13. Φ-Ψ model for electrodynamics in dielectric media: exact quantisation in the Heisenberg representation

    Energy Technology Data Exchange (ETDEWEB)

    Belgiorno, Francesco [Politecnico di Milano, Dipartimento di Matematica, Milano (Italy); INdAM-GNFM, Milano (Italy); Cacciatori, Sergio L. [Universita dell'Insubria, Department of Science and High Technology, Como (Italy); INFN sezione di Milano, Milano (Italy); Dalla Piazza, Francesco [Universita 'La Sapienza', Dipartimento di Matematica, Roma (Italy); Doronzo, Michele [Universita dell'Insubria, Department of Science and High Technology, Como (Italy)

    2016-06-15

    We investigate the quantisation in the Heisenberg representation of a model which represents a simplification of the Hopfield model for dielectric media, where the electromagnetic field is replaced by a scalar field φ and the role of the polarisation field is played by a further scalar field ψ. The model, which is quadratic in the fields, is still characterised by a non-trivial physical content, as the physical particles correspond to the polaritons of the standard Hopfield model of condensed matter physics. Causality is also taken into account and a discussion of the standard interaction representation is also considered. (orig.)

  14. Quantum gravity in three dimensions, Witten spinors and the quantisation of length

    Science.gov (United States)

    Wieland, Wolfgang

    2018-05-01

    In this paper, I investigate the quantisation of length in Euclidean quantum gravity in three dimensions. The starting point is the classical Hamiltonian formalism in a cylinder of finite radius. At this finite boundary, a counter term is introduced that couples the gravitational field in the interior to a two-dimensional conformal field theory for an SU(2) boundary spinor, whose norm determines the conformal factor between the fiducial boundary metric and the physical metric in the bulk. The equations of motion for this boundary spinor are derived from the boundary action and turn out to be the two-dimensional analogue of the Witten equations appearing in Witten's proof of the positive mass theorem. The paper concludes with some comments on the resulting quantum theory. It is shown, in particular, that the length of a one-dimensional cross section of the boundary turns into a number operator on the Fock space of the theory. The spectrum of this operator is discrete and matches the results from loop quantum gravity in the spin network representation.

  15. Electric charge quantisation from gauge invariance of a Lagrangian: a catalogue of baryon number violating scalar interactions

    International Nuclear Information System (INIS)

    Bowes, J.P.; Foot, R.; Volkas, R.R.

    1997-01-01

    In gauge theories like the standard model, the electric charges of the fermions can be heavily constrained from the classical structure of the theory and from the cancellation of anomalies. There is however mounting evidence suggesting that these anomaly constraints are not as well motivated as the classical constraints. In light of this, possible modifications of the minimal standard model are discussed which will give a complete electric charge quantisation from classical constraints alone. Because these modifications to the Standard Model involve the consideration of baryon number violating scalar interactions, a complete catalogue of the simplest ways to modify the Standard Model is presented so as to introduce explicit baryon number violation. 9 refs., 7 figs

  16. The principle of the indistinguishability of identical particles and the Lie algebraic approach to the field quantisation

    International Nuclear Information System (INIS)

    Govorkov, A.B.

    1980-01-01

    The density matrix, rather than the wavefunction, describing a system of a fixed number of non-relativistic identical particles is subject to second quantisation. Here the bilinear operators which move a particle from a given state to another appear and satisfy the Lie algebraic relations of the unitary group SU(ρ) when the dimension ρ → ∞. Bringing systems with a variable number of particles into consideration implies the extension of this algebra into one of the simple Lie algebras of the classical (orthogonal, symplectic or unitary) groups in even-dimensional spaces. These Lie algebras correspond to the para-Fermi-, para-Bose- and para-uniquantisation of fields, respectively. (author)

  17. Electric charge quantisation from gauge invariance of a Lagrangian: a catalogue of baryon number violating scalar interactions

    Energy Technology Data Exchange (ETDEWEB)

    Bowes, J.P.; Foot, R.; Volkas, R.R.

    1997-06-01

    In gauge theories like the standard model, the electric charges of the fermions can be heavily constrained from the classical structure of the theory and from the cancellation of anomalies. There is however mounting evidence suggesting that these anomaly constraints are not as well motivated as the classical constraints. In light of this, possible modifications of the minimal standard model are discussed which will give a complete electric charge quantisation from classical constraints alone. Because these modifications to the Standard Model involve the consideration of baryon number violating scalar interactions, a complete catalogue of the simplest ways to modify the Standard Model is presented so as to introduce explicit baryon number violation. 9 refs., 7 figs.

  18. State estimation for networked control systems using fixed data rates

    Science.gov (United States)

    Liu, Qing-Quan; Jin, Fang

    2017-07-01

    This paper investigates state estimation for linear time-invariant systems where sensors and controllers are geographically separated and connected via a bandwidth-limited and errorless communication channel with the fixed data rate. All plant states are quantised, coded and converted together into a codeword in our quantisation and coding scheme. We present necessary and sufficient conditions on the fixed data rate for observability of such systems, and further develop the data-rate theorem. It is shown in our results that there exists a quantisation and coding scheme to ensure observability of the system if the fixed data rate is larger than the lower bound given, which is less conservative than the one in the literature. Furthermore, we also examine the role that the disturbances have on the state estimation problem in the case with data-rate limitations. Illustrative examples are given to demonstrate the effectiveness of the proposed method.
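
    For reference, the data-rate theorem invoked here is usually stated as a lower bound on the channel rate in terms of the unstable open-loop eigenvalues; the paper derives a less conservative bound, so the following is only the classical textbook form:

      R > \sum_{i : |\lambda_i(A)| \geq 1} \log_2 |\lambda_i(A)|

    where the λ_i(A) are the eigenvalues of the system matrix A. Below this rate, no quantisation and coding scheme can keep the state estimation error bounded.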

  19. Canonical quantisation via conditional symmetries of the closed FLRW model coupled to a scalar field

    International Nuclear Information System (INIS)

    Zampeli, Adamantia

    2015-01-01

    We study the classical, quantum and semiclassical solutions of a Robertson-Walker spacetime coupled to a massless scalar field. The Lagrangian of these minisuperspace models is singular and the application of the theory of Noether symmetries is modified to include the conditional symmetries of the corresponding (weakly vanishing) Hamiltonian. These are found to be the simultaneous symmetries of the supermetric and the superpotential. The quantisation is performed adopting the Dirac proposal for constrained systems. The innovation in the approach we use is that the integrals of motion related to the conditional symmetries are promoted to operators together with the Hamiltonian and momentum constraints. These additional conditions imposed on the wave function render the system integrable and it is possible to obtain solutions of the Wheeler-DeWitt equation. Finally, we use the wave function to perform a semiclassical analysis following Bohm and make contact with the classical solution. The analysis starts with a modified Hamilton-Jacobi equation from which the semiclassical momenta are defined. The solutions of the semiclassical equations are then studied and compared to the classical ones in order to understand the nature and behaviour of the classical singularities. (paper)

  20. Dynamic Shannon Coding

    OpenAIRE

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.

  1. Networked control of discrete-time linear systems over lossy digital communication channels

    Science.gov (United States)

    Jin, Fang; Zhao, Guang-Rong; Liu, Qing-Quan

    2013-12-01

    This article addresses networked control problems for linear time-invariant systems. The insertion of the digital communication network inevitably leads to packet dropout, time delay and quantisation error. Due to data rate limitations, quantisation error is not neglected. In particular, the case where the sensors and controllers are geographically separated and connected via noisy, bandwidth-limited digital communication channels is considered. A fundamental limitation on the data rate of the channel for mean-square stabilisation of the closed-loop system is established. Sufficient conditions for mean-square stabilisation are derived. It is shown that there exists a quantisation, coding and control scheme to stabilise the unstable system over packet dropout communication channels if the data rate is larger than the lower bound proposed in our result. An illustrative example is given to demonstrate the effectiveness of the proposed conditions.

  2. Brief review of Planck law

    International Nuclear Information System (INIS)

    Zamora Carranza, M.

    2001-01-01

    This paper presents a brief review of the scientific events which led to the determination of the law of radiation and the quantisation of energy by Max Planck, from the separation of sunlight by Newton to the reasons which led Planck to quantise the energy of an oscillator. I discuss the theoretical and experimental difficulties which scientists overcame to derive the law of heat radiation. (Author) 6 refs.

  3. A new method for reducing DNL in nuclear ADCs using an interpolation technique

    International Nuclear Information System (INIS)

    Vaidya, P.P.; Gopalakrishnan, K.R.; Pethe, V.A.; Anjaneyulu, T.

    1986-01-01

    The paper describes a new method for reducing the DNL associated with nuclear ADCs. The method, named the 'interpolation technique', is utilised to derive the quantisation steps corresponding to the last n bits of the digital code by dividing quantisation steps due to the higher significant bits of the DAC, using a chain of resistors. Using comparators, these quantisation steps are compared with the analog voltage to be digitised, which is applied as a voltage shift at both ends of this chain. The output states of the comparators define the n-bit code. The errors due to offset voltages and bias currents of the comparators are statistically neutralised by changing the polarity of the quantisation steps as well as the polarity of the analog voltage (corresponding to the last n bits) for alternate A/D conversions. The effect of averaging on the channel profile can be minimised. A 12-bit ADC was constructed using this technique which gives a DNL of less than ±1% over most of the channels for a conversion time of nearly 4.5 μs. Gatti's sliding scale technique can be implemented for further reduction of DNL. The interpolation technique has a promising potential of improving the resolution of existing 12-bit ADCs to 16 bits, without degrading the percentage DNL significantly. (orig.)
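
    The sliding-scale idea mentioned at the end is easy to see in a toy Monte Carlo: a random integer offset is added in analog before conversion and subtracted digitally afterwards, so each input is averaged over many physical channels. Everything here (channel count, 5% width mismatch, offset range) is an assumption for illustration, not the circuit of the paper:

      import numpy as np

      rng = np.random.default_rng(0)
      N_CH, SLIDE = 256, 64              # ADC channels, sliding-scale offset range

      # Hypothetical converter: channel widths with random mismatch -> nonzero DNL.
      widths = 1.0 + 0.05 * rng.standard_normal(N_CH)
      edges = np.concatenate(([0.0], np.cumsum(widths)))

      def worst_dnl(counts):
          # Worst-case deviation of channel contents from the mean, flat input.
          w = counts / counts.mean()
          return np.abs(w - 1.0).max()

      v = rng.uniform(0.0, edges[N_CH - SLIDE], 2_000_000)   # flat test spectrum

      # Plain conversion: the histogram mirrors the channel-width mismatch.
      plain = np.bincount(np.digitize(v, edges) - 1, minlength=N_CH)[:N_CH - SLIDE]

      # Sliding scale: random offset added before, subtracted after conversion.
      off = rng.integers(0, SLIDE, size=v.size)
      ch = np.digitize(v + off, edges) - 1 - off
      slid = np.bincount(np.clip(ch, 0, N_CH - 1), minlength=N_CH)[:N_CH - SLIDE]

      print(f"worst-case DNL, plain conversion: {worst_dnl(plain):.3f}")
      print(f"worst-case DNL, sliding scale   : {worst_dnl(slid):.3f}")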

  4. Regeneration limit of classical Shannon capacity

    Science.gov (United States)

    Sorokina, M. A.; Turitsyn, S. K.

    2014-05-01

    Since Shannon derived the seminal formula for the capacity of the additive linear white Gaussian noise channel, it has commonly been interpreted as the ultimate limit of error-free information transmission rate. However, the capacity above the corresponding linear channel limit can be achieved when noise is suppressed using nonlinear elements; that is, the regenerative function not available in linear systems. Regeneration is a fundamental concept that extends from biology to optical communications. All-optical regeneration of coherent signal has attracted particular attention. Surprisingly, the quantitative impact of regeneration on the Shannon capacity has remained unstudied. Here we propose a new method of designing regenerative transmission systems with capacity that is higher than the corresponding linear channel, and illustrate it by proposing application of the Fourier transform for efficient regeneration of multilevel multidimensional signals. The regenerative Shannon limit—the upper bound of regeneration efficiency—is derived.

  5. Random amino acid mutations and protein misfolding lead to Shannon limit in sequence-structure communication.

    Directory of Open Access Journals (Sweden)

    Andreas Martin Lisewski

    2008-09-01

    Full Text Available The transmission of genomic information from coding sequence to protein structure during protein synthesis is subject to stochastic errors. To analyze transmission limits in the presence of spurious errors, Shannon's noisy channel theorem is applied to a communication channel between amino acid sequences and their structures, established from a large-scale statistical analysis of protein atomic coordinates. While Shannon's theorem confirms that in close-to-native conformations information is transmitted with limited error probability, additional random errors in sequence (amino acid substitutions) and in structure (structural defects) trigger a decrease in communication capacity toward a Shannon limit at 0.010 bits per amino acid symbol, at which communication breaks down. In several controls, simulated error rates above a critical threshold and models of unfolded structures always produce capacities below this limiting value. Thus an essential biological system can be realistically modeled as a digital communication channel that is (a) sensitive to random errors and (b) restricted by a Shannon error limit. This forms a novel basis for predictions consistent with observed rates of defective ribosomal products during protein synthesis, and with the estimated excess of mutual information in protein contact potentials.

  6. Performance Estimation for Lowpass Ternary Filters

    Directory of Open Access Journals (Sweden)

    Brenton Steele

    2003-11-01

    Full Text Available Ternary filters have tap values limited to −1, 0, or +1. This restriction in tap values greatly simplifies the multipliers required by the filter, making ternary filters very well suited to hardware implementations. Because they incorporate coarse quantisation, their performance is typically limited by tap quantisation error. This paper derives formulae for estimating the achievable performance of lowpass ternary filters, thereby allowing the number of computationally intensive design iterations to be reduced. Motivated by practical communications systems requirements, the performance measure which is used is the worst-case stopband attenuation.
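
    A quick way to see the coarse-quantisation limit is to ternarise a windowed-sinc lowpass prototype and sweep the magnitude threshold below which taps are zeroed. All the design numbers here (63 taps, cutoff 0.1, stopband edge 0.2, Hamming window) are assumptions, and this brute-force sweep is a stand-in for the design iterations the paper aims to reduce:

      import numpy as np

      N = 63
      n = np.arange(N) - (N - 1) / 2
      fc = 0.1                                           # normalised cutoff (assumed)
      h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(N)   # lowpass prototype

      def stopband_att_db(taps, f_stop=0.2, n_fft=4096):
          # Worst-case stopband level relative to DC, in dB (more negative = better).
          H = np.abs(np.fft.rfft(taps, n_fft))
          f = np.fft.rfftfreq(n_fft)
          return 20 * np.log10(H[f >= f_stop].max() / max(H[0], 1e-12))

      # Ternary quantisation: keep only the sign of taps whose magnitude exceeds t.
      best = min(
          (stopband_att_db(np.where(np.abs(h) > t, np.sign(h), 0.0)), t)
          for t in np.linspace(0.0, np.abs(h).max(), 200, endpoint=False)
      )
      print(f"prototype    : {stopband_att_db(h):6.1f} dB")
      print(f"best ternary : {best[0]:6.1f} dB  (threshold {best[1]:.4f})")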

  7. Software Code Smell Prediction Model Using Shannon, Rényi and Tsallis Entropies

    Directory of Open Access Journals (Sweden)

    Aakanshi Gupta

    2018-05-01

    Full Text Available The current era demands high-quality software in a limited time period to achieve new goals and heights. To meet user requirements, source code undergoes frequent modifications which can generate bad smells in software that deteriorate its quality and reliability. The source code of open source software is easily accessible by any developer, and thus frequently modified. In this paper, we have proposed a mathematical model to predict bad smells using the concept of entropy as defined by information theory. The open-source software Apache Abdera is taken into consideration for calculating the bad smells. Bad smells are collected using a detection tool from subcomponents of the Apache Abdera project, and different measures of entropy (Shannon, Rényi and Tsallis) are computed. By applying non-linear regression techniques, the bad smells that can arise in future versions of the software are predicted based on the observed bad smells and entropy measures. The proposed model has been validated using goodness-of-fit parameters (prediction error, bias, variation, and Root Mean Squared Prediction Error (RMSPE)). The values of the model performance statistics (R², adjusted R², Mean Square Error (MSE) and standard error) also justify the proposed model. We have compared the results of the prediction model with the observed results on real data. The results of the model might be helpful for software development industries and future researchers.
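
    The three entropy families used here are one-liners over an empirical distribution; a minimal sketch (the toy distribution and the α, q values are assumptions; Rényi and Tsallis both reduce to Shannon as α → 1 and q → 1):

      import numpy as np

      def shannon(p):
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      def renyi(p, alpha):
          # H_alpha = log2(sum p^alpha) / (1 - alpha), alpha > 0, alpha != 1
          return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

      def tsallis(p, q):
          # S_q = (1 - sum p^q) / (q - 1), q != 1
          return (1.0 - np.sum(p ** q)) / (q - 1.0)

      # e.g. relative frequencies of code changes across four subcomponents
      p = np.array([0.5, 0.25, 0.125, 0.125])
      print(shannon(p))        # 1.75 bits
      print(renyi(p, 2.0))     # collision entropy, ~1.54 bits
      print(tsallis(p, 2.0))   # ~0.656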

  8. Box-counting dimension revisited: presenting an efficient method of minimising quantisation error and an assessment of the self-similarity of structural root systems

    Directory of Open Access Journals (Sweden)

    Martin eBouda

    2016-02-01

    Full Text Available Fractal dimension (FD), estimated by box-counting, is a metric used to characterise plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantisation error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterise the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitised in 3D and subjected to box-counts. A pattern search algorithm was used to minimise QE by optimising grid placement and its efficiency was compared to the brute-force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE due to both grid position and orientation was a significant source of error in FD estimates, but pattern search provided an efficient means of minimising it. Pattern search had a higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute-force method. Our representations of coarse root system digitisations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did
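
    The grid-placement dependence and its minimisation are easy to reproduce in a few lines; this sketch substitutes a random-offset search for the paper's pattern search and uses a toy 2D point set, both assumptions:

      import numpy as np

      def box_count(points, size, offset):
          # Number of occupied boxes of side `size` for one grid placement.
          idx = np.floor((points - offset) / size).astype(int)
          return len({tuple(row) for row in idx})

      def fd_estimate(points, sizes, n_offsets=32, seed=0):
          # Box-counting dimension with QE reduced by minimising the count
          # over random grid translations at every scale before the slope fit.
          rng = np.random.default_rng(seed)
          counts = []
          for s in sizes:
              offs = rng.uniform(0.0, s, size=(n_offsets, points.shape[1]))
              counts.append(min(box_count(points, s, o) for o in offs))
          slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
          return slope

      # Sanity check: points along a straight segment should give FD close to 1.
      pts = np.column_stack([np.linspace(0, 1, 4000), np.linspace(0, 1, 4000)])
      sizes = np.array([0.2, 0.1, 0.05, 0.025, 0.0125])
      print(f"estimated FD: {fd_estimate(pts, sizes):.2f}")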

  9. The precision of textural analysis in {sup 18}F-FDG-PET scans of oesophageal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Doumou, Georgia; Siddique, Musib [King's College London, Division of Imaging Sciences and Biomedical Engineering, London (United Kingdom); Tsoumpas, Charalampos [King's College London, Division of Imaging Sciences and Biomedical Engineering, London (United Kingdom); University of Leeds, The Division of Medical Physics, Leeds (United Kingdom); Goh, Vicky [King's College London, Division of Imaging Sciences and Biomedical Engineering, London (United Kingdom); Guy's and St Thomas' Hospitals NHS Foundation Trust, Radiology Department, London (United Kingdom); Cook, Gary J. [King's College London, Division of Imaging Sciences and Biomedical Engineering, London (United Kingdom); Guy's and St Thomas' Hospitals NHS Foundation Trust, The PET Centre, London (United Kingdom); University of Leeds, The Division of Medical Physics, Leeds (United Kingdom); St Thomas' Hospital, Clinical PET Centre, Division of Imaging Sciences and Biomedical Engineering, Kings College London, London (United Kingdom)

    2015-09-15

    Measuring tumour heterogeneity by textural analysis in ¹⁸F-fluorodeoxyglucose positron emission tomography (¹⁸F-FDG PET) provides predictive and prognostic information, but technical aspects of image processing can influence parameter measurements. We therefore tested effects of image smoothing, segmentation and quantisation on the precision of heterogeneity measurements. Sixty-four ¹⁸F-FDG PET/CT images of oesophageal cancer were processed using different Gaussian smoothing levels (2.0, 2.5, 3.0, 3.5, 4.0 mm), maximum standardised uptake value (SUVmax) segmentation thresholds (45%, 50%, 55%, 60%) and quantisation (8, 16, 32, 64, 128 bin widths). Heterogeneity parameters included grey-level co-occurrence matrix (GLCM), grey-level run length matrix (GLRL), neighbourhood grey-tone difference matrix (NGTDM), grey-level size zone matrix (GLSZM) and fractal analysis methods. The concordance correlation coefficient (CCC) for the three processing variables was calculated for each heterogeneity parameter. Most parameters showed poor agreement between different bin widths (CCC median 0.08, range 0.004-0.99). Segmentation and smoothing showed smaller effects on precision (segmentation: CCC median 0.82, range 0.33-0.97; smoothing: CCC median 0.99, range 0.58-0.99). Smoothing and segmentation have only a small effect on the precision of heterogeneity measurements in ¹⁸F-FDG PET data. However, quantisation often has larger effects, highlighting a need for further evaluation and standardisation of parameters for multicentre studies. (orig.)

  10. Photoelectron antibunching and absorber theory

    International Nuclear Information System (INIS)

    Pegg, D.T.

    1980-01-01

    The recently detected photoelectron antibunching effect is considered to be evidence for the quantised electromagnetic field, i.e. for the existence of photons. Direct-action quantum absorber theory, on the other hand, has been developed on the basis that the quantised field is illusory, with quantisation being required only for atoms. In this paper it is shown that photoelectron antibunching is readily explicable in terms of absorber theory and in fact is directly attributable to the quantum nature of the emitting and detecting atoms alone. The physical nature of the reduction of the wavepacket associated with the detection process is briefly discussed in terms of absorber theory. (author)

  11. African Journal of Science and Technology (AJST) SUPERVISED ...

    African Journals Online (AJOL)

    NORBERT OPIYO AKECH

    Keywords: color image, kohonen, LVQ, classification, K-means. INTRODUCTION. In this paper the problem of color image quantisation is discussed. Color quantisation consists of two steps: template design, in which a reduced number of template colors (typically 8-256) is specified, and pixel mapping in which each color ...

  12. Introduction to quantum groups

    International Nuclear Information System (INIS)

    Sudbery, A.

    1996-01-01

    These pedagogical lectures contain some motivation for the study of quantum groups; a definition of 'quasi-triangular Hopf algebra' with explanations of all the concepts required to build it up; descriptions of quantised universal enveloping algebras and the quantum double; and an account of quantised function algebras and the action of quantum groups on quantum spaces. (author)

  13. Examination of the 0.7(2e²/h) feature in the quantised conduction of a quantum point contact: varying the effective g-factor with hydrostatic pressure

    International Nuclear Information System (INIS)

    Wirtz, R; Taylor, R.P.; Newbury, R.; Nicholls, J.T.; Tribe, W.R.; Simmons, M.Y.

    1999-01-01

    Full text: The conductance of a quasi-one-dimensional channel defined by a split-gate quantum point contact (QPC) on the surface of an AlGaAs/GaAs heterostructure shows quantised steps at n(2e²/h), where n is an integer. This experimental result is due to the reduction of the number of current-carrying one-dimensional subbands caused by narrowing the QPC. The theoretical explanation, however, does not take electron-electron interactions into account. Recently Thomas et al. discovered a new feature at a non-integral value of n ∼ 0.7 in very low-disorder samples (μ ∼ 450 m²V⁻¹s⁻¹) which may originate from electron-electron interactions (e.g. spin polarisation at zero magnetic field). We are currently investigating the 0.7 feature as a function of applied hydrostatic pressure. Hydrostatic pressure affects the band structure and therefore the effective mass and the effective g-factor. In the case of bulk GaAs, hydrostatic pressure reduces the magnitude of the effective g-factor, reaching a value of zero at approximately 1.7 × 10⁹ Pa. Using a non-magnetic BeCu clamp cell we achieve pressures up to 1 × 10⁹ Pa, reducing the effective g-factor by more than 60%, in a temperature range 30 mK to 300 K and at magnetic fields up to 17 T. We are therefore able to map the 0.7 feature as a function of p, T and B to assess the evidence for an electron-electron interaction driven origin of the 0.7 feature. We will present the preliminary results of our measurements

  14. Quantum X waves with orbital angular momentum in nonlinear dispersive media

    Science.gov (United States)

    Ornigotti, Marco; Conti, Claudio; Szameit, Alexander

    2018-06-01

    We present a complete and consistent quantum theory of generalised X waves with orbital angular momentum in dispersive media. We show that the resulting quantised light pulses are affected by neither dispersion nor diffraction and are therefore resilient against external perturbations. The nonlinear interaction of quantised X waves in quadratic and Kerr nonlinear media is also presented and studied in detail.

  15. Asymmetric Joint Source-Channel Coding for Correlated Sources with Blind HMM Estimation at the Receiver

    Directory of Open Access Journals (Sweden)

    Ser Javier Del

    2005-01-01

    Full Text Available We consider the case of two correlated sources whose correlation has memory, modelled by a hidden Markov chain. The paper studies the problem of reliable communication of the information sent by one source over an additive white Gaussian noise (AWGN) channel when the output of the other source is available as side information at the receiver. We assume that the receiver has no a priori knowledge of the correlation statistics between the sources. In particular, we propose the use of a turbo code for joint source-channel coding of the transmitted source. The joint decoder uses an iterative scheme where the unknown parameters of the correlation model are estimated jointly within the decoding process. It is shown that reliable communication is possible at signal-to-noise ratios close to the theoretical limits set by the combination of the Shannon and Slepian-Wolf theorems.
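
    As a toy illustration of the correlation model assumed here (all probabilities below are invented; in the paper they are unknown at the receiver and estimated blindly within the iterative decoder), a two-state hidden Markov chain can switch the crossover probability of a virtual channel between the two sources:

    ```python
    import numpy as np

    def correlated_sources(n, trans=0.05, p_cross=(0.01, 0.3), seed=0):
        """Two binary sources whose correlation has memory: a hidden two-state
        Markov chain picks the crossover probability linking x and y."""
        rng = np.random.default_rng(seed)
        x = rng.integers(0, 2, n)
        y = np.empty(n, dtype=int)
        state = 0
        for t in range(n):
            if rng.random() < trans:            # hidden Markov state transition
                state ^= 1
            y[t] = x[t] ^ int(rng.random() < p_cross[state])
        return x, y

    x, y = correlated_sources(10_000)
    print("empirical crossover rate:", np.mean(x != y))
    ```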

  16. EOG feature relevance determination for microsleep detection

    OpenAIRE

    Golz Martin; Wollner Sebastian; Sommer David; Schnieder Sebastian

    2017-01-01

    Automatic relevance determination (ARD) was applied to two-channel EOG recordings for microsleep event (MSE) recognition. 10 s immediately before MSE and also before counterexamples of fatigued, but attentive driving were analysed. Two types of signal features were extracted: the maximum cross correlation (MaxCC) and logarithmic power spectral densities (PSD) averaged in spectral bands of 0.5 Hz width ranging between 0 and 8 Hz. Generalised relevance learning vector quantisation (GRLVQ) was used as ARD...
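
    A sketch of how the two feature types could be computed with standard tools; the synthetic signals, sampling rate and Welch segment length are assumptions for illustration, not details taken from the study:

    ```python
    import numpy as np
    from scipy.signal import welch

    def max_cross_corr(a, b):
        """Maximum of the normalised cross-correlation between two channels."""
        a = (a - a.mean()) / (a.std() * len(a))
        b = (b - b.mean()) / b.std()
        return np.correlate(a, b, mode="full").max()

    def band_log_psd(x, fs, width=0.5, fmax=8.0):
        """Log PSD averaged in bands of `width` Hz between 0 and `fmax` Hz."""
        f, pxx = welch(x, fs=fs, nperseg=int(4 * fs))   # 0.25 Hz resolution
        edges = np.arange(0.0, fmax + width, width)
        return np.array([np.log(pxx[(f >= lo) & (f < hi)].mean())
                         for lo, hi in zip(edges[:-1], edges[1:])])

    fs = 128                                  # assumed sampling rate
    t = np.arange(10 * fs) / fs               # a 10 s segment, as in the study
    eog_left = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)
    eog_right = np.roll(eog_left, 20)         # second channel, slightly lagged
    print(max_cross_corr(eog_left, eog_right))
    print(band_log_psd(eog_left, fs))
    ```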

  17. Point and counterpoint between Mathematical Physics and Physical Mathematics

    Energy Technology Data Exchange (ETDEWEB)

    Leach, P G L; Nucci, M C, E-mail: leach@ucy.ac.c, E-mail: leachp@ukzn.ac.z, E-mail: leach@math.aegean.g, E-mail: nucci@unipg.i [Dipartimento di Matematica e Informatica, Universita di Perugia, 06123 Perugia (Italy)

    2010-06-01

    In recent years there has been a resurgence of interest in problems dating back for over half a century. In particular we refer to the questions of the consistency of quantisation and nonlinear canonical transformations and the quantisation of higher-order field theories. We present resolutions to these questions based upon considerations of symmetry. This enables one to examine these problems within the context of existing theory without the need to introduce new and exotic theories.

  18. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Shannon limit of the channel. Among the earliest discovered codes that approach the. Shannon limit were the low density parity check (LDPC) codes. The term low density arises from the property of the parity check matrix defining the code. We will now define this matrix and the role that it plays in decoding. 2. Linear Codes.
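
    A toy example of the matrix in question (far too small to be genuinely "low density", but enough to show its role): a word c is a codeword exactly when every parity check is satisfied, i.e. Hc = 0 (mod 2).

    ```python
    import numpy as np

    # Each row of the parity-check matrix H encodes one parity constraint
    # on the 6 code bits; sparse rows are what "low density" refers to.
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 0, 0, 1, 1]])

    def is_codeword(c):
        """True iff all parity checks are satisfied: H @ c = 0 (mod 2)."""
        return not np.any((H @ c) % 2)

    print(is_codeword(np.array([0, 0, 0, 0, 0, 0])))  # True: all-zero word
    print(is_codeword(np.array([1, 0, 1, 1, 1, 0])))  # True: all checks even
    ```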

  19. The significance of classical structures in quantum theories

    International Nuclear Information System (INIS)

    Lowe, M.J.

    1978-09-01

    The implications for the quantum theory of the presence of non-linear classical solutions of the equations of motion are investigated in various model systems under the headings: (1) Canonical quantisation of the soliton in λφ⁴ theory in two dimensions. (2) Bound for soliton masses in two-dimensional field theories. (3) The canonical quantisation of a soliton-like solution in the non-linear Schrödinger equation. (4) The significance of the instanton classical solution in a quantum mechanical system. (U.K.)

  20. Effect of low transverse magnetic field on the confinement strength in a quasi-1D wire

    International Nuclear Information System (INIS)

    Kumar, Sanjeev; Thomas, K. J.; Smith, L. W.; Farrer, I.; Ritchie, D. A.; Jones, G. A. C.; Griffiths, J.; Pepper, M.

    2013-01-01

    Transport measurements in a quasi-one-dimensional (1D) quantum wire are reported in the presence of a low transverse magnetic field. Differential conductance shows weak quantised plateaus when the 2D electrons are squeezed electrostatically. Application of a small transverse magnetic field (0.2 T) enhances the overall degree of quantisation due to the formation of magneto-electric subbands. The results show the role of the magnetic field in fine tuning the confinement strength in low density wires when interaction gives rise to double row formation.

  1. Quantisation of monotonic twist maps

    International Nuclear Information System (INIS)

    Boasman, P.A.; Smilansky, U.

    1993-08-01

    Using an approach suggested by Moser, classical Hamiltonians are generated that provide an interpolating flow to the stroboscopic motion of maps with a monotonic twist condition. The quantum properties of these Hamiltonians are then studied in analogy with recent work on the semiclassical quantization of systems based on Poincaré surfaces of section. For the generalized standard map, the correspondence with the usual classical and quantum results is shown, and the advantages of the quantum Moser Hamiltonian demonstrated. The same approach is then applied to the free motion of a particle on a 2-torus, and to the circle billiard. A natural quantization condition based on the eigenphases of the unitary time-development operator is applied, yielding the exact eigenvalues for the torus, but only the semiclassical eigenvalues for the billiard; an explanation for this failure is proposed. It is also seen how iterating the classical map commutes with the quantization. (authors)

  2. Construction of quantised Higgs-like fields in two dimensions

    International Nuclear Information System (INIS)

    Albeverio, S.; Hoeegh-Krohn, R.; Holden, H.; Kolsrud, T.

    1989-01-01

    A mathematical construction of Higgs-like fields in two dimensions is presented, including passage to the continuum and infinite volume limits. In the limit, a quantum field theory obeying the Osterwalder-Schrader axioms is obtained. The method is based on representing the Schwinger functions in terms of stochastic multiplicative curve integrals and Brownian bridges. (orig.)

  3. Translational invariance in bag model

    International Nuclear Information System (INIS)

    Megahed, F.

    1981-10-01

    In this thesis, the effect of restoring translational invariance to an approximation to the MIT bag model on the calculation of deep inelastic structure functions is investigated. In chapter one, the model and its major problems are reviewed and Dirac's method of quantisation is outlined. This method is used in chapter two to quantise a two-dimensional complex scalar bag, and formal expressions for the form factor and the structure functions are obtained. In chapter three, the expression for the structure function away from the Bjorken limit is studied. The corrections to the L₀ approximation to the structure function are calculated in chapter four and shown to be large. Finally, in chapter five, a bag-like model for kinematic corrections to structure functions is introduced and agreement with data between 2 and 6 (GeV/c)² is obtained. (author)

  4. Modern Canonical Quantum General Relativity

    Science.gov (United States)

    Thiemann, Thomas

    2008-11-01

    Preface; Notation and conventions; Introduction; Part I. Classical Foundations, Interpretation and the Canonical Quantisation Programme: 1. Classical Hamiltonian formulation of general relativity; 2. The problem of time, locality and the interpretation of quantum mechanics; 3. The programme of canonical quantisation; 4. The new canonical variables of Ashtekar for general relativity; Part II. Foundations of Modern Canonical Quantum General Relativity: 5. Introduction; 6. Step I: the holonomy-flux algebra [P]; 7. Step II: quantum *-algebra; 8. Step III: representation theory of [A]; 9. Step IV: 1. Implementation and solution of the kinematical constraints; 10. Step V: 2. Implementation and solution of the Hamiltonian constraint; 11. Step VI: semiclassical analysis; Part III. Physical Applications: 12. Extension to standard matter; 13. Kinematical geometrical operators; 14. Spin foam models; 15. Quantum black hole physics; 16. Applications to particle physics and quantum cosmology; 17. Loop quantum gravity phenomenology; Part IV. Mathematical Tools and their Connection to Physics: 18. Tools from general topology; 19. Differential, Riemannian, symplectic and complex geometry; 20. Semianalytical category; 21. Elements of fibre bundle theory; 22. Holonomies on non-trivial fibre bundles; 23. Geometric quantisation; 24. The Dirac algorithm for field theories with constraints; 25. Tools from measure theory; 26. Elementary introduction to Gel'fand theory for Abelian C* algebras; 27. Bohr compactification of the real line; 28. Operator *-algebras and the spectral theorem; 29. Refined algebraic quantisation (RAQ) and direct integral decomposition (DID); 30. Basics of harmonic analysis on compact Lie groups; 31. Spin network functions for SU(2); 32. Functional analytical description of classical connection dynamics; Bibliography; Index.

  5. PRELIMINARY STUDY ON APPLICATION OF MAX PLUS ALGEBRA IN DISTRIBUTED STORAGE SYSTEM THROUGH NETWORK CODING

    Directory of Open Access Journals (Sweden)

    Agus Maman Abadi

    2016-04-01

    Full Text Available The increasing need for techniques of storing big data presents a new challenge. One way to address this challenge is the use of distributed storage systems. One strategy implemented in distributed data storage systems is the use of erasure codes applied to network coding. The codes used in this technique are based on an algebraic structure called a vector space. Some studies have also been carried out to create codes based on other algebraic structures, such as modules. In this study, we set up a code based on a semimodule, an algebraic structure that generalises the module, by utilising the max and plus operations of max-plus algebra. The results indicate that the max operation and the addition operation of max-plus algebra cannot be used to establish a semimodule code, but that by replacing the operation "+" with "min" we obtain a code based on a semimodule. Keywords: code, distributed storage systems, network coding, semimodule, max plus algebra
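
    A small illustration of the operations involved (toy values): in max-plus algebra the semiring "addition" is max and the "multiplication" is ordinary +, with -∞ as the additive identity; the modification mentioned in the conclusion replaces max by min.

    ```python
    import numpy as np

    NEG_INF = -np.inf  # additive identity of the max-plus semiring

    def mp_add(a, b):
        """Max-plus 'addition': elementwise maximum."""
        return np.maximum(a, b)

    def mn_add(a, b):
        """The 'min' modification mentioned above: elementwise minimum."""
        return np.minimum(a, b)

    def mp_mul(A, B):
        """Max-plus 'matrix product': (A x B)_ij = max_k (A_ik + B_kj)."""
        return np.max(A[:, :, None] + B[None, :, :], axis=1)

    A = np.array([[0.0, 3.0], [NEG_INF, 1.0]])
    B = np.array([[2.0, NEG_INF], [0.0, 4.0]])
    print(mp_add(A, B))
    print(mp_mul(A, B))
    ```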

  6. V/V(max) test applied to SMM gamma-ray bursts

    Science.gov (United States)

    Matz, S. M.; Higdon, J. C.; Share, G. H.; Messina, D. C.; Iadicicco, A.

    1992-01-01

    We have applied the V/V(max) test to candidate gamma-ray bursts detected by the Gamma-Ray Spectrometer (GRS) aboard the SMM satellite to examine quantitatively the uniformity of the burst source population. For a sample of 132 candidate bursts identified in the GRS data by an automated search using a single uniform trigger criterion we find an average V/V(max) = 0.40 ± 0.025. This value is significantly different from 0.5, the average for a parent population of burst sources distributed uniformly in space; however, the shape of the observed distribution of V/V(max) is unusual and our result conflicts with previous measurements. For these reasons we can currently draw no firm conclusion about the distribution of burst sources.
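
    For reference, the statistic itself is simple under the standard Euclidean assumption V/V(max) = (C_p/C_lim)^(-3/2), where C_p is a burst's peak count rate and C_lim the detection threshold; the numbers below are invented, and the actual GRS trigger criterion is more involved.

    ```python
    import numpy as np

    def v_over_vmax(peak_counts, threshold):
        """Euclidean V/Vmax per burst. A burst exactly at threshold gives 1;
        a homogeneous population in flat space averages to 0.5."""
        return (np.asarray(peak_counts) / threshold) ** -1.5

    counts = [120.0, 95.0, 300.0, 80.0]     # illustrative peak count rates
    print(v_over_vmax(counts, threshold=75.0).mean())
    ```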

  7. Quantum Riemannian geometry of phase space and nonassociativity

    Directory of Open Access Journals (Sweden)

    Beggs Edwin J.

    2017-04-01

    Full Text Available Noncommutative or ‘quantum’ differential geometry has emerged in recent years as a process for quantizing not only a classical space into a noncommutative algebra (as familiar in quantum mechanics) but also differential forms, bundles and Riemannian structures at this level. The data for the algebra quantisation is a classical Poisson bracket, while the data for quantum differential forms is a Poisson-compatible connection. We give an introduction to our recent result whereby further classical data such as classical bundles, metrics etc. all become quantised in a canonical ‘functorial’ way, at least to 1st order in deformation theory. The theory imposes compatibility conditions between the classical Riemannian and Poisson structures, as well as new physics such as a typical nonassociativity of the differential structure at 2nd order. We develop in detail the case of ℂℙⁿ, where the commutation relations have the canonical form [wᵢ, w̄ⱼ] = iλδᵢⱼ, similar to the proposal of Penrose for quantum twistor space. Our work provides a canonical but ultimately nonassociative differential calculus on this algebra and quantises the metric and Levi-Civita connection at lowest order in λ.

  8. Robust FDI for a Class of Nonlinear Networked Systems with ROQs

    Directory of Open Access Journals (Sweden)

    An-quan Sun

    2014-01-01

    Full Text Available This paper considers the robust fault detection and isolation (FDI) problem for a class of nonlinear networked systems (NSs) with randomly occurring quantisations (ROQs). After vector augmentation, a Lyapunov function is introduced to ensure the asymptotically mean-square stability of the fault detection system. By transforming the quantisation effects into sector-bounded parameter uncertainties, sufficient conditions ensuring the existence of the fault detection filter are proposed, which make the difference between output residuals and fault signals as small as possible under the H∞ framework. Finally, an example linearized from a vehicle system is introduced to show the efficiency of the proposed fault detection filter.

  9. MAX: an expert system for running the modular transport code APOLLO II

    International Nuclear Information System (INIS)

    Loussouarn, O.; Ferraris, C.; Boivineau, A.

    1990-01-01

    MAX is an expert system built to help users of the APOLLO II code to prepare the input data deck to run a job. APOLLO II is a modular transport-theory code for calculating the neutron flux in various geometries. The associated GIBIANE command language allows the user to specify the physical structure and the computational method to be used in the calculation. The purpose of MAX is to bring into play expertise in both neutronic and computing aspects of the code, as well as various computational schemes, in order to generate automatically a batch data set corresponding to the APOLLO II calculation desired by the user. MAX is implemented on the SUN 3/60 workstation with the S1 tool and graphic interface external functions

  10. School Dress Codes and Uniform Policies.

    Science.gov (United States)

    Anderson, Wendell

    2002-01-01

    Opinions abound on what students should wear to class. Some see student dress as a safety issue; others see it as a student-rights issue. The issue of dress codes and uniform policies has been tackled in the classroom, the boardroom, and the courtroom. This Policy Report examines the whole fabric of the debate on dress codes and uniform policies…

  11. Skin carcinogenesis following uniform and non-uniform β irradiation

    International Nuclear Information System (INIS)

    Charles, M.W.; Williams, J.P.; Coggle, J.E.

    1989-01-01

    Where workers or the general public may be exposed to ionising radiation, the irradiation is rarely uniform. The risk figures and dose limits recommended by the International Commission on Radiological Protection (ICRP) are based largely on clinical and epidemiological studies of reasonably uniformly irradiated organs. The paucity of clinical or experimental data for highly non-uniform exposures has prevented the ICRP from providing adequate recommendations. This weakness has led on a number of occasions to the postulate that highly non-uniform exposures of organs could be 100,000 times more carcinogenic than ICRP risk figures would predict. This so-called 'hot-particle hypothesis' found little support among reputable radiobiologists, but could not be clearly and definitively refuted on the basis of experiment. An experiment based on skin tumour induction in mouse skin, developed to test the hypothesis, is described. The skin of 1200 SAS/4 male mice has been exposed to a range of uniform and non-uniform sources of the β emitter ¹⁷⁰Tm (E_max ∼ 1 MeV). Non-uniform exposures were produced using arrays of 32 or 8 2-mm diameter sources distributed over the same 8 cm² area as a uniform control source. Average skin doses varied from 2-100 Gy. The results for the non-uniform sources show a 30% reduction in tumour incidence for the 32-point array at the lower mean doses compared with the response from uniform sources. The eight-point array showed an order-of-magnitude reduction in tumour incidence compared to uniform irradiation at low doses. These results, in direct contradiction to the 'hot-particle hypothesis', indicate that non-uniform exposures produce significantly fewer tumours than uniform exposures. (author)

  12. Image Vector Quantization codec indexes filtering

    Directory of Open Access Journals (Sweden)

    Lakhdar Moulay Abdelmounaim

    2012-01-01

    Full Text Available Vector Quantisation (VQ) is an efficient coding algorithm that has been widely used in the field of video and image coding, due to its fast decoding efficiency. However, the indexes of VQ are sometimes lost because of signal interference during transmission. In this paper, we propose an efficient estimation method to conceal and recover lost indexes on the decoder side, to avoid re-transmitting the whole image. If the image or video has a limited period of validity, re-transmitting the data wastes time and network bandwidth. Therefore, using the correctly received data to estimate and recover the lost data is efficient in time-constrained situations, such as network conferencing or mobile transmissions. In natural images, pixels are correlated with their neighbours; VQ partitions the image into sub-blocks and quantises them to the indexes that are transmitted, so the correlation between adjacent indexes is very strong. The proposed method has two parts. The first is pre-processing and the second is an estimation process. In pre-processing, we modify the order of codevectors in the VQ codebook to increase the correlation among neighbouring vectors. We then use a special filtering method in the estimation process. Using conventional VQ to compress the Lena image and transmitting it without any loss of index achieves a PSNR of 30.429 dB at the decoder. The simulation results demonstrate that our method can estimate the indexes to achieve PSNR values of 29.084 and 28.327 dB when the loss rate is 0.5% and 1%, respectively.
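
    The estimation step can be sketched generically as follows; the median-of-neighbours filter is a stand-in assumption rather than the authors' exact filter, and it relies on the pre-processing step having reordered the codebook so that numerically close indexes denote visually similar codevectors.

    ```python
    import numpy as np

    def conceal_indexes(idx, lost_mask):
        """Replace each lost VQ index by the median of its available
        4-neighbours; sensible only after codebook reordering has made
        numerically close indexes correspond to similar codevectors."""
        out = idx.copy()
        H, W = idx.shape
        for i, j in zip(*np.nonzero(lost_mask)):
            neigh = [idx[a, b]
                     for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                     if 0 <= a < H and 0 <= b < W and not lost_mask[a, b]]
            if neigh:
                out[i, j] = int(np.median(neigh))
        return out

    idx = np.arange(16).reshape(4, 4)          # toy index map
    lost = np.zeros_like(idx, dtype=bool)
    lost[1, 2] = True                          # pretend one index was lost
    print(conceal_indexes(idx, lost))
    ```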

  13. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    Full Text Available We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
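
    A sketch of the systematic encoding idea (toy dimensions; the concatenated construction the paper uses to lower the error floor is not shown): with a sparse generator matrix, a systematic codeword is the message followed by parity bits that each sum only a few message bits.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    k, m = 8, 4                       # message bits, parity bits (toy sizes)
    # Sparse part of the systematic generator matrix G = [I | P]:
    P = (rng.random((k, m)) < 0.3).astype(int)   # low-density columns

    def ldgm_encode(u):
        """Systematic LDGM encoding: codeword = [message | parity]."""
        parity = (u @ P) % 2          # each parity bit sums a few message bits
        return np.concatenate([u, parity])

    u = rng.integers(0, 2, k)
    print(ldgm_encode(u))
    ```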

  14. Uniform emergency codes: will they improve safety?

    Science.gov (United States)

    2005-01-01

    There are pros and cons to uniform code systems, according to emergency medicine experts. Uniformity can be a benefit when ED nurses and other staff work at several facilities. It's critical that your staff understand not only what the codes stand for, but what they must do when codes are called. If your state institutes a new system, be sure to hold regular drills to familiarize your ED staff.

  15. Dynamics of quantised vortices in superfluids

    CERN Document Server

    Sonin, Edouard B

    2016-01-01

    A comprehensive overview of the basic principles of vortex dynamics in superfluids, this book addresses the problems of vortex dynamics in all three superfluids available in laboratories (4He, 3He, and BEC of cold atoms) alongside discussions of the elasticity of vortices, forces on vortices, and vortex mass. Beginning with a summary of classical hydrodynamics, the book guides the reader through examinations of vortex dynamics from large scales to the microscopic scale. Topics such as vortex arrays in rotating superfluids, bound states in vortex cores and interaction of vortices with quasiparticles are discussed. The final chapter of the book considers implications of vortex dynamics to superfluid turbulence using simple scaling and symmetry arguments. Written from a unified point of view that avoids complicated mathematical approaches, this text is ideal for students and researchers working with vortex dynamics in superfluids, superconductors, magnetically ordered materials, neutron stars and cosmological mo...

  16. Newtonian quantum gravity

    International Nuclear Information System (INIS)

    Jones, K.R.W.

    1995-01-01

    We develop a nonlinear quantum theory of Newtonian gravity consistent with an objective interpretation of the wavefunction. Inspired by the ideas of Schroedinger, and Bell, we seek a dimensional reduction procedure to map complex wavefunctions in configuration space onto a family of observable fields in space-time. Consideration of quasi-classical conservation laws selects the reduced one-body quantities as the basis for an explicit quasi-classical coarse-graining. These we interpret as describing the objective reality of the laboratory. Thereafter, we examine what may stand in the role of the usual Copenhagen observer to localise this quantity against macroscopic dispersion. Only a tiny change is needed, via a generically attractive self-potential. A nonlinear treatment of gravitational self-energy is thus advanced. This term sets a scale for all wavepackets. The Newtonian cosmology is thus closed, without need of an external observer. Finally, the concept of quantisation is re-interpreted as a nonlinear eigenvalue problem. To illustrate, we exhibit an elementary family of gravitationally self-bound solitary waves. Contrasting this theory with its canonically quantised analogue, we find that the given interpretation is empirically distinguishable, in principle. This result encourages deeper study of nonlinear field theories as a testable alternative to canonically quantised gravity. (author). 46 refs., 5 figs

  17. Min-Max decoding for non binary LDPC codes

    OpenAIRE

    Savin, Valentin

    2008-01-01

    Iterative decoding of non-binary LDPC codes is currently performed using either the Sum-Product or the Min-Sum algorithms or slightly different versions of them. In this paper, several low-complexity quasi-optimal iterative algorithms are proposed for decoding non-binary codes. The Min-Max algorithm is one of them and it has the benefit of two possible LLR domain implementations: a standard implementation, whose complexity scales as the square of the Galois field's cardinality and a reduced c...

  18. Gravitation and cosmology with York time

    Science.gov (United States)

    Roser, Philipp

    Despite decades of inquiry an adequate theory of 'quantum gravity' has remained elusive, in part due to the absence of data that would guide the search and in part due to technical difficulties, prominent among them the 'problem of time'. The problem is a result of the attempt to quantise a classical theory with temporal reparameterisation and refoliation invariance, such as general relativity. One way forward is therefore the breaking of this invariance via the identification of a preferred foliation of spacetime into parameterised spatial slices. In this thesis we argue that a foliation into slices of constant extrinsic curvature, parameterised by 'York time', is a viable contender. We argue that the role of York time in the initial-value problem of general relativity, as well as a number of the parameter's other properties, makes it the most promising candidate for a physically preferred notion of time. A Hamiltonian theory describing gravity in the York-time picture may be derived from general relativity by 'Hamiltonian reduction', a procedure that eliminates certain degrees of freedom (specifically the local scale and its rate of change) in favour of an explicit time parameter and a functional expression for the associated Hamiltonian. In full generality this procedure is impossible to carry out, since the equation that determines the Hamiltonian cannot be solved using known methods. However, it is possible to derive explicit Hamiltonian functions for cosmological scenarios (where matter and geometry are treated as spatially homogeneous). Using a perturbative expansion of the unsolvable equation enables us to derive a quantisable Hamiltonian for cosmological perturbations on such a homogeneous background. We analyse the (classical) theories derived in this manner and look at the York-time description of a number of cosmological processes. We then proceed to apply the canonical quantisation procedure to these systems and analyse the resulting quantum theories.

  19. Impact of School Uniforms on Student Discipline and the Learning Climate: A Comparative Case Study of Two Middle Schools with Uniform Dress Codes and Two Middle Schools without Uniform Dress Codes

    Science.gov (United States)

    Dulin, Charles Dewitt

    2016-01-01

    The purpose of this research is to evaluate the impact of uniform dress codes on a school's climate for student behavior and learning in four middle schools in North Carolina. The research will compare the perceptions of parents, teachers, and administrators in schools with uniform dress codes against schools without uniform dress codes. This…

  20. Active Fault Near-Source Zones Within and Bordering the State of California for the 1997 Uniform Building Code

    Science.gov (United States)

    Petersen, M.D.; Toppozada, Tousson R.; Cao, T.; Cramer, C.H.; Reichle, M.S.; Bryant, W.A.

    2000-01-01

    The fault sources in the Project 97 probabilistic seismic hazard maps for the state of California were used to construct maps for defining near-source seismic coefficients, Na and Nv, incorporated in the 1997 Uniform Building Code (ICBO 1997). The near-source factors are based on the distance from a known active fault that is classified as either Type A or Type B. To determine the near-source factor, four pieces of geologic information are required: (1) recognizing a fault and determining whether or not the fault has been active during the Holocene, (2) identifying the location of the fault at or beneath the ground surface, (3) estimating the slip rate of the fault, and (4) estimating the maximum earthquake magnitude for each fault segment. This paper describes the information used to produce the fault classifications and distances.

  1. Anosov actions on non-commutative algebras

    International Nuclear Information System (INIS)

    Emch, G.G.; Narnhofer, H.; Thirring, W.; Sewell, G.L.

    1994-01-01

    We construct an axiomatic framework for a quantum mechanical extension to the theory of Anosov systems, and show that this retains some of the characteristic features of its classical counterpart, e.g. positive Lyapunov exponents, a vectorial K-property, and exponential clustering. We then investigate the effects of quantisation on two prototype examples of Anosov systems, namely the iterations of an automorphism of the torus (the 'Arnold Cat' model) and the free dynamics of a particle on a surface of negative curvature. It emerges that the Anosov property survives quantisation in the case of the former model, but not of the latter one. Finally, we show that the modular dynamics of a relativistic quantum field on the Rindler wedge of Minkowski space is that of an Anosov system. (authors)

  2. Frank Lloyd Wright in the Soviet Union

    Directory of Open Access Journals (Sweden)

    Brian A. Spencer

    2017-12-01

    Full Text Available In 1937 the First All-Union Congress of Soviet Architects was held in Moscow. The congress brought architects from all areas of the Soviet Union. Under the auspices of Vsesoiuznoe Obshchestvo Kul'turnoi Sviazi s zagranitsei (VOKS), it invited international architects from Europe and North and South America. The Organizing Committee of the Union of Soviet Architects invited Frank Lloyd Wright from the United States. Frank Lloyd Wright presented his philosophy and exhibited his work, specifically his designs for the weekend home for E. J. Kaufmann, "Fallingwater", and the drawings for the S.C. Johnson Administration Building. Frank Lloyd Wright's presentation did not focus heavily on the architecture but rather on the spirit of the Russian and Soviet vision.

  3. Echo-waveform classification using model and model free techniques: Experimental study results from central western continental shelf of India

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.; Navelkar, G.S.; Desai, R.G.P.; Janakiraman, G.; Mahale, V.; Fernandes, W.A.; Rao, N.

    seafloor of India, but unable to provide a suitable means for seafloor classification. This paper also suggests a hybrid artificial neural network (ANN) architecture, i.e., Learning Vector Quantisation (LVQ), for seafloor classification. An analysis...

  4. The relation between classical and quantum mechanics

    International Nuclear Information System (INIS)

    Taylor, Peter.

    1984-01-01

    The thesis examines the relationship between classical and quantum mechanics from philosophical, mathematical and physical standpoints. Arguments are presented in favour of 'conjectural realism' in scientific theories, distinguished by explicit contextual structure and empirical testability. The formulations of classical and quantum mechanics, based on a general theory of mechanics, are investigated, as well as the mathematical treatments of these subjects. Finally the thesis questions the validity of 'classical limits' and 'quantisations' in intertheoretic reduction. (UK)

  5. CMOS SPAD-based image sensor for single photon counting and time of flight imaging

    OpenAIRE

    Dutton, Neale Arthur William

    2016-01-01

    The facility to capture the arrival of a single photon is the fundamental limit to the detection of quantised electromagnetic radiation. An image sensor capable of capturing a picture with this ultimate optical and temporal precision is the pinnacle of photo-sensing. The creation of high spatial resolution, single photon sensitive, and time-resolved image sensors in complementary metal oxide semiconductor (CMOS) technology offers numerous benefits in a wide field of applications....

  6. Evidence for Quantisation in Planetary Ring Systems

    OpenAIRE

    WAYTE, RICHARD

    2017-01-01

    Absolute radial positions of the main features in Saturn's ring system have been calculated by adapting the quantum theory of atomic spectra. Fine rings superimposed upon broad rings are found to be covered by a harmonic series of the form N ∝ A(r)^(1/2), where N and A are integers. Fourier analysis of the ring system shows that the spectral amplitude fits a response profile which is characteristic of a resonant system. Rings of Jupiter, Uranus and Neptune also obey the same rules. Involvement o...

  7. Intracavitary dosimetry of a high-activity remote loading device with oscillating source

    International Nuclear Information System (INIS)

    Arcovito, G.; Piermattei, A.; D'Abramo, G.; Bassi, F.A.

    1984-01-01

    Dosimetric experiments have been carried out in water around a Fletcher applicator loaded by a Buchler system containing two ¹³⁷Cs 148 GBq (4 Ci) sources and one ¹⁹²Ir 740 GBq (20 Ci) source. The mechanical system which controls the movement of the ¹⁹²Ir source and the resulting motion of the source are described. The dose distribution around the sources was measured photographically and by a PWT Normal 0.22 cm³ ionisation chamber. The absolute dose rate was measured along the lateral axes of the sources. The measurements of exposure in water near the sources were corrected for the effect due to the finite volume of the chamber. The 'quantisation method' described by Cassell (1983) was utilised to calculate the variation of the dose rate along the lateral axes of the sources. The dose distribution around both ¹⁹²Ir and ¹³⁷Cs sources was found to be spherical for angles greater than 40° from the longitudinal axes of the sources. A simple algorithm fitting the data for the moving ¹⁹²Ir source is proposed. A program written in FORTRAN IV and run on a Univac 1100/80 computer has been used to plot dose distributions on anatomical data obtained from CT images. (author)

  8. Geochemical characterization of oceanic basalts using artificial neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Das, P.; Iyer, S.D.

    method is specifically needed to identify the OFB as normal (N-MORB), enriched (E-MORB) and ocean island basalts (OIB). Artificial Neural Network (ANN) technique as a supervised Learning Vector Quantisation (LVQ) is applied to identify the inherent...

  9. Quantum localisation on the circle

    Science.gov (United States)

    Fresneda, Rodrigo; Gazeau, Jean Pierre; Noguera, Diego

    2018-05-01

    Covariant integral quantisation using coherent states for semi-direct product groups is implemented for the motion of a particle on the circle. In this case, the phase space is the cylinder, which is viewed as a left coset of the Euclidean group E(2). Coherent states issued from fiducial vectors are labeled by points in the cylinder and depend also on extra parameters. We carry out the corresponding quantisations of the basic classical observables, particularly the angular momentum and the 2π-periodic discontinuous angle function. We compute their corresponding lower symbols. The quantum localisation on the circle is examined through the properties of the angle operator yielded by our procedure, its spectrum and lower symbol, its commutator with the quantum angular momentum, and the resulting Heisenberg inequality. Comparison with other approaches to the long-standing question of the quantum angle is discussed.

  10. Quantum mechanics of a generalised rigid body

    International Nuclear Information System (INIS)

    Gripaios, Ben; Sutherland, Dave

    2016-01-01

    We consider the quantum version of Arnold’s generalisation of a rigid body in classical mechanics. Thus, we quantise the motion on an arbitrary Lie group manifold of a particle whose classical trajectories correspond to the geodesics of any one-sided-invariant metric. We show how the derivation of the spectrum of energy eigenstates can be simplified by making use of automorphisms of the Lie algebra and (for groups of type I) by methods of harmonic analysis. We show how the method can be extended to cosets, generalising the linear rigid rotor. As examples, we consider all connected and simply connected Lie groups up to dimension 3. This includes the universal cover of the archetypical rigid body, along with a number of new exactly solvable models. We also discuss a possible application to the topical problem of quantising a perfect fluid. (paper)

  11. Introduction to gauge field theory

    International Nuclear Information System (INIS)

    Bailin, D.; Love, A.

    1986-01-01

    This book provides a postgraduate level introduction to gauge field theory entirely from a path integral standpoint, without any reliance on the more traditional method of canonical quantisation. The ideas are developed by quantising the self-interacting scalar field theory, and are then used to deal with all the gauge field theories relevant to particle physics: quantum electrodynamics, quantum chromodynamics, electroweak theory, grand unified theories, and field theories at non-zero temperature. The use of these theories to make precise experimental predictions requires the development of the renormalised theories. The book assumes a knowledge of relativistic quantum mechanics, but not of quantum field theory. The topics covered form a foundation for a knowledge of modern relativistic quantum field theory, providing comprehensive coverage with emphasis on the details of actual calculations rather than the phenomenology of the applications.

  12. Student Dress Codes and Uniforms. Research Brief

    Science.gov (United States)

    Johnston, Howard

    2009-01-01

    According to an Education Commission of the States "Policy Report", research on the effects of dress code and school uniform policies is inconclusive and mixed. Some researchers find positive effects; others claim no effects or only perceived effects. While no state has legislatively mandated the wearing of school uniforms, 28 states and…

  13. Advancing Shannon Entropy for Measuring Diversity in Systems

    Directory of Open Access Journals (Sweden)

    R. Rajaram

    2017-01-01

    Full Text Available From economic inequality and species diversity to power laws and the analysis of multiple trends and trajectories, diversity within systems is a major issue for science. Part of the challenge is measuring it. Shannon entropy H has been used to rethink diversity within probability distributions, based on the notion of information. However, there are two major limitations to Shannon's approach. First, it cannot be used to compare diversity distributions that have different levels of scale. Second, it cannot be used to compare parts of diversity distributions to the whole. To address these limitations, we introduce a renormalization of probability distributions based on the notion of case-based entropy Cc as a function of the cumulative probability c. Given a probability density p(x), Cc measures the diversity of the distribution up to a cumulative probability of c, by computing the length or support of an equivalent uniform distribution that has the same Shannon information as the conditional distribution of p^c(x) up to cumulative probability c. We illustrate the utility of our approach by renormalizing and comparing three well-known energy distributions in physics, namely, the Maxwell-Boltzmann, Bose-Einstein, and Fermi-Dirac distributions for the energy of subatomic particles. The comparison shows that Cc is a vast improvement over H, as it provides a scale-free comparison of these diversity distributions and also allows for a comparison between parts of these diversity distributions.
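
    A numerical reading of this definition, under assumptions the abstract leaves open (a discrete distribution, base-2 logarithms, and states ordered by decreasing probability before accumulating to c):

    ```python
    import numpy as np

    def case_based_entropy(p, c):
        """Support of a uniform distribution with the same Shannon information
        as the conditional distribution of p up to cumulative probability c."""
        p = np.sort(np.asarray(p, dtype=float))[::-1]   # assumed ordering
        p = p / p.sum()
        cum = np.cumsum(p)
        pc = p[: np.searchsorted(cum, c) + 1]
        pc = pc / pc.sum()                     # conditional distribution up to c
        h = -(pc * np.log2(pc)).sum()          # Shannon entropy in bits
        return 2.0 ** h                        # equivalent uniform support

    uniform = np.ones(8) / 8
    skewed = np.array([0.5, 0.2, 0.1, 0.08, 0.06, 0.03, 0.02, 0.01])
    print(case_based_entropy(uniform, 1.0))    # 8.0: maximally diverse
    print(case_based_entropy(skewed, 1.0))     # smaller effective support
    ```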

  14. A massive spinless particle and the unit of length in a spinor geometry

    International Nuclear Information System (INIS)

    Lynch, J.T.

    1999-01-01

    The field equations of a spinor geometry are solved for a massive spinless particle. The particle has a composite internal structure, a quantised rest-mass, and a positive-definite and everywhere finite mass density. The particle is stable in isolation, but evidently unstable in the presence of fields due to external sources, such as the electromagnetic fields of particle detectors. On identifying the particle as a neutral meson, the unit of length of the geometry turns out to be approximately 10⁻¹⁵ m

  15. MAXED, a computer code for the deconvolution of multisphere neutron spectrometer data using the maximum entropy method

    International Nuclear Information System (INIS)

    Reginatto, M.; Goldhagen, P.

    1998-06-01

    The problem of analyzing data from a multisphere neutron spectrometer to infer the energy spectrum of the incident neutrons is discussed. The main features of the code MAXED, a computer program developed to apply the maximum entropy principle to the deconvolution (unfolding) of multisphere neutron spectrometer data, are described, and the use of the code is illustrated with an example. A user's guide for the code MAXED is included in an appendix. The code is available from the authors upon request

  16. Consumer-led health-related online sources and their impact on consumers: An integrative review of the literature.

    Science.gov (United States)

    Laukka, Elina; Rantakokko, Piia; Suhonen, Marjo

    2017-04-01

    The aim of the review was to describe consumer-led health-related online sources and their impact on consumers. The review was carried out as an integrative literature review. Quantisation and qualitative content analysis were used as the analysis methods. The most common method used by the included studies was qualitative content analysis. This review identified the consumer-led health-related online sources used between 2009 and 2016 as health-related online communities, health-related social networking sites and health-related rating websites. These sources had an impact on peer support; empowerment; health literacy; physical, mental and emotional wellbeing; illness management; and relationships between healthcare organisations and consumers. Knowledge of the existence of these health-related online sources provides healthcare organisations with an opportunity to listen to their consumers' 'voice'. The sources make healthcare consumers more competent actors in relation to healthcare, and knowledge of them is a valuable resource for healthcare organisations. Additionally, these health-related online sources might create an opportunity to reduce the need for drifting among the healthcare services.

  17. Ranking Based Locality Sensitive Hashing Enabled Cancelable Biometrics: Index-of-Max Hashing

    OpenAIRE

    Jin, Zhe; Lai, Yen-Lung; Hwang, Jung-Yeon; Kim, Soohyung; Teoh, Andrew Beng Jin

    2017-01-01

    In this paper, we propose a ranking based locality sensitive hashing inspired two-factor cancelable biometrics, dubbed "Index-of-Max" (IoM) hashing, for biometric template protection. With externally generated random parameters, IoM hashing transforms a real-valued biometric feature vector into a discrete index (max ranked) hashed code. We demonstrate two realizations of the IoM hashing notion, namely a Gaussian Random Projection based and a Uniformly Random Permutation based hashing scheme. The disc...
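
    A sketch of the Gaussian Random Projection realization as described: m externally generated sets of q random directions, keeping per set only the index of the maximum projection (m, q and the feature dimension below are illustrative):

    ```python
    import numpy as np

    def iom_grp_hash(x, m=64, q=16, seed=0):
        """IoM hashing, Gaussian Random Projection flavour: m sets of q random
        directions; keep only the argmax index of each set of projections."""
        rng = np.random.default_rng(seed)      # externally generated parameters
        W = rng.standard_normal((m, q, x.size))
        return (W @ x).argmax(axis=1)          # m discrete indexes in [0, q)

    feature = np.random.default_rng(1).standard_normal(128)  # toy feature vector
    print(iom_grp_hash(feature)[:8])
    ```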

  18. Fulltext PDF

    Indian Academy of Sciences (India)

    During that year, he published a set of four papers entitled 'Quantisation as ... Schrödinger says in the preface to his My View of the World in 1960: "In 1918, when I was ..." honoured the founders of quantum mechanics; the prize for 1932 was ...

  19. 2 + 1 quantum gravity as a toy model for the 3 + 1 theory

    International Nuclear Information System (INIS)

    Ashtekar, A.; Husain, V.; Smolin, L.; Samuel, J.; Utah Univ., Salt Lake City, UT

    1989-01-01

    2 + 1 Einstein gravity is used as a toy model for testing a program for non-perturbative canonical quantisation of the 3 + 1 theory. The program can be successfully implemented in the model and leads to a surprisingly rich quantum theory. (author)

  20. From classical to quantum fields

    CERN Document Server

    Baulieu, Laurent; Sénéor, Roland

    2017-01-01

    Quantum Field Theory has become the universal language of most modern theoretical physics. This introductory textbook shows how this beautiful theory offers the correct mathematical framework to describe and understand the fundamental interactions of elementary particles. The book begins with a brief reminder of basic classical field theories, electrodynamics and general relativity, as well as their symmetry properties, and proceeds with the principles of quantisation following Feynman's path integral approach. Special care is used at every step to illustrate the correct mathematical formulation of the underlying assumptions. Gauge theories and the problems encountered in their quantisation are discussed in detail. The last chapters contain a full description of the Standard Model of particle physics and the attempts to go beyond it, such as grand unified theories and supersymmetry. Written for advanced undergraduate and beginning graduate students in physics and mathematics, the book could also serve as a re...

  1. K-means Clustering: Lloyd's algorithm

    Indian Academy of Sciences (India)

    K-means Clustering: Lloyd's algorithm. Refines clusters iteratively. Cluster points using Voronoi partitioning of the centers; centroids of the clusters determine the new centers. Bad example: k = 3, n = 4.
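
    The two refinement steps translate directly into code; a minimal sketch with random initialisation, on which (as the slide's "bad example" warns) the final clusters depend:

    ```python
    import numpy as np

    def lloyd(points, k, iters=100, seed=0):
        """Lloyd's algorithm: alternate Voronoi assignment and centroid update."""
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), k, replace=False)]
        for _ in range(iters):
            # Voronoi partitioning: each point joins its nearest center.
            labels = ((points[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
            # Centroids of the clusters become the new centers.
            new = np.array([points[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        return centers, labels

    pts = np.random.default_rng(1).normal(size=(200, 2))
    centers, labels = lloyd(pts, k=3)
    ```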

  2. Devaney's chaos on uniform limit maps

    International Nuclear Information System (INIS)

    Yan Kesong; Zeng Fanping; Zhang Gengrong

    2011-01-01

    Highlights: → Transitivity may not be inherited even if the sequence functions are mixing. → Sensitivity may not be inherited even if the iterates of the sequence have some uniform convergence. → Some equivalence conditions for the transitivity and sensitivity of the uniform limit function are given. → A non-transitive sequence may converge uniformly to a transitive map. - Abstract: Let (X, d) be a compact metric space and fₙ: X → X a sequence of continuous maps such that (fₙ) converges uniformly to a map f. The purpose of this paper is to study Devaney's chaos on the uniform limit f. On the one hand, we show that f is not necessarily transitive even if all fₙ are mixing, and the sensitive dependence on initial conditions may not be inherited by f even if the iterates of the sequence have some uniform convergence, which corrects two wrong claims in . On the other hand, we give some equivalence conditions for the uniform limit f to be transitive and to have sensitive dependence on initial conditions. Moreover, we present an example to show that a non-transitive sequence may converge uniformly to a transitive map.

  3. Long-Time Dynamics of Open Quantum Systems

    DEFF Research Database (Denmark)

    Westrich, Matthias

    Matthias Westrich studied the effective evolution of atoms coupled to quantised fields and determined the rate at which atoms relax to their ground state. For positive temperature, he also characterised properties of (quasi-)stationary states for atoms in contact with a heat bath and driven...

  4. Topological properties and global structure of space-time

    International Nuclear Information System (INIS)

    Bergmann, P.G.; De Sabbata, V.

    1986-01-01

    This book presents information on the following topics: measurement of gravity and gauge fields using quantum mechanical probes; gravitation at spatial infinity; field theories on supermanifolds; supergravities and Kaluza-Klein theories; boundary conditions at spatial infinity; singularities - global and local aspects; matter at the horizon of the Schwarzschild black hole; introduction to string theories; cosmic censorship and the strengths of singularities; conformal quantisation in singular spacetimes; solar system tests in transition; integration and global aspects of supermanifolds; the space-time of the bimetric general relativity theory; gravitation without Lorentz invariance; a uniform static magnetic field in Kaluza-Klein theory; introduction to topological geons; and a simple model of a non-asymptotically flat Schwarzschild black hole

  5. Phase space approach to quantum dynamics

    International Nuclear Information System (INIS)

    Leboeuf, P.

    1991-03-01

    The Schroedinger equation for the time propagation of states of a quantised two-dimensional spherical phase space is replaced by the dynamics of a system of N particles lying in phase space. This is done through factorization formulae of analytic function theory arising in the coherent-state representation, the 'particles' being the zeros of the quantum state. For linear Hamiltonians, like a spin in a uniform magnetic field, the motion of the particles is classical. However, non-linear terms induce interactions between the particles. Their time propagation is studied and it is shown that, contrary to integrable systems, for chaotic maps the particles tend to fill the whole phase space, as does their classical counterpart. (author) 13 refs., 3 figs

  6. Prostate dose calculations for permanent implants using the MCNPX code and the Voxels phantom MAX

    Energy Technology Data Exchange (ETDEWEB)

    Reis Junior, Juraci Passos dos; Silva, Ademir Xavier da, E-mail: jjunior@con.ufrj.b, E-mail: Ademir@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear; Facure, Alessandro N.S., E-mail: facure@cnen.gov.b [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)

    2010-07-01

    This paper presents the modelling of 80, 88 and 100 ¹²⁵I seeds, as point and volumetric sources, inserted into a spherical phantom volume representing the prostate and into the prostate of the voxel phantom MAX. Starting values of minimum and maximum activity, 0.27 mCi and 0.38 mCi, respectively, were simulated with the Monte Carlo code MCNPX in order to determine whether the final dose, obtained by integrating the decay equation from t = 0 to t = ∞, corresponds to the default value of 144 Gy set by AAPM TG-64. The results showed that treating the seeds as point sources leads to doses whose percentage discrepancy from the default value exceeds 200%, while treating them as volumetric sources results in doses close to 144 Gy. (author)
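
    The "integration of the equation of decay" referred to here is the standard permanent-implant dose integral; as a hedged aside (the abstract does not quote the half-life), with the usual ¹²⁵I value it reads:

    ```latex
    % Total dose from a permanently implanted, exponentially decaying source,
    % with initial dose rate \dot{D}_0 and decay constant \lambda = \ln 2 / T_{1/2}:
    \[
      D(\infty) = \int_0^{\infty} \dot{D}_0\, e^{-\lambda t}\, dt
                = \frac{\dot{D}_0}{\lambda}
                = \dot{D}_0\, \frac{T_{1/2}}{\ln 2},
    \]
    % so for ^{125}I (T_{1/2} ~ 59.4 d) a total dose of 144 Gy corresponds to an
    % initial dose rate of roughly 144 / (59.4 / 0.693) Gy/d ~ 1.7 Gy/d ~ 7 cGy/h.
    ```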

  7. Prostate dose calculations for permanent implants using the MCNPX code and the Voxels phantom MAX

    International Nuclear Information System (INIS)

    Reis Junior, Juraci Passos dos; Silva, Ademir Xavier da

    2010-01-01

    This paper presents the modelling of 80, 88 and 100 ¹²⁵I seeds, as point and volumetric sources, inserted into a spherical phantom volume representing the prostate and into the prostate of the voxel phantom MAX. Starting values of minimum and maximum activity, 0.27 mCi and 0.38 mCi, respectively, were simulated with the Monte Carlo code MCNPX in order to determine whether the final dose, obtained by integrating the decay equation from t = 0 to t = ∞, corresponds to the default value of 144 Gy set by AAPM TG-64. The results showed that treating the seeds as point sources leads to doses whose percentage discrepancy from the default value exceeds 200%, while treating them as volumetric sources results in doses close to 144 Gy. (author)

  8. Analysis of Age and Gender Structures for ICD-10 Diagnoses in Outpatient Treatment Using Shannon's Entropy.

    Science.gov (United States)

    Schuster, Fabian; Ostermann, Thomas; Emcke, Timo; Schuster, Reinhard

    2017-01-01

    Diagnostic diversity has been in the focus of several studies of health services research. As the fraction of people with statutory health insurance changes with age and gender, it is assumed that diagnostic diversity may be influenced by these parameters. We analyze the fractions of patients in Schleswig-Holstein with respect to the chapters of the ICD-10 code in outpatient treatment for quarter 2/2016, with respect to the age and gender/sex of the patient. In a first approach we analyzed which diagnosis chapters are most relevant in dependence on age and gender. To detect diagnostic diversity, we finally applied Shannon's entropy measure. Due to multimorbidity we used different standardizations. Shannon entropy strongly increases for women after the age of 15, reaching a limit level at the age of 50 years. Between 15 and 70 years we get higher values for women, after 75 years for men. This article describes a straightforward, pragmatic approach to diagnostic diversity using Shannon's entropy. From a methodological point of view, the use of Shannon's entropy as a measure of diversity should gain more attraction among researchers of health services research.

  9. Canonical operator formulation of nonequilibrium thermodynamics

    International Nuclear Information System (INIS)

    Mehrafarin, M.

    1992-09-01

    A novel formulation of nonequilibrium thermodynamics is proposed which emphasises the fundamental role played by the Boltzmann constant k in fluctuations. The equivalence of this and the stochastic formulation is demonstrated. The k → 0 limit of this theory yields the classical deterministic description of nonequilibrium thermodynamics. The new formulation possesses unique features which bear two important results namely the thermodynamic uncertainty principle and the quantisation of entropy production rate. Such a theory becomes indispensable whenever fluctuations play a significant role. (author). 7 refs

  10. Curci-Ferrari mass and the Neuberger problem

    International Nuclear Information System (INIS)

    Kalloniatis, A.C.; Smekal, L. von; Williams, A.G.

    2005-01-01

    We study the massive Curci-Ferrari model as a starting point for defining BRST quantisation for Yang-Mills theory on the lattice. In particular, we elucidate this proposal in light of topological approaches to gauge-fixing and study the case of a simple one-link Abelian model

  11. Axial anomaly at finite temperature

    International Nuclear Information System (INIS)

    Chaturvedi, S.; Gupte, Neelima; Srinivasan, V.

    1985-01-01

    The Jackiw-Bardeen-Adler anomaly for QED₄ and QED₂ is calculated at finite temperature. It is found that the anomaly is independent of temperature. Ishikawa's method [1984, Phys. Rev. Lett. vol. 53, 1615] for calculating the quantised Hall effect is extended to finite temperature. (author)

  12. Digitally assisted analog beamforming for millimeter-wave communication

    NARCIS (Netherlands)

    Kokkeler, Andre B.J.; Smit, Gerardus Johannes Maria

    2015-01-01

    The paper addresses the research question on how digital beamsteering algorithms can be combined with analog beamforming in the context of millimeter-wave communication for next generation (5G) cellular systems. Key is the use of coarse quantisation of the individual antenna signals next to the

  13. Monte Carlo Calculations of Dose to Medium and Dose to Water for Carbon Ion Beams in Various Media

    DEFF Research Database (Denmark)

    Herrmann, Rochus; Petersen, Jørgen B.B.; Jäkel, Oliver

    treatment plans. Here, we quantify the effect of dose to water vs. dose to medium for a series of typical target materials found in medical physics. 2. Material and Methods. The Monte Carlo code FLUKA [Battistioni et al. 2007] is used to simulate the particle fluence spectrum in a series of target ... for water. This represents the case where our “detector” is an infinitesimally small, non-perturbing entity made of water, in which charged-particle equilibrium can be assumed following Bragg-Gray cavity theory. Dw and Dm are calculated for typical materials such as bone, brain, lung and soft tissues using...
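
    The conversion between the two dose specifications that this abstract relies on is the Bragg-Gray relation; a sketch in the notation suggested above, where Φ(E) is the charged-particle fluence spectrum scored by the Monte Carlo and S/ρ the mass stopping power:

    ```latex
    % Dose to a small, non-perturbing water cavity in medium m under
    % charged-particle equilibrium (Bragg-Gray):
    \[
      D_w = D_m \left(\frac{\bar{S}}{\rho}\right)^{w}_{m}
          = D_m \,
            \frac{\int \Phi(E)\, \bigl(S(E)/\rho\bigr)_w \, dE}
                 {\int \Phi(E)\, \bigl(S(E)/\rho\bigr)_m \, dE}.
    \]
    ```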

  14. On the Fock quantisation of the hydrogen atom

    International Nuclear Information System (INIS)

    Cordani, B.

    1989-01-01

    In a celebrated work, Fock explained the degeneracy of the energy levels of the Kepler problem (or hydrogen atom) (Z. Phys. 98, 145-54, 1935) in terms of the dynamical symmetry group SO(4). Making a stereographic projection in momentum space and rescaling the momenta with the eigenvalues of the energy, he showed that the problem is equivalent to the geodesic flow on the sphere S³. In this way, the 'hidden' symmetry SO(4) is made manifest. The present author has shown that the classical n-dimensional Kepler problem can be better understood by enlarging the phase space of the geodesic motion on Sⁿ and including time and energy as canonical variables: a subsequent symplectomorphism transforms the motion on Sⁿ into the Kepler problem. We want to prove in this paper that the Fock procedure is the implementation at the 'quantum' level of the above-mentioned symplectomorphism. The interest is not restricted to the old Kepler problem: more recently two other systems exhibiting the same symmetries have been found. They are the McIntosh-Cisneros-Zwanziger system and the geodesic motion in Euclidean Taub-NUT space. Both have a physical interest: they describe, respectively, a spinless test particle moving outside the core of a self-dual monopole and the asymptotic scattering of two self-dual monopoles. (author)

  15. Politicas de uniformes y codigos de vestuario (Uniforms and Dress-Code Policies). ERIC Digest.

    Science.gov (United States)

    Lumsden, Linda

    This digest in Spanish examines schools' dress-code policies and discusses the legal considerations and research findings about the effects of such changes. Most revisions to dress codes involve the use of uniforms, typically as a way to curb school violence and create a positive learning environment. A recent survey of secondary school principals…

  16. 2-Dim. gravity and string theory

    International Nuclear Information System (INIS)

    Narain, K.S.

    1991-01-01

    The role of 2-dim. gravity in string theory is discussed. In particular, d=25 string theory coupled to 2-d. gravity is described and shown to give rise to the physics of the usual 26-dim. string theory (where one does not quantise 2-d. gravity). (orig.)

  17. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    Science.gov (United States)

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…

  18. Collected papers on wave mechanics

    CERN Document Server

    Schrödinger, Erwin

    1929-01-01

    Quantisation as a problem of proper values; the continuous transition from micro- to macro-mechanics; on the relation between the quantum mechanics of Heisenberg, Born, and Jordan, and that of Schrödinger; the Compton effect; the energy-momentum theorem for material waves; the exchange of energy according to wave mechanics

  19. Lloyd Morgan's theory of instinct: from Darwinism to neo-Darwinism.

    Science.gov (United States)

    Richards, R J

    1977-01-01

    Darwin's proposal of two sources of instinct--natural selection and inherited habit--fostered among late nineteenth century evolutionists a variety of conflicting notions concerning the mechanisms of evolution. The British comparative psychologist C. Lloyd Morgan was a cardinal figure in restructuring the orthodox Darwinian conception to relieve the confusion besetting it and to meet the demands of the new biology of Weismann. This paper traces the development of Morgan's ideas about instinct against the background of his philosophic assumptions and the views of instinct theorists from Darwin and Romanes to McDougall and Lorenz.

  20. Vacuum Potentials for the Two Only Permanent Free Particles, Proton and Electron. Pair Productions

    International Nuclear Information System (INIS)

    Zheng-Johansson, J X

    2012-01-01

    The only two species of isolatable, smallest, or unit charges, +e and −e, present in nature interact with the universal vacuum in a polarisable dielectric representation through two uniquely defined vacuum potential functions. All of the non-composite subatomic particles containing one-unit charges, +e or −e, are therefore formed, in terms of the IED model, from the respective charges, of zero rest masses, oscillating in either of the two unique vacuum potential fields, together with the radiation waves of their own charges. In this paper we give a first principles treatment of the dynamics of a charge in a dielectric vacuum, based on which, combined with solutions for the radiation waves obtained previously, we subsequently derive the vacuum potential function for a given charge q, which we show to be quadratic and to consist of quantised potential levels, giving therefore rise to quantised characteristic oscillation frequencies of the charge and accordingly quantised, sharply-defined masses of the IED particles. By further combining with relevant experimental properties as input information, we determine the IED particles built from the charges +e, −e at their first excited states in the respective vacuum potential wells to be the proton and the electron, observationally the only two stable (permanently lived) and 'free' particles containing one-unit charges. Their antiparticles as produced in pair productions can be accordingly determined. The characteristics of all of the other more energetic single-charged non-composite subatomic particles can also be recognised. We finally discuss the energy condition for pair production, which requires two successive energy supplies to (1) first disintegrate the bound pair of vaculeon charges +e, −e composing a vacuuon of the vacuum and (2) impart masses to the disintegrated charges.

  1. The quantum mechanics of the supersymmetric nonlinear sigma-model

    International Nuclear Information System (INIS)

    Davis, A.C.; Macfarlane, A.J.; Popat, P.C.; Holten, J.W. van

    1984-01-01

    The classical and quantum mechanical formalisms of the models are developed. The quantisation is done in such a way that the quantum theory can be represented explicitly in as simple a form as possible, and the problem of ordering of operators is resolved so as to maintain the supersymmetry algebra of the classical theory. (author)

  2. Theoretical Transport Studies of Non-equilibrium Carriers Driven by High Electric Fields

    Science.gov (United States)

    2012-04-25

    impurity scattering has already been worked out in the literature (in particular see [35] or the discussions in [28] or [23]), but we reproduce the result... G. Hasko, D. C. Peacock, D. A. Ritchie, and G. A. C. Jones. “One-dimensional transport and the quantisation of the ballistic resistance.” Journal of

  3. The Avoidance of the Little Sibling of the Big Rip Abrupt Event by a Quantum Approach

    Directory of Open Access Journals (Sweden)

    Imanol Albarran

    2018-02-01

    Full Text Available We address the quantisation of a model that induces the Little Sibling of the Big Rip (LSBR) abrupt event, where the dark energy content is described by means of a phantom-like fluid or a phantom scalar field. The quantisation is done in the framework of the Wheeler–DeWitt (WDW) equation and imposing the DeWitt boundary condition; i.e., the wave function vanishes close to the abrupt event. We analyse the WDW equation within two descriptions: First, when the dark energy content is described with a perfect fluid. This leaves the problem with the scale factor as the single degree of freedom. Second, when the dark energy content is described with a phantom scalar field, in such a way that an additional degree of freedom is incorporated. Here, we have applied the Born–Oppenheimer (BO) approximation in order to simplify the WDW equation. In all cases, the obtained wave function vanishes when the LSBR takes place, thus fulfilling the DeWitt boundary condition.

  4. Introduction to coding and information theory

    CERN Document Server

    Roman, Steven

    1997-01-01

    This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.

  5. Comparison between two methodologies for uniformity correction of extensive reference sources

    International Nuclear Information System (INIS)

    Junior, Iremar Alves S.; Siqueira, Paulo de T.D.; Vivolo, Vitor; Potiens, Maria da Penha A.; Nascimento, Eduardo

    2016-01-01

    This article presents the procedures used to obtain the uniformity correction factors for extensive reference sources proposed by two different methodologies. The first methodology is presented in Good Practice Guide No. 14 of the NPL, which provides a numerical correction. The second one uses the radiation transport code MCNP5 to obtain the correction factor. Both methods retrieve very similar correction factor values, with a maximum deviation of 0.24%. (author)

  6. EOG feature relevance determination for microsleep detection

    Directory of Open Access Journals (Sweden)

    Golz Martin

    2017-09-01

    Full Text Available Automatic relevance determination (ARD) was applied to two-channel EOG recordings for microsleep event (MSE) recognition. The 10 s immediately before MSE, and also before counterexamples of fatigued but attentive driving, were analysed. Two types of signal features were extracted: the maximum cross correlation (MaxCC) and logarithmic power spectral densities (PSD) averaged in spectral bands of 0.5 Hz width ranging between 0 and 8 Hz. Generalised relevance learning vector quantisation (GRLVQ) was used as the ARD method to show the potential of feature reduction. This is compared to support-vector machines (SVM), in which feature reduction plays a much smaller role. Cross validation yielded mean normalised relevancies of PSD features in the range of 1.6 – 4.9 % and 1.9 – 10.4 % for horizontal and vertical EOG, respectively. MaxCC relevancies were 0.002 – 0.006 % and 0.002 – 0.06 %, respectively. This shows that PSD features of vertical EOG are indispensable, whereas MaxCC can be neglected. Mean classification accuracies were estimated at 86.6 ± 1.3 % and 92.3 ± 0.2 % for GRLVQ and SVM, respectively. GRLVQ permits objective feature reduction by inclusion of all processing stages, but is not as accurate as SVM.
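
    As a minimal sketch of the relevance-weighted metric at the heart of GRLVQ, the following Python fragment classifies a feature vector by its nearest prototype under learned feature relevancies. The prototypes, labels and relevance values are hypothetical placeholders, and prototype/relevance training is omitted:

      import numpy as np

      def grlvq_distance(x, w, lam):
          # Relevance-weighted squared distance used in GRLVQ:
          # d(x, w) = sum_j lam_j * (x_j - w_j)^2, lam_j >= 0, sum(lam) = 1.
          return np.sum(lam * (x - w) ** 2)

      def classify(x, prototypes, labels, lam):
          # Assign x the label of the nearest prototype under the relevance metric.
          d = [grlvq_distance(x, w, lam) for w in prototypes]
          return labels[int(np.argmin(d))]

      # Toy example: feature 0 nearly irrelevant, feature 1 decisive.
      prototypes = np.array([[0.0, -1.0], [0.0, 1.0]])
      labels = ["attentive", "microsleep"]
      lam = np.array([0.05, 0.95])        # learned relevancies (hypothetical values)
      print(classify(np.array([3.0, 0.8]), prototypes, labels, lam))  # -> "microsleep"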

  8. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    Science.gov (United States)

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on a Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as the conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM, combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.
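
    A minimal Python sketch of the Huffman construction behind such non-uniform signaling follows. The 9-symbol probability assignment is hypothetical (one high-probability inner point plus eight outer points with dyadic probabilities); parsing a uniform bit stream with the resulting prefix code emits symbols with exactly those probabilities:

      import heapq
      import itertools

      def huffman(probs):
          # Build a binary prefix (Huffman) code for the given probabilities.
          tie = itertools.count()                    # avoids comparing dicts on ties
          heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
          heapq.heapify(heap)
          while len(heap) > 1:
              p0, _, c0 = heapq.heappop(heap)
              p1, _, c1 = heapq.heappop(heap)
              merged = {s: "0" + b for s, b in c0.items()}
              merged.update({s: "1" + b for s, b in c1.items()})
              heapq.heappush(heap, (p0 + p1, next(tie), merged))
          return heap[0][2]

      # Hypothetical 9-symbol constellation: one inner point sent half the time,
      # eight outer points sent with probability 1/16 each (dyadic, hence ideal).
      probs = {"inner": 1 / 2, **{f"outer{i}": 1 / 16 for i in range(8)}}
      for sym, word in sorted(huffman(probs).items(), key=lambda kv: len(kv[1])):
          print(sym, word)       # 1-bit word for "inner", 4-bit words for the rest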

  9. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly; Pettersson, Gustav M.; Kostina, Victoria; Hassibi, Babak

    2017-01-01

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.
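
    The flavour of such an analog mapping can be sketched as follows: a scalar source sample is encoded onto an Archimedean spiral, using two channel uses per sample, and decoded by finding the nearest point on the spiral. The spiral parameters and the brute-force grid-search decoder below are illustrative choices, not the optimized maps of the paper:

      import numpy as np

      def encode(s, a=0.15, phi_max=6 * np.pi):
          # Map a scalar s in [0, 1] to a point on an Archimedean spiral,
          # using two channel uses per source sample (1:2 expansion).
          phi = phi_max * s
          return np.array([a * phi * np.cos(phi), a * phi * np.sin(phi)])

      def decode(y, a=0.15, phi_max=6 * np.pi, grid=10000):
          # Approximate ML decoding: nearest point on the spiral (grid search).
          s = np.linspace(0.0, 1.0, grid)
          pts = np.stack([encode(v, a, phi_max) for v in s])
          return s[np.argmin(np.sum((pts - y) ** 2, axis=1))]

      rng = np.random.default_rng(0)
      s = 0.7
      y = encode(s) + 0.02 * rng.standard_normal(2)   # AWGN channel
      print(decode(y))                                # close to 0.7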

  11. Two-dimensional quantisation of the quasi-Landau hydrogenic spectrum

    International Nuclear Information System (INIS)

    Gallas, J.A.C.; O'Connell, R.F.

    1982-01-01

    Based on the two-dimensional WKB model, an equation is derived from which the non-relativistic quasi-Landau energy spectrum of hydrogen-like atoms may be easily obtained. In addition, the solution of radial equations in the WKB approximation and its relation with models recently used to fit experimental data are discussed. (author)

  12. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    Science.gov (United States)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
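
    A minimal sketch of the replicator architecture (an autoencoding multilayer perceptron with a narrow middle layer whose activations play the role of natural coordinates) is given below. The layer sizes and random weights are placeholders, and the training loop that would drive the network towards minimum mean squared reconstruction error is omitted:

      import numpy as np

      # Replicator-style network: three hidden layers, narrow bottleneck.
      # Weights are random placeholders; training is omitted in this sketch.
      rng = np.random.default_rng(0)
      sizes = [8, 16, 3, 16, 8]          # input, hidden x3 (bottleneck = 3), output
      weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

      def forward(x):
          h = x
          for W in weights[:-1]:
              h = np.tanh(h @ W)         # nonlinear hidden layers
          return h @ weights[-1]         # linear reconstruction layer

      x = rng.normal(size=8)
      print(np.mean((forward(x) - x) ** 2))   # reconstruction MSE (untrained)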

  13. Who is afraid of anomalies?

    International Nuclear Information System (INIS)

    Rajaraman, R.

    1990-01-01

    There are situations where gauge symmetry comes into unavoidable conflict with quantum theory. Such situations are examples of what are called 'Anomalies' in quantum field theory. In these cases, although some form of gauge symmetry is present at the classical level, the process of quantisation necessarily destroys that symmetry. How to consistently treat such cases and obtain their novel features is discussed. (author)

  14. 75 FR 31464 - Certification of the Attorney General; Shannon County, SD

    Science.gov (United States)

    2010-06-03

    ... DEPARTMENT OF JUSTICE Certification of the Attorney General; Shannon County, SD In accordance with... within the scope of the determinations of the Attorney General and the Director of the Census made under...., Attorney General of the United States. [FR Doc. 2010-13285 Filed 6-2-10; 8:45 am] BILLING CODE P ...

  15. N=8 supersingleton quantum field theory

    International Nuclear Information System (INIS)

    Bergshoeff, E.; Salam, A.; Sezgin, E.; Tanii, Yoshiaki.

    1988-06-01

    We quantise the N=8 supersymmetric singleton field theory which is formulated on the boundary of the four dimensional anti de Sitter spacetime (AdS 4 ). The theory has rigid OSp(8,4) symmetry which acts as a superconformal group on the boundary of AdS 4 . We show that the generators of this symmetry satisfy the full quantum OSp(8,4) algebra. The spectrum of the theory contains massless states of all higher integer and half-integer spin which fill the irreducible representations of OSp(8,4) with highest spin s max =2,4,6,... Remarkably, these are in one to one correspondence with the generators of Vasiliev's infinite dimensional extended higher spin superalgebra shs(8,4), suggesting that we may have stumbled onto a field theoretic realization of this algebra. We also discuss the possibility of a connection between the N=8 supersingleton theory and the eleven-dimensional supermembrane in an AdS 4 xS 7 background. (author). 34 refs

  16. Reducing Conservatism in Aircraft Engine Response Using Conditionally Active Min-Max Limit Regulators

    Science.gov (United States)

    May, Ryan D.; Garg, Sanjay

    2012-01-01

    Current aircraft engine control logic uses a Min-Max control selection structure to prevent the engine from exceeding any safety or operational limits during transients due to throttle commands. This structure is inherently conservative and produces transient responses that are slower than necessary. In order to utilize the existing safety margins more effectively, a modification to this architecture is proposed, referred to as a Conditionally Active (CA) limit regulator. This concept uses the existing Min-Max architecture with the modification that limit regulators are active only when the operating point is close to a particular limit. This paper explores the use of CA limit regulators using a publicly available commercial aircraft engine simulation. The improvement in thrust response while maintaining all necessary safety limits is demonstrated in a number of cases.

  17. Infinite Shannon entropy

    International Nuclear Information System (INIS)

    Baccetti, Valentina; Visser, Matt

    2013-01-01

    Even if a probability distribution is properly normalizable, its associated Shannon (or von Neumann) entropy can easily be infinite. We carefully analyze conditions under which this phenomenon can occur. Roughly speaking, this happens when arbitrarily small amounts of probability are dispersed into an infinite number of states; we shall quantify this observation and make it precise. We develop several particularly simple, elementary, and useful bounds, and also provide some asymptotic estimates, leading to necessary and sufficient conditions for the occurrence of infinite Shannon entropy. We go to some effort to keep technical computations as simple and conceptually clear as possible. In particular, we shall see that large entropies cannot be localized in state space; large entropies can only be supported on an exponentially large number of states. We are for the time being interested in single-channel Shannon entropy in the information theoretic sense, not entropy in a stochastic field theory or quantum field theory defined over some configuration space, on the grounds that this simple problem is a necessary precursor to understanding infinite entropy in a field theoretic context. (paper)
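
    The phenomenon is easy to exhibit numerically: the distribution with p_n proportional to 1/(n log² n) for n ≥ 2 is normalizable, yet the entropy of its truncations grows without bound (roughly like log log N). The sketch below, with arbitrarily chosen cutoffs, illustrates this:

      import numpy as np

      # p_n proportional to 1/(n * log(n)^2), n >= 2: normalizable, yet the
      # Shannon entropy -sum p_n log p_n diverges as more states are included.
      def partial_entropy(N):
          n = np.arange(2, N)
          w = 1.0 / (n * np.log(n) ** 2)
          p = w / w.sum()
          return -np.sum(p * np.log(p))

      for N in [10**3, 10**5, 10**7]:
          print(N, round(partial_entropy(N), 3))   # keeps growing (slowly) with N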

  18. Obituary: Lloyd V. Wallace (1927 - 2015)

    Science.gov (United States)

    Born in 1927 in Detroit, Michigan, in humble circumstances, Lloyd developed an early interest in solar and planetary astronomy and was a protégé of Ralph Nichols, a physics professor at the University of Western Ontario. Later he moved back to the United States and obtained his Ph.D. in Astronomy at the University of Michigan in 1957 under Leo Goldberg. It was while he was at the University of Michigan that he met and married his wife, Ruth. At various times in his early career, and as the result of a complex series of events, he held Canadian, British, and United States citizenships and even found time to become an expert professional electrician. On acquiring his degree he obtained a position with Joe Chamberlain at the Yerkes Observatory and began a lifetime association with Chamberlain and Don Hunten (then a visitor to Yerkes) in atmospheric and spectroscopic research. In 1962 they moved to Tucson where Chamberlain became the head of the Space Division at the Kitt Peak National Observatory, a unit set up by the first director, Aden Meinel, to apply advances in technology to astronomical research. Lloyd was hired as the principal experimenter in the observatory's sounding rocket program, which was set up by the National Science Foundation to provide staff and visitor access to the upper atmosphere for research purposes. With this program he supervised a series of 39 Aerobee rocket flights from the White Sands Missile Range to investigate upper atmosphere emissions, aeronomic processes, and make astronomical observations over a period of about 10 years. He was also involved in the first attempts to establish a remotely controlled 50-inch telescope on Kitt Peak and efforts within the Division to create an Earth-orbiting astronomical telescope. In parallel with these activities Lloyd conducted research which was largely focused on spectroscopic investigations. In the early days these included measurement of upper atmospheric emissions, particularly visual dayglow

  19. Diversity of endophytic fungi in Glycine max.

    Science.gov (United States)

    Fernandes, Elio Gomes; Pereira, Olinto Liparini; da Silva, Cynthia Cânedo; Bento, Claudia Braga Pereira; de Queiroz, Marisa Vieira

    2015-12-01

    Endophytic fungi are microorganisms that live within plant tissues without causing disease during part of their life cycle. With the isolation and identification of these fungi, new species are being discovered, and ecological relationships with their hosts have also been studied. In Glycine max, limited studies have investigated the isolation and distribution of endophytic fungi throughout leaves and roots. The distribution of these fungi in various plant organs differs in diversity and abundance, even when analyzed using molecular techniques that can evaluate fungal communities in different parts of the plants, such as denaturing gradient gel electrophoresis (DGGE). Our results show there is greater species richness of culturable endophytic filamentous fungi in the leaves of G. max as compared to the roots. Additionally, the leaves had high values for the diversity indices, i.e., Shannon, Simpson's and equitability. Conversely, the dominance index was higher in roots as compared to leaves. The fungi Ampelomyces sp., Cladosporium cladosporioides, Colletotrichum gloeosporioides, Diaporthe helianthi, Guignardia mangiferae and Phoma sp. were more frequently isolated from the leaves, whereas the fungi Fusarium oxysporum, Fusarium solani and Fusarium sp. were prevalent in the roots. However, by evaluating the two communities by DGGE, we concluded that the species richness was higher in the roots than in the leaves. UPGMA analysis showed consistent clustering of isolates; however, the fungus Leptospora rubella, which belongs to the order Dothideales, was grouped among species of the order Pleosporales. The presence of endophytic Fusarium species in G. max roots is unsurprising, since Fusarium spp. isolates have been previously described as endophytes in other reports. However, it remains to be determined whether the G. max Fusarium endophytes are latent pathogens or non-pathogenic forms that benefit the plant. This study provides a broader knowledge of the distribution of the fungal
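
    For reference, the three diversity measures used in such studies can be computed directly from abundance counts, as in the sketch below; the isolate counts are hypothetical, not the study's data:

      import numpy as np

      def diversity_indices(counts):
          # Shannon H', Simpson dominance D, and Pielou equitability J.
          p = np.asarray(counts, dtype=float)
          p = p[p > 0] / p.sum()
          H = -np.sum(p * np.log(p))      # Shannon index
          D = np.sum(p ** 2)              # Simpson dominance (higher = more dominated)
          J = H / np.log(len(p))          # equitability, H / H_max
          return H, D, J

      # Hypothetical isolate counts per fungal taxon (leaves vs. roots).
      leaves = [12, 9, 8, 7, 5, 4, 3, 2]
      roots = [30, 14, 6]
      print(diversity_indices(leaves))    # higher H and J
      print(diversity_indices(roots))     # higher dominance D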

  20. Bit-Wise Arithmetic Coding For Compression Of Data

    Science.gov (United States)

    Kiely, Aaron

    1996-01-01

    Bit-wise arithmetic coding is a data-compression scheme intended especially for use with uniformly quantized data from a source with a Gaussian, Laplacian, or similar probability distribution function. Code words are of fixed length, and bits are treated as independent. The scheme serves as a means of progressive transmission or of overcoming the buffer-overflow or rate-constraint limitations that sometimes arise when data compression is used.

  1. Perturbed Chern-Simons theory, fractional statistics, and Yang-Baxter algebra

    International Nuclear Information System (INIS)

    Chatterjee, A.; Sreedhar, V.V.

    1992-01-01

    Topological Chern-Simons theory coupled to matter fields is analysed in the framework of Dirac's method of quantising constrained systems in a general class of linear, non-local gauges. We show that in the weak coupling limit gauge invariant operators in the theory transform under an exchange according to a higher dimensional representation of the braid group which is built out of the fundamental representation matrices of the gauge group and thus behave like anyons. We also discover new solutions of the Yang-Baxter equation which emerges as a consistency condition on the structure functions of the operator algebra of the matter fields. (orig.)

  2. A Limited Submuscular Direct-to-Implant Technique Utilizing AlloMax

    Directory of Open Access Journals (Sweden)

    Michal Brichacek, MD

    2017-07-01

    Full Text Available Background: This study evaluates a novel limited submuscular direct-to-implant technique utilizing AlloMax where only the upper few centimeters of the implant is covered by the pectoralis, whereas the majority of the implant including the middle and lower poles are covered by acellular dermal matrix. Methods: The pectoralis muscle is released off its inferior and inferior-medial origins and allowed to retract superiorly. Two sheets of AlloMax (6 × 16 cm) are sutured together and secured to the inframammary fold, serratus fascia, and the superiorly retracted pectoralis. Thirty-seven breasts in 19 consecutive patients with follow-up at 6 months were reviewed. Results: Nineteen consecutive patients with 37 reconstructed breasts were studied. Average age was 50 years, average BMI was 24.3. Ptosis ranged from grade 0–III, and average cup size was B (range, A–DDD). Early minor complications included 1 seroma, 3 minor postoperative hematomas managed conservatively, and 3 minor wound healing problems. Three breasts experienced mastectomy skin flap necrosis and were managed with local excision. There were no cases of postoperative infection, red breast, grade III/IV capsular contractures, or implant loss. A single patient complained of animation postoperatively. One patient desired fat grafting for rippling. Conclusions: The limited submuscular direct-to-implant technique utilizing AlloMax appears to be safe with a low complication rate at 6 months. This technique minimizes the action of the pectoralis on the implant, reducing animation deformities but still providing muscle coverage of the upper limit of the implant. Visible rippling is reduced, and a vascularized bed remains for fat grafting of the upper pole if required.

  3. On Landauer's Principle and Bound for Infinite Systems

    Science.gov (United States)

    Longo, Roberto

    2018-04-01

    Landauer's principle provides a link between Shannon's information entropy and Clausius' thermodynamical entropy. Here we set up a basic formula for the incremental free energy of a quantum channel, possibly relative to infinite systems, naturally arising from an Operator Algebraic point of view. By the Tomita-Takesaki modular theory, we can indeed describe a canonical evolution associated with a quantum channel state transfer. Such evolution is implemented both by a modular Hamiltonian and a physical Hamiltonian, the latter being determined by its functoriality properties. This allows us to make an intrinsic analysis, extending our QFT index formula, but without any a priori given dynamics; the associated incremental free energy is related to the logarithm of the Jones index and is thus quantised. This leads to a general lower bound for the incremental free energy of an irreversible quantum channel which is half of the Landauer bound, and to further bounds corresponding to the discrete series of the Jones index. In the finite dimensional context, or in the case of DHR charges in QFT, where the dimension is a positive integer, our lower bound agrees with Landauer's bound.

  4. Application of the cyclic permutation for analysis of synthesized sinusoidal signal

    Czech Academy of Sciences Publication Activity Database

    Čížek, Václav; Švandová, Hana

    2002-01-01

    Roč. 2, č. 1 (2002), s. 69-72 ISSN 1335-8243. [Digital Signal Processing and Multimedia Communications DSP-MCOM 2001 /5./. Košice, 27.11.2001-29.11.2001] R&D Projects: GA ČR GA102/00/0958 Institutional research plan: CEZ:AV0Z2067918 Keywords : direct digital synthesis * quantisation-signal * number theory Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering

  5. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...

  6. Analysis of internationalization process of Bric countries with the Grubel and Lloyd index

    Directory of Open Access Journals (Sweden)

    Pedro Raffy Vartanian

    2013-08-01

    Full Text Available This research aims to show the internationalization of the BRIC countries (Brazil, Russia, India and China) through the application of the Grubel and Lloyd index over the period 1994-2009. The hypothesis is that the BRIC countries have shown growth in the internationalization process, as reflected by increased levels of productive investment in other countries. The method applies the Grubel and Lloyd index to the flows of foreign direct investment and of investment abroad by residents in the BRIC countries over the period 1994-2009. The results showed an increase in the Grubel and Lloyd index related to foreign investment flows, which suggests that companies from the BRIC countries have been increasing productive investment in other countries in the same way as companies from the advanced economies, in order to maximize profits through internationalization.
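
    The index itself is elementary: for outward flows X and inward flows M, GL = 1 − |X − M| / (X + M), equal to 1 for perfectly two-way flows and 0 for purely one-way flows. A sketch with hypothetical FDI figures:

      def grubel_lloyd(outward, inward):
          # Grubel-Lloyd index: 1 - |X - M| / (X + M); 1 = fully two-way flows.
          if outward + inward == 0:
              return 0.0
          return 1.0 - abs(outward - inward) / (outward + inward)

      # Hypothetical FDI flows (USD bn): outward vs. inward direct investment.
      print(grubel_lloyd(outward=20.0, inward=45.0))   # ~0.62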

  7. Improvements in data display

    International Nuclear Information System (INIS)

    Ellis, G.W.

    1979-01-01

    An analog signal processor is described in this patent for connecting a source of analog signals to a cathode ray tube display in order to extend the dynamic range of the display. This has important applications in the field of computerised X-ray tomography since significant medical information, such as tumours in soft tissue, is often represented by minimal level changes in image density. Cathode ray tube displays are limited to approximately 15 intensity levels. Thus if both strong and weak absorption of the X-rays occurs, the dynamic range of the transmitted signals will be too large to permit small variations to be examined directly on a cathode ray display. Present tomographic image reconstruction methods are capable of quantising X-ray absorption density measurements into 256 or more distinct levels and a description is given of the electronics which enables the upper and lower range of intensity levels to be independently set and continuously varied. (UK)

  8. Edge-Based Image Compression with Homogeneous Diffusion

    Science.gov (United States)

    Mainberger, Markus; Weickert, Joachim

    It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
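
    The decoding step can be sketched as follows: pixels carrying encoded edge data are held fixed while all other pixels relax to the steady state of homogeneous diffusion, i.e. a discrete Laplace equation. The Jacobi iteration and toy image below are illustrative only (and, for brevity, use periodic wrap-around at the image border rather than the reflecting boundaries a real codec would use):

      import numpy as np

      def laplace_inpaint(values, known, iters=2000):
          # Fill unknown pixels with the steady state of homogeneous diffusion.
          # values: 2D array with grey values at known pixels (arbitrary elsewhere).
          # known:  boolean mask, True where the value is prescribed (edge data).
          u = values.astype(float).copy()
          for _ in range(iters):
              # Jacobi sweep: each free pixel becomes the mean of its 4 neighbours.
              avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                            np.roll(u, 1, 1) + np.roll(u, -1, 1))
              u[~known] = avg[~known]
          return u

      # Toy example: prescribe two "edges" with grey values 0 and 1.
      img = np.zeros((32, 32))
      mask = np.zeros_like(img, dtype=bool)
      img[:, 0], mask[:, 0] = 0.0, True
      img[:, -1], mask[:, -1] = 1.0, True
      print(laplace_inpaint(img, mask)[16, ::8])   # smooth ramp between the edges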

  9. Ga droplet morphology on GaAs(001) studied by Lloyd's mirror photoemission electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Tang, W X; Jesson, D E; Pavlov, K M; Morgan, M J [School of Physics, Monash University, Victoria 3800 (Australia); Usher, B F [Department of Electronic Engineering, La Trobe University, Victoria 3086 (Australia)

    2009-08-05

    We apply Lloyd's mirror photoemission electron microscopy (PEEM) to study the surface shape of Ga droplets on GaAs(001). An unusual rectangular-based droplet shape is identified and the contact angle is determined in situ. It is shown that quenching does not appreciably affect droplet shape, and ex situ measurements of the contact angle by atomic force microscopy are in good agreement with Lloyd's mirror PEEM. Extension of Lloyd's mirror technique to reconstruct general three-dimensional (3D) surface shapes and the potential use of synchrotron radiation to improve vertical resolution are discussed.

  10. Conformal Infinity

    OpenAIRE

    Frauendiener, Jörg

    2000-01-01

    The notion of conformal infinity has a long history within the research in Einstein's theory of gravity. Today, 'conformal infinity' is related to almost all other branches of research in general relativity, from quantisation procedures to abstract mathematical issues to numerical applications. This review article attempts to show how this concept gradually and inevitably evolved from physical issues, namely the need to understand gravitational radiation and isolated systems within the theory...

  11. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    Science.gov (United States)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. In such a way computed optimal enzyme rate constants in a steady state yield also the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of the stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which density of entropy production and Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.

  12. Determination of amphetamine-type stimulants in oral fluid by solid-phase microextraction and gas chromatography-mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Souza, Daniele Z., E-mail: daniele.dzs@dpf.gov.br [Setor Tecnico-Cientifico, Superintendencia Regional do Departamento de Policia Federal no Rio Grande do Sul, 1365 Ipiranga Avenue, Azenha, Zip Code 90160-093 Porto Alegre, Rio Grande do Sul (Brazil); Programa de Pos-Graduacao em Ciencias Farmaceuticas, Faculdade de Farmacia, Universidade Federal do Rio Grande do Sul, 2752 Ipiranga Avenue, Santana, Zip Code 90610-000 Porto Alegre, Rio Grande do Sul (Brazil); Boehl, Paula O.; Comiran, Eloisa; Mariotti, Kristiane C. [Programa de Pos-Graduacao em Ciencias Farmaceuticas, Faculdade de Farmacia, Universidade Federal do Rio Grande do Sul, 2752 Ipiranga Avenue, Santana, Zip Code 90610-000 Porto Alegre, Rio Grande do Sul (Brazil); Pechansky, Flavio [Centro de Pesquisa em Alcool e Drogas (CPAD), Hospital de Clinicas de Porto Alegre, Universidade Federal do Rio Grande do Sul, 2350, Ramiro Barcelos Street, Zip Code 90035-903 Porto Alegre, Rio Grande do Sul (Brazil); Duarte, Paulina C.A.V. [Secretaria Nacional de Politicas sobre Drogas (SENAD), Esplanada dos Ministerios, Block ' A' , 5th floor, Zip Code 70050-907 Brasilia, Distrito Federal (Brazil); De Boni, Raquel [Centro de Pesquisa em Alcool e Drogas (CPAD), Hospital de Clinicas de Porto Alegre, Universidade Federal do Rio Grande do Sul, 2350, Ramiro Barcelos Street, Zip Code 90035-903 Porto Alegre, Rio Grande do Sul (Brazil); Froehlich, Pedro E.; Limberger, Renata P. [Programa de Pos-Graduacao em Ciencias Farmaceuticas, Faculdade de Farmacia, Universidade Federal do Rio Grande do Sul, 2752 Ipiranga Avenue, Santana, Zip Code 90610-000 Porto Alegre, Rio Grande do Sul (Brazil)

    2011-06-24

    Graphical abstract: Highlights: > Propylchloroformate derivatization of amphetamine-type stimulants in oral fluid. > Direct immersion solid-phase microextraction/gas chromatography-mass spectrometry. > Linear range 2(4)-256 ng mL⁻¹, detection limits 0.5-2 ng mL⁻¹. > Accuracy 98-112%, precision <15% RSD, recovery 77-112%. > Importance of residual evaluation in checking model goodness-of-fit. - Abstract: A method for the simultaneous identification and quantification of amphetamine (AMP), methamphetamine (MET), fenproporex (FEN), diethylpropion (DIE) and methylphenidate (MPH) in oral fluid collected with the Quantisal™ device has been developed and validated. To this end, in-matrix propylchloroformate derivatization followed by direct immersion solid-phase microextraction and gas chromatography-mass spectrometry were employed. Deuterium-labeled AMP was used as internal standard for all the stimulants, and analysis was performed using the selected ion monitoring mode. The detector response was linear for the studied drugs in the concentration range of 2-256 ng mL⁻¹ (neat oral fluid), except for FEN, for which the linear range was 4-256 ng mL⁻¹. The detection limits were 0.5 ng mL⁻¹ (MET), 1 ng mL⁻¹ (MPH) and 2 ng mL⁻¹ (DIE, AMP, FEN), respectively. Accuracy of quality control samples remained within 98.2-111.9% of the target concentrations, while precision did not exceed 15% relative standard deviation. Recoveries with the Quantisal™ device ranged from 77.2% to 112.1%. Also, the goodness-of-fit of the ordinary least squares model in the statistical inference of the data has been tested through residual plotting and ANOVA. The validated method can be easily automated and then used for screening and confirmation of amphetamine-type stimulants in drivers' oral fluid.

  13. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  14. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model.

    Directory of Open Access Journals (Sweden)

    Anne Chao

    Full Text Available Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressable as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real world data from starlings.

  15. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model.

    Science.gov (United States)

    Chao, Anne; Jost, Lou; Hsieh, T C; Ma, K H; Sherwin, William B; Rollins, Lee Ann

    2015-01-01

    Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressable as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real world data from starlings.
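
    The central quantities are straightforward to compute from allele frequencies: the mean within-subpopulation entropy, the pooled-population entropy, and their difference, the mutual information ("Shannon differentiation"). The frequencies and subpopulation weights in the sketch below are hypothetical:

      import numpy as np

      def shannon(p):
          p = np.asarray(p, dtype=float)
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      # Hypothetical allele frequencies in two equally weighted subpopulations.
      sub = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.3, 0.5]])
      w = np.array([0.5, 0.5])            # subpopulation weights
      pooled = w @ sub                    # total-population allele frequencies
      H_within = np.sum(w * [shannon(s) for s in sub])
      H_total = shannon(pooled)
      I = H_total - H_within              # mutual information ("Shannon differentiation")
      print(H_within, H_total, I)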

  16. Coded aperture imaging: the modulation transfer function for uniformly redundant arrays

    International Nuclear Information System (INIS)

    Fenimore, E.E.

    1980-01-01

    Coded aperture imaging uses many pinholes to increase the SNR for intrinsically weak sources when the radiation can be neither reflected nor refracted. Effectively, the signal is multiplexed onto an image and then decoded, often by a computer, to form a reconstructed image. We derive the modulation transfer function (MTF) of such a system employing uniformly redundant arrays (URA). We show that the MTF of a URA system is virtually the same as the MTF of an individual pinhole regardless of the shape or size of the pinhole. Thus, only the location of the pinholes is important for optimum multiplexing and decoding. The shape and size of the pinholes can then be selected based on other criteria. For example, one can generate self-supporting patterns, useful for energies typically encountered in the imaging of laser-driven compressions or in soft x-ray astronomy. Such patterns contain holes that are all the same size, easing the etching or plating fabrication efforts for the apertures. A new reconstruction method is introduced called delta decoding. It improves the resolution capabilities of a coded aperture system by mitigating a blur often introduced during the reconstruction step
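
    A one-dimensional analogue of the URA property can be sketched with a quadratic-residue (Legendre) aperture of prime length p with p ≡ 3 (mod 4), whose open positions form a perfect difference set: correlating the aperture with its balanced decoding array yields a peak at zero lag over a flat background, which is exactly the delta-like system response that makes URA decoding work. The length p = 11 below is an arbitrary small choice:

      import numpy as np

      def legendre_aperture(p):
          # 1-D URA analogue: open positions at the quadratic residues mod p
          # (p prime, p % 4 == 3), which form a perfect difference set.
          qr = {(i * i) % p for i in range(1, p)}
          return np.array([1 if i in qr else 0 for i in range(p)])

      p = 11                              # prime with p % 4 == 3
      A = legendre_aperture(p)            # aperture: 1 = pinhole, 0 = opaque
      G = 2 * A - 1                       # balanced decoding array
      corr = np.array([np.sum(A * np.roll(G, -t)) for t in range(p)])
      print(corr)   # peak at lag 0, constant -1 sidelobes: delta-like response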

  17. 75 FR 42445 - Yakov Kobel and Victor Berkovich v. Hapag-Lloyd America, Inc., Limco Logistics, Inc., and...

    Science.gov (United States)

    2010-07-21

    ... FEDERAL MARITIME COMMISSION [Docket No. 10-06] Yakov Kobel and Victor Berkovich v. Hapag-Lloyd America, Inc., Limco Logistics, Inc., and International TLC, Inc.; Notice of filing of complaint and.... (``Hapag-Lloyd''), Limco Logistics, Inc. (``Limco''), and International TLC, Inc. (``Int'l TLC...

  18. Relationship between running kinematic changes and time limit at vVO2max

    Directory of Open Access Journals (Sweden)

    Leonardo De Lucca

    2012-06-01

    Exhaustive running at maximal oxygen uptake velocity (vVO2max) can alter running kinematic parameters and increase energy cost over time. The aims of the present study were to compare characteristics of ankle and knee kinematics during running at vVO2max and to verify the relationship between changes in kinematic variables and time limit (Tlim). Eleven male volunteers, recreational players of team sports, performed an incremental running test until volitional exhaustion to determine vVO2max and a constant velocity test at vVO2max. Subjects were filmed continuously from the left sagittal plane at 210 Hz for further kinematic analysis. The maximal plantar flexion during swing (p<0.01) was the only variable that increased significantly from beginning to end of the run. Increase in ankle angle at contact was the only variable related to Tlim (r=0.64; p=0.035) and explained 34% of the performance in the test. These findings suggest that the individuals under study maintained a stable running style at vVO2max and that the increase in plantar flexion explained the performance in this test when it was applied in non-runners.

  19. Further investigation on adaptive search

    Directory of Open Access Journals (Sweden)

    Ming Hong Pi

    2014-05-01

    Full Text Available Adaptive search is one of the fastest fractal compression algorithms and has gained great success in many industrial applications. By substituting the luminance offset with the range block mean, the authors create a completely new version of both the encoding and decoding algorithms. In this paper, they prove theoretically that the proposed decoding algorithm converges at least as fast as the existing decoding algorithms that use the luminance offset. In addition, they prove that the attractor of the decoding algorithm can be represented by a linear combination of range-averaged images. These theorems are very important contributions to the theory and applications of fractal image compression. As a result, the decoded image can be represented as the sum of the DC and AC component images, which is similar to the discrete cosine transform or the wavelet transform. To further speed up this algorithm and reduce the complexity of range and domain block matching, they propose two improvements in this paper, that is, employing post-quantisation and geometric neighbouring local search to replace the currently used pre-quantisation and global search, respectively. The corresponding experimental results show the proposed encoding and decoding algorithms can provide a better performance compared with the existing algorithms.

  20. LDPC Codes--Structural Analysis and Decoding Techniques

    Science.gov (United States)

    Zhang, Xiaojie

    2012-01-01

    Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…
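
    As a toy relative of such message-passing decoders, the hard-decision bit-flipping sketch below corrects a single error using the small (7,4) Hamming parity-check matrix; real LDPC decoding uses soft-input belief propagation on much larger sparse matrices, and this sketch says nothing about error floors:

      import numpy as np

      H = np.array([[1, 0, 1, 0, 1, 0, 1],   # (7,4) Hamming parity-check matrix
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])

      def bit_flip_decode(y, H, max_iters=10):
          x = y.copy()
          for _ in range(max_iters):
              syndrome = H @ x % 2
              if not syndrome.any():
                  return x                  # all parity checks satisfied
              fails = syndrome @ H          # unsatisfied checks touching each bit
              x[np.argmax(fails)] ^= 1      # flip the most suspicious bit
          return x

      received = np.array([0, 0, 0, 0, 0, 0, 1])   # all-zero codeword, last bit flipped
      print(bit_flip_decode(received, H))          # -> all-zero codeword recovered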

  1. La arquitectura tardía de Frank Lloyd Wright, el primer maestro moderno

    OpenAIRE

    González Capitel, Antón

    1996-01-01

    1. The late architecture of Frank Lloyd Wright, the first modern master 1.1. A renewed Wright. Workshops and residences 1.1.1. Wright's new houses in his second 'golden age': an organic Neoplasticism in the Fallingwater house 1.1.2. Alternative geometries and arrangements 1.2. Organic architecture and city 1.2.1. The development of organic architecture 1.3. Architectures related to that of Frank Lloyd Wright among the European émigrés: Schindler and...

  2. DVD Review: "The Silver Fez" Directed by Lloyd Ross (2009 ...

    African Journals Online (AJOL)

    Abstract. Producer: Lloyd Ross, Joelle Chesselet. Sound design: Warrick Sony. Cast: Abubakar Davids, Mogamat Zain Benjamin. Approx. 87 min. Distributor: IRIS. ZAR 129.99. Journal of the Musical Arts in Africa, Volume 9 2012, 89–91 ...

  3. Fractional Calculus and Shannon Wavelet

    Directory of Open Access Journals (Sweden)

    Carlo Cattani

    2012-01-01

    Full Text Available An explicit analytical formula for the fractional derivative of any order of the Shannon wavelet is given as a wavelet series based on connection coefficients, so that for any L 2 (ℝ) function reconstructed by Shannon wavelets we can easily define its fractional derivative. The approximation error is explicitly computed, and the wavelet series is compared with the Grünwald fractional derivative by focusing on the many advantages of the wavelet method, in terms of rate of convergence.
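
    Because the Shannon wavelet is band-limited away from ω = 0, its fractional derivative can also be evaluated numerically as the Fourier multiplier (iω)^α, a cheap counterpart to the connection-coefficient series of the paper. The grid and the order α = 1/2 below are arbitrary illustrative choices:

      import numpy as np

      def shannon_wavelet(t):
          # Real Shannon wavelet: psi(t) = 2 sinc(2t) - sinc(t), band-limited
          # to pi <= |omega| <= 2*pi (np.sinc(x) = sin(pi x)/(pi x)).
          return 2 * np.sinc(2 * t) - np.sinc(t)

      def frac_derivative(f, dt, alpha):
          # Fractional derivative via the Fourier multiplier (i*omega)^alpha;
          # well-defined here because the wavelet spectrum avoids omega = 0.
          n = len(f)
          omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)
          mult = (1j * omega) ** alpha
          mult[omega == 0] = 0.0            # guard the zero frequency
          return np.real(np.fft.ifft(mult * np.fft.fft(f)))

      t = np.arange(-64, 64, 0.05)          # grid containing t = 0
      psi = shannon_wavelet(t)
      half = frac_derivative(psi, 0.05, 0.5)   # derivative of order 1/2
      print(half[len(t) // 2])                 # value at t = 0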

  4. Non-Abelian Gauge Theory in the Lorentz Violating Background

    Science.gov (United States)

    Ganai, Prince A.; Shah, Mushtaq B.; Syed, Masood; Ahmad, Owais

    2018-03-01

    In this paper, we will discuss a simple non-Abelian gauge theory in the broken Lorentz spacetime background. We will study the partial breaking of Lorentz symmetry down to its sub-group. We will use the formalism of very special relativity for analysing this non-Abelian gauge theory. Moreover, we will discuss the quantisation of this theory using the BRST symmetry. Also, we will analyse this theory in the maximal Abelian gauge.

  5. Isodose distributions and dose uniformity in the Portuguese gamma irradiation facility calculated using the MCNP code

    CERN Document Server

    Oliveira, C

    2001-01-01

    A systematic study of isodose distributions and dose uniformity in sample carriers of the Portuguese Gamma Irradiation Facility was carried out using the MCNP code. The absorbed dose rate, gamma flux per energy interval and average gamma energy were calculated. For comparison purposes, boxes filled with air and 'dummy' boxes loaded with layers of folded and crumpled newspapers to achieve a given value of density were used. The magnitude of various contributions to the total photon spectra, including source-dependent factors, irradiator structures, sample material and other origins were also calculated.

  6. Chain rules for smooth min-and max-entropies

    DEFF Research Database (Denmark)

    Vitanov, Alexande; Dupont-Dupuis, Fréderic; Tomamichel, Marco

    2013-01-01

    The chain rule for the Shannon and von Neumann entropy, which relates the total entropy of a system to the entropies of its parts, is of central importance to information theory. Here, we consider the chain rule for the more general smooth min- and max-entropies, used in one-shot information theory. For these entropy measures, the chain rule no longer holds as an equality. However, the standard chain rule for the von Neumann entropy is retrieved asymptotically when evaluating the smooth entropies for many identical and independently distributed states.

  7. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
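
    An illustrative sketch of the idea (not the paper's construction): with the parity-check matrix H of the (7,4) Hamming code, a sparse source block x is compressed to its 3-bit syndrome s = Hx mod 2, and the decoder returns the minimum-weight pattern in the coset of s, which recovers x exactly whenever its weight is at most 1:

        import numpy as np
        from itertools import product

        # Parity-check matrix of the (7,4) Hamming code.
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        def compress(x):
            return H @ x % 2                        # 7 source bits -> 3 syndrome bits

        def decompress(s):
            # Coset leader: the minimum-weight pattern with the given syndrome.
            return np.array(min((x for x in product([0, 1], repeat=7)
                                 if np.array_equal(H @ np.array(x) % 2, s)),
                                key=sum))

        x = np.array([0, 0, 0, 0, 1, 0, 0])         # sparse 'error pattern' block
        assert np.array_equal(decompress(compress(x)), x)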

  8. Shannon's information is not entropy

    International Nuclear Information System (INIS)

    Schiffer, M.

    1990-01-01

    In this letter we clear up the long-standing misidentification of Shannon's Information with Entropy. We show that Information, in contrast to Entropy, is not invariant under unitary transformations and that these quantities are only equivalent for representations consisting of Hamiltonian eigenstates. We illustrate this fact through a toy system consisting of a harmonic oscillator in a coherent state. It is further proved that the representations which maximize the information are those which are energy-eigenstates. This fact sets the entropy as an upper bound for Shannon's Information. (author)

  9. Joint source-channel coding using variable length codes

    NARCIS (Netherlands)

    Balakirsky, V.B.

    2001-01-01

    We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m_k, t_k), where m_k is a message generated by the source and t_k is a time instant

  10. Universality and Shannon entropy of codon usage

    CERN Document Server

    Frappat, L; Sciarrino, A; Sorba, Paul

    2003-01-01

    The distribution functions of the codon usage probabilities, computed over all the available GenBank data, for 40 eukaryotic biological species and 5 chloroplasts, do not follow a Zipf law, but are best fitted by the sum of a constant, an exponential and a linear function in the rank of usage. For mitochondriae the analysis is not conclusive. A quantum-mechanics-inspired model is proposed to describe the observed behaviour. These functions are characterized by parameters that strongly depend on the total GC content of the coding regions of biological species. It is predicted that the codon usage is the same in all exonic genes with the same GC content. The Shannon entropy for codons, also strongly depending on the exonic GC content, is computed.
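
    The codon-level Shannon entropy mentioned in the record is a direct computation once a usage table is given. A minimal sketch with made-up counts (the paper's GenBank-wide frequencies are not reproduced here):

        import numpy as np

        def codon_entropy(counts):
            """Shannon entropy (bits) of a codon usage table given raw counts."""
            p = np.asarray(counts, dtype=float)
            p = p / p.sum()
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        # Hypothetical counts for the 64 codons; uniform usage would give log2(64) = 6 bits.
        counts = np.random.default_rng(0).integers(1, 1000, size=64)
        print(round(codon_entropy(counts), 3), "bits (6 bits for uniform usage)")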

  11. Use of colour for hand-filled form analysis and recognition

    OpenAIRE

    Sherkat, N; Allen, T; Wong, WS

    2005-01-01

    Colour information in form analysis is currently under-utilised. As technology has advanced and computing costs have reduced, the processing of forms in colour has now become practicable. This paper describes a novel colour-based approach to the extraction of filled data from colour form images. Images are first quantised to reduce the colour complexity and data is extracted by examining the colour characteristics of the images. The improved performance of the proposed method has been verifie...
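
    The quantisation step described here, reducing colour complexity before examining colour characteristics, can be as simple as keeping a few significant bits per RGB channel. A minimal sketch of such a uniform colour quantiser (not the paper's method):

        import numpy as np

        def quantise_colours(image, bits_per_channel=2):
            """Uniformly quantise an 8-bit RGB image to 2**(3*bits) colours."""
            step = 256 // (1 << bits_per_channel)        # bin width, e.g. 64 for 2 bits
            return (image // step) * step + step // 2    # mid-point reconstruction

        img = np.random.default_rng(1).integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
        q = quantise_colours(img)
        print(len(np.unique(q.reshape(-1, 3), axis=0)), "distinct colours after quantisation")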

  12. Projective Connections and the Algebra of Densities

    International Nuclear Information System (INIS)

    George, Jacob

    2008-01-01

    Projective connections first appeared in Cartan's papers in the 1920s. Since then they have resurfaced periodically in, for example, integrable systems and perhaps most recently in the context of so-called projectively equivariant quantisation. We recall the notion of projective connection and describe its relation with the algebra of densities on a manifold. In particular, we construct a Laplace-type operator on functions using a Thomas projective connection and a symmetric contravariant tensor of rank 2 ('upper metric').

  13. What's wrong with anomalous chiral gauge theory?

    International Nuclear Information System (INIS)

    Kieu, T.D.

    1994-05-01

    It is argued on general ground and demonstrated in the particular example of the Chiral Schwinger Model that there is nothing wrong with apparently anomalous chiral gauge theory. If quantised correctly, there should be no gauge anomaly and chiral gauge theory should be renormalisable and unitary, even in higher dimensions and with non-Abelian gauge groups. Furthermore, it is claimed that mass terms for gauge bosons and chiral fermions can be generated without spoiling the gauge invariance. 19 refs

  14. Rate-distortion analysis of dead-zone plus uniform threshold scalar quantization and its application--part II: two-pass VBR coding for H.264/AVC.

    Science.gov (United States)

    Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming

    2013-01-01

    In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of the dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for the generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method is of high stability and low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) an average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB for JM 16.0's method, with an average rate control error of 1.95%; and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
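
    The quantiser family analysed here widens the bin around zero (the dead zone) relative to a plain uniform quantiser, which suits sharply peaked residual distributions. A minimal sketch of the generic forward/inverse mapping, with rounding offset f and reconstruction offset delta (illustrative parameter values, not JM 16.0's):

        import numpy as np

        def dz_quantise(x, step, f=1/3):
            """Dead-zone quantiser: |level| = floor(|x|/step + f); f < 1/2 widens the zero bin."""
            return np.sign(x) * np.floor(np.abs(x) / step + f)

        def dz_reconstruct(level, step, delta=1/3):
            """Nearly uniform reconstruction: outputs placed delta*step into each bin."""
            return np.sign(level) * (np.abs(level) + delta) * step

        x = np.random.default_rng(2).laplace(scale=1.0, size=8)   # peaked 'residual' source
        level = dz_quantise(x, step=0.5)
        print(np.round(x, 3), level, np.round(dz_reconstruct(level, step=0.5), 3), sep="\n")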

  15. Uniform sources of ionizing radiation of extended area from radiotoned photographic film

    International Nuclear Information System (INIS)

    Thackray, M.

    1978-01-01

    The technique of toning photographic films, that have been uniformly exposed and developed, with radionuclides to provide uniform sources of ionizing radiation of extended area and their uses in radiography are discussed. The suitability of various radionuclides for uniform-plane sources is considered. (U.K.)

  16. Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm

    OpenAIRE

    Sandeep Kakde; Atish Khobragade; Shrikant Ambatkar; Pranay Nandanwar

    2017-01-01

    For the binary field and long code lengths, Low Density Parity Check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error correction performance and therefore enlarge the design space for communication systems. In this paper, we compare different digital modulation techniques and find that the BPSK modulation technique is better than the other modulation techniques in terms of BER. It also gives the error performance of the LDPC decoder over an AWGN channel using the Min-Sum algori...
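
    The heart of any min-sum LDPC decoder, layered or not, is the check-node update: each outgoing message takes the product of the signs and the minimum of the magnitudes of the other incoming messages. A minimal sketch for one check node (the layered schedule itself is not reproduced):

        import numpy as np

        def min_sum_check_update(llr_in):
            """Min-sum check-node update on the incoming LLRs of one check node."""
            llr_in = np.asarray(llr_in, dtype=float)
            out = np.empty_like(llr_in)
            for i in range(llr_in.size):
                others = np.delete(llr_in, i)
                out[i] = np.prod(np.sign(others)) * np.min(np.abs(others))
            return out

        print(min_sum_check_update([+2.0, -0.5, +1.5]))  # -> [-0.5  1.5 -0.5]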

  17. Een nieuwe naam voor Arenaria serpyllifolia L. var. macrocarpa Lloyd

    NARCIS (Netherlands)

    Gutermann, W.; Mennema, J.

    1983-01-01

    The name Arenaria serpyllifolia L. var. macrocarpa Lloyd is illegitimate because of the existence of the earlier, non-synonymous A. serpyllifolia β macrocarpa Godron. As at the level of variety no other name is available, we call the taxon Arenaria serpyllifolia L. var. lloydii (Jord.) Gutermann et

  18. CONSTRUCTION OF REGULAR LDPC LIKE CODES BASED ON FULL RANK CODES AND THEIR ITERATIVE DECODING USING A PARITY CHECK TREE

    Directory of Open Access Journals (Sweden)

    H. Prashantha Kumar

    2011-09-01

    Full Text Available Low density parity check (LDPC) codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical Shannon limit for a memoryless channel. LDPC codes are finding increasing use in applications like LTE networks, digital television, high-density data storage systems, deep space communication systems, etc. Several algebraic and combinatorial methods are available for constructing LDPC codes. In this paper we discuss a novel low-complexity algebraic method for constructing regular LDPC-like codes derived from full rank codes. We demonstrate that by employing these codes over AWGN channels, coding gains in excess of 2 dB over un-coded systems can be realized when soft iterative decoding using a parity check tree is employed.

  19. Source convergence diagnostics using Boltzmann entropy criterion application to different OECD/NEA criticality benchmarks with the 3-D Monte Carlo code Tripoli-4

    International Nuclear Information System (INIS)

    Dumonteil, E.; Le Peillet, A.; Lee, Y. K.; Petit, O.; Jouanne, C.; Mazzolo, A.

    2006-01-01

    The measurement of the stationarity of Monte Carlo fission source distributions in k_eff calculations plays a central role in the ability to discriminate between fake and 'true' convergence (in the case of a high dominance ratio or in the case of loosely coupled systems). Recent theoretical developments have been made in the study of source convergence diagnostics, using Shannon entropy. We will first recall those results, and we will then generalize them using the expression of Boltzmann entropy, highlighting the gain in terms of the various physical problems that we can treat. Finally we will present the results of several OECD/NEA benchmarks using the Tripoli-4 Monte Carlo code, enhanced with this new criterion. (authors)
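
    The Shannon-entropy diagnostic referred to here bins the fission-source sites on a spatial mesh each generation and tracks H = -sum_i p_i log2 p_i; H levelling off signals source convergence. A minimal 1-D sketch with synthetic site data (not Tripoli-4 output):

        import numpy as np

        def source_entropy(sites, edges):
            """Shannon entropy (bits) of fission-source sites binned on a mesh."""
            counts, _ = np.histogram(sites, bins=edges)
            p = counts[counts > 0] / counts.sum()
            return -np.sum(p * np.log2(p))

        edges = np.linspace(0.0, 100.0, 21)   # 20 spatial bins
        rng = np.random.default_rng(3)
        for gen in range(5):                  # synthetic generations tightening toward the centre
            sites = rng.normal(50.0, 30.0 - 5.0 * gen, size=10_000)
            print(f"generation {gen}: H = {source_entropy(sites, edges):.3f} bits")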

  20. Monte Carlo simulation of scatter in non-uniform symmetrical attenuating media for point and distributed sources

    International Nuclear Information System (INIS)

    Henry, L.J.; Rosenthal, M.S.

    1992-01-01

    We report results of scatter simulations for both point and distributed sources of 99mTc in symmetrical non-uniform attenuating media. The simulations utilized Monte Carlo techniques and were tested against experimental phantoms. Both point and ring sources were used inside a 10.5 cm radius acrylic phantom. Attenuating media consisted of combinations of water, ground beef (to simulate muscle mass), air and bone meal (to simulate bone mass). We estimated/measured energy spectra, detector efficiencies and peak height ratios for all cases. In all cases, the simulated spectra agree with the experimentally measured spectra within 2 SD. Detector efficiencies and peak height ratios also are in agreement. The Monte Carlo code is able to properly model the non-uniform attenuating media used in this project. With verification of the simulations, it is possible to perform initial evaluation studies of scatter correction algorithms by evaluating the mechanisms of action of the correction algorithm on the simulated spectra where the magnitude and sources of scatter are known. (author)

  1. QuSpin: a Python package for dynamics and exact diagonalisation of quantum many body systems part I: spin chains

    Directory of Open Access Journals (Sweden)

    Phillip Weinberg, Marin Bukov

    2017-02-01

    Full Text Available We present a new open-source Python package for exact diagonalization and quantum dynamics of spin(-photon) chains, called QuSpin, supporting the use of various symmetries in 1 dimension and (imaginary) time evolution for chains up to 32 sites in length. The package is well-suited to study, among others, quantum quenches at finite and infinite times, the Eigenstate Thermalisation hypothesis, many-body localisation and other dynamical phase transitions, periodically-driven (Floquet) systems, adiabatic and counter-diabatic ramps, and spin-photon interactions. Moreover, QuSpin's user-friendly interface can easily be used in combination with other Python packages, which makes it amenable to a high-level customisation. We explain how to use QuSpin using four detailed examples: (i) standard exact diagonalisation of the XXZ chain, (ii) adiabatic ramping of parameters in the many-body localised XXZ model, (iii) heating in the periodically-driven transverse-field Ising model in a parallel field, and (iv) quantised light-atom interactions: recovering the periodically-driven atom in the semi-classical limit of a static Hamiltonian.
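
    A minimal usage sketch along the lines of the package's documented interface (coupling values are arbitrary, and exact defaults, e.g. Pauli versus spin-1/2 operator conventions, may differ between QuSpin versions), diagonalising a small transverse-field Ising chain:

        import numpy as np
        from quspin.basis import spin_basis_1d    # Hilbert-space basis
        from quspin.operators import hamiltonian  # Hamiltonian constructor

        L = 8                                     # small chain, for speed
        basis = spin_basis_1d(L)
        J, h = 1.0, 0.809
        J_zz = [[-J, i, (i + 1) % L] for i in range(L)]   # periodic boundaries
        h_x = [[-h, i] for i in range(L)]
        static = [["zz", J_zz], ["x", h_x]]       # H = -J sum z_i z_{i+1} - h sum x_i
        H = hamiltonian(static, [], basis=basis, dtype=np.float64)
        print(H.eigsh(k=1, which="SA", return_eigenvectors=False))  # ground-state energy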

  2. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, X_i in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information.

  3. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps out at around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from object to detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.

  4. Evasive levels in quantisation through wavepacket coupling: a semi-classical investigation

    International Nuclear Information System (INIS)

    Amiot, P.; Giraud, B.

    1984-01-01

    A new method is presented to introduce classical mechanics elements into the problem of obtaining the spectrum of an operator H-circumflex(p-circumflex, q-circumflex). A finite-rank functional space is created by centering complex wavepackets on a discrete number of points on an equi-energy of the classical H(p,q) and by placing real wavepackets in the classically forbidden region. The latter span the active subspace, P, and the former the inactive subspace, Q, for an application of the method of Bloch-Horowitz. A semi-classical study of the Green function in the inactive subspace Q, classically allowed, gives a clear explanation of this phenomenon and sheds new light on the significance of this semi-classical approximation for the propagator. An extension to the problem of barrier penetration is proposed. (author)

  5. A Quantised State Systems Approach for Jacobian Free Extended Kalman Filtering

    DEFF Research Database (Denmark)

    Alminde, Lars; Bendtsen, Jan Dimon; Stoustrup, Jakob

    2007-01-01

    Model based methods for control of intelligent autonomous systems rely on a state estimate being available. One of the most common methods to obtain a state estimate for non-linear systems is the Extended Kalman Filter (EKF) algorithm. In order to apply the EKF an expression must be available...

  6. An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem

    Directory of Open Access Journals (Sweden)

    Tu Zhenyu

    2005-01-01

    Full Text Available A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty in the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallelly and serially concatenated codes, and particularly parallel and serial turbo codes where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provenly optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.

  7. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  8. Noncommutative configuration space. Classical and quantum mechanical aspects

    OpenAIRE

    Vanhecke, F. J.; Sigaud, C.; da Silva, A. R.

    2005-01-01

    In this work we examine noncommutativity of position coordinates in classical symplectic mechanics and its quantisation. In coordinates $\{q^i, p_k\}$ the canonical symplectic two-form is $\omega_0 = dq^i \wedge dp_i$. It is well known in symplectic mechanics [Souriau, Abraham, Guillemin] that the interaction of a charged particle with a magnetic field can be described in a Hamiltonian formalism without a choice of a potential. This is done by means of a modified symplectic two-form $\ome...

  9. Partial dynamical symmetries in quantal many-body systems

    International Nuclear Information System (INIS)

    Van Isacker, P.

    2001-01-01

    Partial dynamical symmetries are associated with Hamiltonians that are partially solvable. The determination of the properties of a quantal system of N interacting particles moving in an external potential requires the solution of the eigenvalue equation associated with a second-quantised Hamiltonian. In many situations of interest the Hamiltonian commutes with transformations that constitute a symmetry algebra G_sym. This characteristic opens a way to find all analytically solvable Hamiltonians. The author gives a brief review of some recent developments

  10. Planck’s radiation law, the light quantum, and the prehistory of indistinguishability in the teaching of quantum mechanics

    International Nuclear Information System (INIS)

    Passon, Oliver; Grebe-Ellis, Johannes

    2017-01-01

    Planck’s law for black-body radiation marks the origin of quantum theory and is discussed in all introductory (or advanced) courses on this subject. However, the question whether Planck really implied quantisation is debated among historians of physics. We present a simplified account of this debate which also sheds light on the issue of indistinguishability and Einstein’s light quantum hypothesis. We suggest that the teaching of quantum mechanics could benefit from including this material beyond the question of historical accuracy. (paper)

  11. Zero-forcing pre-coding for MIMO WiMAX transceivers: Performance analysis and implementation issues

    Science.gov (United States)

    Cattoni, A. F.; Le Moullec, Y.; Sacchi, C.

    Next generation wireless communication networks are expected to achieve ever increasing data rates. Multi-User Multiple-Input-Multiple-Output (MU-MIMO) is a key technique to obtain the expected performance, because such a technique combines the high capacity achievable using MIMO channel with the benefits of space division multiple access. In MU-MIMO systems, the base stations transmit signals to two or more users over the same channel, for this reason every user can experience inter-user interference. This paper provides a capacity analysis of an online, interference-based pre-coding algorithm able to mitigate the multi-user interference of the MU-MIMO systems in the context of a realistic WiMAX application scenario. Simulation results show that pre-coding can significantly increase the channel capacity. Furthermore, the paper presents several feasibility considerations for implementation of the analyzed technique in a possible FPGA-based software defined radio.
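
    A standard baseline for the interference-mitigating pre-coding discussed here is zero-forcing, which right-inverts the downlink channel so each user sees only its own stream: W = H^H (H H^H)^{-1}, followed by a power normalisation. A minimal sketch on a hypothetical channel (not the paper's WiMAX setup):

        import numpy as np

        rng = np.random.default_rng(4)
        K, M = 3, 4                                   # users, base-station antennas
        # Hypothetical i.i.d. Rayleigh downlink channel matrix (K x M).
        H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)

        W = H.conj().T @ np.linalg.inv(H @ H.conj().T)  # zero-forcing precoder
        W /= np.linalg.norm(W)                          # fix total transmit power

        # Effective channel H @ W is a scaled identity: no inter-user interference.
        print(np.round(H @ W, 3))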

  12. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays. This allows for flexibility of placement and high spatial diversity. However, it comes with the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before the transmission in order to reduce the power consumption and/or transmission bandwidth by reduction in the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging in comparison with regular audio coding. The signals which are acquired by the microphones...

  13. Analysis of Paralleling Limited Capacity Voltage Sources by Projective Geometry Method

    Directory of Open Access Journals (Sweden)

    Alexandr Penin

    2014-01-01

    Full Text Available The droop current-sharing method for voltage sources of limited capacity is considered. The influence of the equalizing resistors and the load resistor on the uniform distribution of the relative values of the currents, when the actual loading corresponds to the capacity of a particular source, is investigated. Novel concepts for the quantitative representation of the operating regimes of the sources are introduced using the projective geometry method.
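
    In droop sharing, each source is modelled as an ideal voltage behind its equalizing (droop) resistor; the bus voltage follows from a single node equation, and the droop resistances set how the load current divides. A minimal sketch with hypothetical element values:

        import numpy as np

        V0 = np.array([12.0, 12.1])      # open-circuit voltages, V (hypothetical)
        Rd = np.array([0.10, 0.15])      # equalizing (droop) resistances, ohm
        RL = 2.0                         # load resistance, ohm

        # Node equation at the bus: sum((V0 - Vbus)/Rd) = Vbus/RL.
        Vbus = (V0 / Rd).sum() / (1.0 / RL + (1.0 / Rd).sum())
        I = (V0 - Vbus) / Rd             # current delivered by each source
        print(f"bus voltage {Vbus:.3f} V, source currents {np.round(I, 3)} A")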

  14. The Visual Code Navigator : An Interactive Toolset for Source Code Investigation

    NARCIS (Netherlands)

    Lommerse, Gerard; Nossin, Freek; Voinea, Lucian; Telea, Alexandru

    2005-01-01

    We present the Visual Code Navigator, a set of three interrelated visual tools that we developed for exploring large source code software projects from three different perspectives, or views: The syntactic view shows the syntactic constructs in the source code. The symbol view shows the objects a

  15. Multi-Level Wavelet Shannon Entropy-Based Method for Single-Sensor Fault Location

    Directory of Open Access Journals (Sweden)

    Qiaoning Yang

    2015-10-01

    Full Text Available In actual application, sensors are prone to failure because of harsh environments, battery drain, and sensor aging. Sensor fault location is an important step for follow-up sensor fault detection. In this paper, two new multi-level wavelet Shannon entropies (multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy) are defined. They take full advantage of the sensor fault frequency distribution and energy distribution across multiple subbands in the wavelet domain. Based on the multi-level wavelet Shannon entropy, a method is proposed for single-sensor fault location. The method firstly uses a criterion of maximum energy-to-Shannon entropy ratio to select the appropriate wavelet base for signal analysis. Then multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy are used to locate the fault. The method is validated using practical chemical gas concentration data from a gas sensor array. Compared with wavelet time Shannon entropy and wavelet energy Shannon entropy, the experimental results demonstrate that the proposed method can achieve accurate location of a single sensor fault and has good anti-noise ability. The proposed method is feasible and effective for single-sensor fault location.
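
    The wavelet-base selection criterion named here, the maximum energy-to-Shannon-entropy ratio, can be sketched with PyWavelets: decompose the signal, treat the subband energies as a distribution, and compare E/H across candidate bases (illustrative only; the paper's multi-level entropies are more elaborate):

        import numpy as np
        import pywt

        def energy_to_entropy_ratio(signal, wavelet, level=3):
            """Total energy divided by the Shannon entropy of the subband energy distribution."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            p = energies / energies.sum()
            return energies.sum() / -np.sum(p[p > 0] * np.log2(p[p > 0]))

        t = np.linspace(0.0, 1.0, 1024)
        sig = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.default_rng(5).normal(size=t.size)
        for w in ("db4", "sym5", "coif3"):        # candidate wavelet bases
            print(w, round(energy_to_entropy_ratio(sig, w), 2))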

  16. Transmission imaging with a coded source

    International Nuclear Information System (INIS)

    Stoner, W.W.; Sage, J.P.; Braun, M.; Wilson, D.T.; Barrett, H.H.

    1976-01-01

    The conventional approach to transmission imaging is to use a rotating anode x-ray tube, which provides the small, brilliant x-ray source needed to cast sharp images of acceptable intensity. Stationary anode sources, although inherently less brilliant, are more compatible with the use of large area anodes, and so they can be made more powerful than rotating anode sources. Spatial modulation of the source distribution provides a way to introduce detailed structure in the transmission images cast by large area sources, and this permits the recovery of high resolution images, in spite of the source diameter. The spatial modulation is deliberately chosen to optimize recovery of image structure; the modulation pattern is therefore called a ''code.'' A variety of codes may be used; the essential mathematical property is that the code possess a sharply peaked autocorrelation function, because this property permits the decoding of the raw image cast by the coded source. Random point arrays, non-redundant point arrays, and the Fresnel zone pattern are examples of suitable codes. This paper is restricted to the case of the Fresnel zone pattern code, which has the unique additional property of generating raw images analogous to Fresnel holograms. Because the spatial frequencies of these raw images are extremely coarse compared with those of actual holograms, a photoreduction step onto a holographic plate is necessary before the decoded image may be displayed with the aid of coherent illumination

  17. On massive vector bosons and Abelian magnetic monopoles in D = (3 + 1): a possible way to quantize the topological mass parameter

    International Nuclear Information System (INIS)

    Moura-Melo, Winder A.; Panza, N.; Helayel Neto, J.A.

    1998-12-01

    An Abelian gauge model, with vector and 2-form potential fields linked by a topological mass term that mixes the two Abelian factors, is shown to exhibit Dirac-like magnetic monopoles in the presence of a matter background. In addition, considering a 'non-minimal coupling' between the fermions and the tensor fields, we obtain a generalized quantisation condition that involves, among others, the mass parameter. Also, it is explicitly shown that one-loop (finite) corrections do not shift the value of such a mass parameter. 19 refs

  18. Concluding talk

    International Nuclear Information System (INIS)

    Hong-Mo, C.

    1984-05-01

    The concluding talk reviews the present state of knowledge of elementary particle physics, based on the research papers presented at the VII Warsaw symposium. Most of the meeting was devoted to testing the standard electroweak theory and QCD, and these topics were discussed in detail. Other research work on hadron-nucleus collisions, solitons, skyrmion, cluster models, diquarks, infra-red divergences, jet calculus and quantised string was considered. Experimental facilities for future exploration studies, at DESY, CERN and the U.S. were mentioned. (U.K.)

  19. A multi-analyzer crystal spectrometer (MAX) for pulsed neutron sources

    International Nuclear Information System (INIS)

    Tajima, K.; Ishikawa, Y.; Kanai, K.; Windsor, C.G.; Tomiyoshi, S.

    1982-03-01

    The paper describes the principle and initial performance of a multi-analyzer crystal spectrometer (MAX) recently installed at the KENS spallation neutron source at Tsukuba. The spectrometer is able to make time of flight scans along a desired direction in reciprocal space, covering a wide range of the energy transfers corresponding to the fifteen analyzer crystals. The constant Q or constant E modes of operation can be performed. The spectrometer is particularly suited for studying collective excitations such as phonons and magnons to high energy transfers using single crystal samples. (author)

  20. Research on Primary Shielding Calculation Source Generation Codes

    Science.gov (United States)

    Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun

    2017-09-01

    Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDFs) for the source particle sample code of the J Monte Carlo Transport (JMCT) code, and a source particle sample code is developed to sample source particle directions, types, coordinates, energies and weights from the CDFs. A source generation code is also developed to transform three-dimensional (3D) power distributions in xyz geometry to source distributions in r-θ-z geometry for the J Discrete Ordinate Transport (JSNT) code. Validation on the PSC models of the Qinshan No.1 nuclear power plant (NPP) and the CAP1400 and CAP1700 reactors is performed. Numerical results show that the theoretical model and the codes are both correct.
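
    The CDF-plus-sampling split described here is ordinary inverse-transform sampling: the generation code tabulates the cumulative distribution of the source, and the sample code maps uniform random numbers through it. A minimal 1-D sketch with a hypothetical power distribution:

        import numpy as np

        # Hypothetical relative power in 5 axial zones (the 'source generation' step).
        power = np.array([0.8, 1.2, 1.5, 1.1, 0.4])
        cdf = np.cumsum(power) / power.sum()        # tabulated CDF, last entry 1.0

        def sample_zone(n, rng):
            """Inverse-transform sampling: uniform deviates mapped through the CDF."""
            return np.searchsorted(cdf, rng.random(n))

        zones = sample_zone(100_000, np.random.default_rng(6))
        print(np.bincount(zones) / zones.size)      # ~ power / power.sum()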

  1. Long GRBs sources population non-uniformity

    Science.gov (United States)

    Arkhangelskaja, Irene

    Long GRBs are observed over a very wide energy band, and it is possible to separate two subsets of long GRBs according to the presence of a high-energy component (E > 500 MeV). In events of the first type, the energy spectra in the low- and high-energy intervals are similar (as for GRB 021008) and are described by Band, power-law or broken power-law models, like the usual bursts without emission in the tens-of-MeV region; for example, the Band spectrum of GRB 080916C covers 6 orders of magnitude. Events of the second type contain a new, additional high-energy spectral component (for example, GRB 050525B and GRB 090902B). Both types of GRBs have been observed since the beginning of the CGRO mission. Low-energy precursors are typical of bursts of all types. The temporal profiles of the two types can be similar in the various energy regions during some events or different in other cases. According to preliminary data analysis, the absence of hard-to-soft evolution in the low-energy band and/or the presence of high-energy precursors for some events are special features of the second class of GRBs, and this gives grounds to suppose that the sources of these two GRB subsets differ. Also, analysis of the long-GRB redshift distribution has shown that its shape contradicts that of a uniformly distributed population of objects in our Metagalaxy, for samples based on both the total set and the various redshift-determination methods. This evidence allows a preliminary conclusion that the population of long-GRB sources is non-uniform.

  2. A new mini-extrapolation chamber for beta source uniformity measurements

    International Nuclear Information System (INIS)

    Oliveira, M.L.; Caldas, L.V.E.

    2006-01-01

    According to recent international recommendations, beta particle sources should be specified in terms of absorbed dose rate to water at the reference point. However, because of the clinical use of these sources, additional information should be supplied in the calibration reports. This additional information includes the source uniformity. A new small-volume extrapolation chamber was designed and constructed at the Calibration Laboratory at Instituto de Pesquisas Energeticas e Nucleares, IPEN, Brazil, for the calibration of 90Sr+90Y ophthalmic plaques. This chamber can be used as a primary standard for the calibration of this type of source. Recent additional studies showed the feasibility of using this chamber to perform source uniformity measurements. Because of the small effective electrode area, it is possible to perform independent measurements by varying the chamber position in small steps. The aim of the present work was to study the uniformity of a 90Sr+90Y plane ophthalmic plaque using the mini-extrapolation chamber developed at IPEN. The uniformity measurements were performed by varying the chamber position in steps of 2 mm along the source central axes (x- and y-directions) and by varying the chamber position off-axis in 3 mm steps. The results obtained showed that this small-volume chamber can be used for this purpose with a great advantage: it is a direct method, making a prior calibration of the measurement device against a reference instrument unnecessary, and it provides real-time results, reducing the time necessary for the study and for the determination of the uncertainties related to the measurements. (authors)

  3. Digital Documentation of Frank Lloyd Wright's Masterpiece, Fallingwater

    Science.gov (United States)

    Jerome, P.; Emilio, D.

    2017-08-01

    Since 1988, the professional staff of Architectural Preservation Studio (APS) has been involved with the conservation of Frank Lloyd Wright's Fallingwater in Mill Run, PA. Designed and erected from 1935 to 1939 as a weekend home for the Kaufmann family, the complex consists of the main house and guest house. After five years of reports and prototype repairs, we produced a two-volume master plan. Using original Frank Lloyd Wright drawings from Avery Library as background drawings, we measured every surface and reproduced the drawings in CAD, also developing elevations of every room. Stone-by-stone drawings of every flagstone floor and terrace scheduled to be lifted were also created using overlapping film photography that was assembled into a photo mosaic. By 2005, we designed, administered and completed a four-phase exterior restoration, with the paint-stripping and repainting of interior rooms being performed on an ongoing basis during the brief winter period when the building is closed to the public. In 2016, we were invited back to the site to review conditions and advise on routine maintenance. At that time we proposed to re-document the buildings, this time using laser-scanning. Laser-scanning of the exterior was performed in May of 2016, and of the interior in March 2017, each over the course of four days. This paper will make a comparison between manual and digital techniques in terms of Fallingwater's documentation.

  4. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, offering to shift processing steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide

  5. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.

  6. Die ‘vergroening’ van die Christelike godsdiens: Charles Darwin, Pierre Teilhard de Chardin en Lloyd Geering

    Directory of Open Access Journals (Sweden)

    Izak J.J. (Sakkie) Spangenberg

    2014-11-01

    Full Text Available The greening of Christianity: Charles Darwin, Pierre Teilhard de Chardin and Lloyd Geering. Since the time of Charles Darwin, evolutionary biology has challenged the metanarrative of Christianity, which can be summarised as Fall-Redemption-Judgement. Pierre Teilhard de Chardin tried to circumvent these challenges by integrating the traditional Christian doctrines with evolutionary biology. However, he did not succeed, since the Catholic Church time and again vetoed his theological publications. A number of Protestant theologians promoted his views, but even they could not convince ordinary Christians to accept them: these were too esoteric for Christians, most of whom were convinced that accepting the theory of evolution would eventually undermine their faith. In recent years Lloyd Geering has argued a case for the creation of a new narrative in which the Big Bang and the theory of evolution do play a role. He calls it the 'Greening of Christianity'. This article discusses the metanarrative of Christianity and the challenges the theory of evolution presents to it, before assessing the views of Pierre Teilhard de Chardin and Lloyd Geering.

  7. Frank Lloyd Wright - väike maja saab suureks : meistriklass / Pille Nagel

    Index Scriptorium Estoniae

    Nagel, Pille

    2006-01-01

    On Frank Lloyd Wright's (1867-1959) residential work: prairie houses, Usonian houses, architecture that begins with the plot and the landscape, roofs and horizontality, the plan, changes of levels and heights, windows, carports, materials and colours, privacy and openness, the human scale, etc. 11 colour views, plan.

  8. Einstein, Podolsky, Rosen, and Shannon

    OpenAIRE

    Peres, Asher

    2003-01-01

    The EPR paradox (1935) is reexamined in the light of Shannon's information theory (1948). The EPR argument did not take into account that the observers' information was localized, like any other physical object.

  9. Class of near-perfect coded apertures

    International Nuclear Information System (INIS)

    Cannon, T.M.; Fenimore, E.E.

    1977-01-01

    Coded aperture imaging of gamma ray sources has long promised an improvement in the sensitivity of various detector systems. The promise has remained largely unfulfilled, however, for either one of two reasons. First, the encoding/decoding method produces artifacts, which even in the absence of quantum noise restrict the quality of the reconstructed image. This is true of most correlation-type methods. Second, if the decoding procedure is of the deconvolution variety, small terms in the transfer function of the aperture can lead to excessive noise in the reconstructed image. It is proposed to circumvent both of these problems by use of a uniformly redundant array (URA) as the coded aperture in conjunction with a special correlation decoding method. It is shown that the reconstructed image in the URA system contains virtually uniform noise regardless of the structure in the original source. Therefore, the improvement over a single pinhole camera will be relatively larger for the brighter points in the source than for the low intensity points. In the case of a large detector background noise the URA will always do much better than the single pinhole regardless of the structure of the object. In the case of a low detector background noise, the improvement of the URA over the single pinhole will have a lower limit of approximately (1/2f)^(1/2), where f is the fraction of the field of view which is uniformly filled by the object
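
    The URA property exploited here is a sharply peaked, flat-sidelobe cyclic autocorrelation. A minimal 1-D illustration using a quadratic-residue (Legendre) sequence of prime length p ≡ 3 (mod 4), whose residues form a cyclic difference set; the 2-D arrays of the record rest on the same principle:

        import numpy as np

        p = 31                                    # prime with p % 4 == 3
        qr = {(k * k) % p for k in range(1, p)}   # quadratic residues mod p
        a = np.array([1 if i in qr else 0 for i in range(p)])

        # Cyclic autocorrelation: a single peak at zero shift and equal sidelobes,
        # because the residues form a (31, 15, 7) cyclic difference set.
        corr = np.array([np.sum(a * np.roll(a, s)) for s in range(p)])
        print(corr)                               # [15 7 7 ... 7]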

  10. Use of the RESRAD-BUILD code to calculate building surface contamination limits

    International Nuclear Information System (INIS)

    Faillace, E.R.; LePoire, D.; Yu, C.

    1996-01-01

    Surface contamination limits in buildings were calculated for 226Ra, 230Th, 232Th, and natural uranium on the basis of a 1 mSv y^-1 (100 mrem y^-1) dose limit. The RESRAD-BUILD computer code was used to calculate these limits for two scenarios: building occupancy and building renovation. RESRAD-BUILD is a pathway analysis model designed to evaluate the potential radiological dose incurred by individuals working or living inside a building contaminated with radioactive material. Six exposure pathways are considered in the RESRAD-BUILD code: (1) external exposure directly from the source; (2) external exposure from materials deposited on the floor; (3) external exposure due to air submersion; (4) inhalation of airborne radioactive particles; (5) inhalation of indoor radon aerosol progeny; and (6) inadvertent ingestion of radioactive material, either directly from the sources or from materials deposited on surfaces. The code models point, line, area, and volume sources and calculates the effects of radiation shielding, building ventilation, and ingrowth of radioactive decay products. A sensitivity analysis was performed to determine how variations in input parameters would affect the surface contamination limits. In most cases considered, inhalation of airborne radioactive particles was the primary exposure pathway. However, the direct external exposure contribution from surfaces contaminated with 226Ra was in some cases the dominant pathway for building occupancy, depending on the room size, ventilation rates, and surface release fractions. The surface contamination limits are most restrictive for 232Th, followed by 230Th, natural uranium, and 226Ra. The results are compared with the surface contamination limits in the Nuclear Regulatory Commission's Regulatory Guide 1.86, which are most restrictive for 226Ra and 230Th, followed by 232Th, and are least restrictive for natural uranium

  11. Particle transport across a circular shear layer with coherent structures

    International Nuclear Information System (INIS)

    Nielsen, A.H.; Lynov, J.P.; Juul Rasmussen, J.

    1998-01-01

    In the study of the dynamics of coherent structures, forced circular shear flows offer many desirable features. The inherent quantisation of circular geometries due to the periodic boundary conditions makes it possible to design experiments in which the spatial and temporal complexity of the coherent structures can be accurately controlled. Experiments on circular shear flows demonstrating the formation of coherent structures have been performed in different physical systems, including quasi-neutral plasmas, non-neutral plasmas and rotating fluids. In this paper we investigate the evolution of such coherent structures by solving the forced incompressible Navier-Stokes equations numerically using a spectral code. The model is formulated in the context of a rotating fluid but applies equally well to low frequency electrostatic oscillations in a homogeneous magnetized plasma. In order to reveal the Lagrangian properties of the flow, and in particular to investigate the transport capacity in the shear layer, passive particles are traced by the velocity field. (orig.)

  12. Regaining Weaver and Shannon

    Directory of Open Access Journals (Sweden)

    Gary Genosko

    2008-01-01

    Full Text Available My claim is that communication considered from the standpoint of how it is modeled must not only reckon with Claude E. Shannon and Warren Weaver but regain their pioneering efforts in new ways. I want to regain two neglected features. I signal these ends by simply reversing the order in which their names commonly appear. First, the recontextualization of Shannon and Weaver requires an investigation of the technocultural scene of information 'handling' embedded in their groundbreaking postwar labours; not incidentally, it was Harold D. Lasswell, whose work in the 1940s is often linked with Shannon and Weaver's, who made a point of distinguishing between those who affect the content of messages (controllers) as opposed to those who handle such messages without modifying them (other than accidentally). Although it will not be possible to maintain such a hard and fast distinction that ignores scenes of encoding and decoding, Lasswell's (1964: 42-3) examples of handlers include key figures such as 'dispatchers, linemen, and messengers connected with telegraphic communication' whose activities will prove to be important for my reading of the Shannon and Weaver essays. Telegraphy and its occupational cultures are the technosocial scenes informing the Shannon and Weaver model. Second, I will pay special attention to Weaver's contribution, despite a tendency to erase him altogether by means of a general scientific habit of listing the main author first and then attributing authorship only to the first name on the list (although this differs within scientific disciplines, particularly in the health field where the name of the last author is in the lead, so to speak). I begin with a displacement of hierarchy and authority. I am inclined to simply state, for those who, in the manner of Sherlock Holmes, 'know my method', that I focus my attention on the less well-known half of thinking pairs – on Roger Caillois instead of Georges Bataille, on F

  13. Statistics and the shell model

    International Nuclear Information System (INIS)

    Weidenmueller, H.A.

    1985-01-01

    Starting with N. Bohr's paper on compound-nucleus reactions, we confront regular dynamical features and chaotic motion in nuclei. The shell-model and, more generally, mean-field theories describe average nuclear properties which are thus identified as regular features. The fluctuations about the average show chaotic behaviour of the same type as found in classical chaotic systems upon quantisation. These features are therefore generic and quite independent of the specific dynamics of the nucleus. A novel method to calculate fluctuations is discussed, and the results of this method are described. (orig.)

  14. LWR-core behaviour project

    International Nuclear Information System (INIS)

    Paratte, J.M.

    1982-07-01

    The LWR-Core behaviour project concerns the mathematical simulation of a light water reactor in normal operation (emergency situations excluded). Computational tools are assembled, i.e. programs and libraries of data. These computational tools can likewise be used in nuclear power applications, industry and control applications. The project is divided into three parts: the development and application of calculation methods for the quantitative determination of LWR physics; investigation of the behaviour of nuclear fuels under irradiation, with special attention to higher burnup; and simulation of the operating transients of nuclear power stations. (A.N.K.)

  15. Magnetic charge in an octonionic field theory

    International Nuclear Information System (INIS)

    Lassig, C.C.; Joshi, G.C.

    1996-01-01

    The violation of the Jacobi identity by the presence of magnetic charge is accommodated by using an explicitly nonassociative theory of octonionic fields. Lagrangian and Hamiltonian formalisms are constructed, and issues of quantisation are discussed. Finally, an extension of these concepts to string theory is contemplated. The two main problems that seem to arise in this octonionic field theory are the difficulty of constructing an appropriate action to suit the desired equations of motion, and the failure to complete a Hamiltonian formalism and hence quantize the theory. 8 refs

  16. On massive vector bosons and Abelian magnetic monopoles in D = (3 + 1): a possible way to quantize the topological mass parameter

    Energy Technology Data Exchange (ETDEWEB)

    Moura-Melo, Winder A.; Panza, N.; Helayel Neto, J.A. [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil)

    1998-12-01

    An Abelian gauge model, with vector and 2-form potential fields linked by a topological mass term that mixes the two Abelian factors, is shown to exhibit Dirac-like magnetic monopoles in the presence of a matter background. In addition, considering a 'non-minimal coupling' between the fermions and the tensor fields, we obtain a generalized quantisation condition that involves, among others, the mass parameter. Also, it is explicitly shown that one-loop (finite) corrections do not shift the value of such a mass parameter. (author)

  17. Solitons

    International Nuclear Information System (INIS)

    Bullough, R.K.

    1978-01-01

    Two sorts of solitons are considered - the classical soliton, a solitary wave which shows great stability in collision with other solitary waves, and the quantal, that is quantised, soliton. Solitons as mathematical objects have excited theoreticians because of their wide ranging applications in physics. They appear as solutions of particular nonlinear wave equations which often have a certain universal significance. The importance of solitons in modern physics is discussed with especial reference to; nonlinearity and solitons, the nonlinear Schroedinger equation, the sine-Gordon equation, notional spins and particle physics. (U.K.)

  18. Application of a Lorentz transformation in six dimensions to an extension of dirac equation

    International Nuclear Information System (INIS)

    Sabry, A.A.

    2000-01-01

    On applying a six-dimensional Lorentz transformation (by adding three time components to the usual 3 space components), a covariant extended Dirac equation in this six-dimensional space is suggested. From the free-field solution of the extended equation we obtained four solutions for E, the first component of energy, which depends on very big masses M and M_0. The observed masses of the leptons and their corresponding neutrinos, satisfying the same free-field equation, can then be obtained on using second quantisation procedures on the field operators

  19. Sharp lower bounds on the extractable randomness from non-uniform sources

    NARCIS (Netherlands)

    Skoric, B.; Obi, C.; Verbitskiy, E.A.; Schoenmakers, B.

    2011-01-01

    Extraction of uniform randomness from (noisy) non-uniform sources is an important primitive in many security applications, e.g. (pseudo-)random number generators, privacy-preserving biometrics, and key storage based on Physical Unclonable Functions. Generic extraction methods exist, using universal
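
    The figure of merit in this setting is the min-entropy of the non-uniform source, H_min = -log2(max_x p(x)); roughly H_min nearly uniform bits are extractable. A minimal sketch on a hypothetical biased source:

        import numpy as np

        def min_entropy(p):
            """Min-entropy in bits: -log2 of the most likely outcome's probability."""
            return -np.log2(np.max(p))

        # Hypothetical biased 8-symbol source (a uniform one would give 3 bits).
        p = np.array([0.30, 0.20, 0.15, 0.10, 0.10, 0.05, 0.05, 0.05])
        print(f"H_min = {min_entropy(p):.3f} bits, Shannon H = {-np.sum(p * np.log2(p)):.3f} bits")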

  20. Shannon versus Kullback-Leibler entropies in nonequilibrium random motion

    International Nuclear Information System (INIS)

    Garbaczewski, Piotr

    2005-01-01

    We analyze dynamical properties of the Shannon information entropy of a continuous probability distribution which is driven by a standard diffusion process. This entropy choice is confronted with another option, employing the conditional Kullback-Leibler entropy. Both entropies discriminate among various probability distributions, either statically or in the time domain. An asymptotic approach towards equilibrium is typically monotonic in terms of the Kullback entropy. The Shannon entropy time rate need not be positive and is a sensitive indicator of the power transfer processes (removal/supply) due to an active environment. In the case of Smoluchowski diffusions, the Kullback entropy time rate coincides with the Shannon entropy 'production' rate
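
    In discrete form the two functionals compared here are the Shannon entropy H(p) = -∫ p ln p dx and the Kullback-Leibler entropy D(p||q) = ∫ p ln(p/q) dx against a reference (e.g. equilibrium) density q. A minimal sketch with Gaussians standing in for a spreading density and its equilibrium:

        import numpy as np

        x = np.linspace(-10.0, 10.0, 2001)
        dx = x[1] - x[0]

        def gaussian(sigma):
            return np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

        def shannon(p):
            return -np.sum(p * np.log(p)) * dx        # differential entropy, nats

        def kullback(p, q):
            return np.sum(p * np.log(p / q)) * dx     # relative entropy, nats

        q = gaussian(3.0)                             # stand-in equilibrium density
        for sigma in (1.0, 1.5, 2.0, 2.5):            # spreading, diffusion-like densities
            p = gaussian(sigma)
            print(f"sigma={sigma}: H={shannon(p):+.3f}  D(p||q)={kullback(p, q):.3f}")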

  1. Code Forking, Governance, and Sustainability in Open Source Software

    OpenAIRE

    Juho Lindman; Linus Nyman

    2013-01-01

    The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibilit...

  2. LA ESTRUCTURA ORGÁNICA EN LOS RASCACIELOS DE FRANK LLOYD WRIGHT / The organic structure in the skyscrapers of Frank Lloyd Wright

    Directory of Open Access Journals (Sweden)

    Alfonso Diaz Segura, Ricardo Meri de la Maza, Bartolomé Serra Soriano

    2013-05-01

    Full Text Available SUMMARY: In general, the structure of modernity serves a new, fluid and continuous, spatial conception, condensed around the "free floor" concept. However, the wealth and complexity of articulation that these floors acquire are not seen in the section of the buildings, especially in the case of skyscrapers. The principle of growth by superimposition of equal floors and the utilitarian character of the structure in this type of building nullify the iconographic character of the structure and its spatial integration. Frank Lloyd Wright develops a structure for his few skyscrapers that integrates space and form in a natural way, thus surpassing both the functional simplification of the Chicago School and the iconographic value of the European experiences.

  3. Evaluating the Performance of IPTV over Fixed WiMAX

    Science.gov (United States)

    Hamodi, Jamil; Salah, Khaled; Thool, Ravindra

    2013-12-01

    IEEE specifies different modulation techniques for WiMAX, namely BPSK, QPSK, 16-QAM and 64-QAM. This paper studies the performance of Internet Protocol Television (IPTV) over a fixed WiMAX system considering different combinations of digital modulation. The performance is studied taking into account a number of key system parameters, which include variation in the video coding, path loss, scheduling service classes, and different-rate codes in FEC channel coding. The performance study was conducted using OPNET simulation, in terms of packet loss, packet jitter delay, end-to-end delay, and network throughput. Simulation results show that higher-order modulation and coding schemes (namely, 16-QAM and 64-QAM) yield better performance than QPSK.

  4. Computer code determination of tolerable accel current and voltage limits during startup of an 80 kV MFTF sustaining neutral beam source

    International Nuclear Information System (INIS)

    Mayhall, D.J.; Eckard, R.D.

    1979-01-01

    We have used a Lawrence Livermore Laboratory (LLL) version of the WOLF ion source extractor design computer code to determine tolerable accel current and voltage limits during startup of a prototype 80 kV Mirror Fusion Test Facility (MFTF) sustaining neutral beam source. Arc current limits are also estimated. The source extractor has gaps of 0.236, 0.721, and 0.155 cm. The effective ion mass is 2.77 AMU. The measured optimum accel current density is 0.266 A/cm². The gradient grid electrode runs at 5/6 V_a (accel voltage). The suppressor electrode voltage is zero for V_a < 3 kV and -3 kV for V_a ≥ 3 kV. The accel current density for optimum beam divergence is obtained for 1 kV ≤ V_a ≤ 80 kV, as are the beam divergence and emittance.

  5. Looking and Learning: The Solomon R. Guggenheim Museum, Frank Lloyd Wright

    Science.gov (United States)

    Vatsky, Sharon

    2007-01-01

    Frank Lloyd Wright was born and raised on the farmlands of Wisconsin. His mother had a vision that her son would become a great architect. Wright was raised with strong guiding principles, a love of nature, a belief in the unity of all things, and a respect for discipline and hard work. He created the philosophy of "organic architecture," which…

  6. On the embedding of quantum field theory on curved spacetimes into loop quantum gravity

    International Nuclear Information System (INIS)

    Stottmeister, Alexander

    2015-01-01

    The main theme of this thesis is an investigation into possible connections between loop quantum gravity and quantum field theory on curved spacetimes: On the one hand, we aim for the formulation of a general framework that allows for a derivation of quantum field theory on curved spacetimes in a semi-classical limit. On the other hand, we discuss representation-theoretical aspects of loop quantum gravity and quantum field theory on curved spacetimes, as both presumably influence each other in the aforesaid semi-classical limit. Regarding the first point, we investigate the possible implementation of the Born-Oppenheimer approximation in the sense of space-adiabatic perturbation theory in models of loop quantum gravity type. In the course of this, we argue for the need for a Weyl quantisation and an associated symbolic calculus for loop quantum gravity, which we then successfully define, at least to a certain extent. The compactness of the Lie groups on which models à la loop quantum gravity are based turns out to be a main obstacle to a fully satisfactory definition of a Weyl quantisation. Finally, we apply our findings to some toy models of linear scalar quantum fields on quantum cosmological spacetimes and discuss the implementation of space-adiabatic perturbation theory therein. In view of the second point, we start with a discussion of the microlocal spectrum condition for quantum fields on curved spacetimes and how it might be translated to a background-independent Hamiltonian quantum theory of gravity, like loop quantum gravity. The relevance of this lies in the fact that the microlocal spectrum condition selects a class of physically relevant states of the quantum matter fields and is, therefore, expected to play an important role in the aforesaid semi-classical limit of gravity-matter systems. Following this, we switch our perspective and analyse the representation theory of loop quantum gravity. We find some intriguing relations between the...

  7. Potential of the neutron Lloyd's mirror interferometer for the search for new interactions

    Energy Technology Data Exchange (ETDEWEB)

    Pokotilovski, Yu. N., E-mail: pokot@nf.jinr.ru [Joint Institute for Nuclear Research (Russian Federation)

    2013-04-15

    We discuss the potential of the neutron Lloyd's mirror interferometer in a search for new interactions at small scales. We consider three hypothetical interactions that may be tested using the interferometer. The chameleon scalar field, proposed to solve the enigma of the accelerating expansion of the Universe, produces an interaction between particles and matter. The axion-like spin-dependent coupling between a neutron and nuclei and/or electrons may result in a P- and T-noninvariant interaction with matter. Hypothetical non-Newtonian gravitational interactions mediate an additional short-range potential between neutrons and bulk matter. These interactions between the neutron and the mirror of a Lloyd-type neutron interferometer cause a phase shift of neutron waves. We estimate the sensitivity and systematic effects of possible experiments.

  8. Shannon information entropy in heavy-ion collisions

    Science.gov (United States)

    Ma, Chun-Wang; Ma, Yu-Gang

    2018-03-01

    The general idea of information entropy provided by C.E. Shannon "hangs over everything we do" and can be applied to a great variety of problems once the connection between a distribution and the quantities of interest is found. The Shannon information entropy essentially quantifies the information carried by a quantity with a specific distribution, and information-entropy-based methods have been deeply developed in many scientific areas, including physics. The dynamical properties of the heavy-ion collision (HIC) process make the nuclear matter and its evolution difficult and complex to study, and here Shannon information entropy theory can provide new methods and observables to understand the physical phenomena both theoretically and experimentally. To better understand the processes of HICs, the main characteristics of typical models, including quantum molecular dynamics models, thermodynamics models, and statistical models, are briefly introduced. The typical applications of Shannon information theory in HICs are collected; they cover the chaotic behavior in the branching process of hadron collisions, the liquid-gas phase transition in HICs, and the isobaric difference scaling phenomenon for intermediate-mass fragments produced in HICs of neutron-rich systems. Even though the present applications in heavy-ion collision physics are still relatively simple, they shed light on the key questions being pursued. It is suggested to further develop information entropy methods in nuclear reaction models, as well as new analysis methods to study the properties of nuclear matter in HICs, especially the evolution of the dynamical system.

  9. Disorder and Interaction Effects in Quantum Wires

    International Nuclear Information System (INIS)

    Smith, L W; Ritchie, D A; Farrer, I; Griffiths, J P; Jones, G A C; Thomas, K J; Pepper, M

    2012-01-01

    We present conductance measurements of quasi-one-dimensional quantum wires affected by random disorder in a GaAs/AlGaAs heterostructure. In addition to quantised conductance plateaux, we observe structure superimposed on the conductance characteristics when the channel is wide and the density is low. Magnetic field and temperature are varied to characterize the conductance features which depend on the lateral position of the 1D channel formed in a split-gate device. Our results suggest that there is enhanced backscattering in the wide channel limit, which gives rise to quantum interference effects. When the wires are free of disorder and wide, the confinement is weak so that the mutual repulsion of the electrons forces a single row to split into two. The relationship of this topological change to the disorder in the system will be discussed.

  10. A hybrid video compression based on zerotree wavelet structure

    International Nuclear Information System (INIS)

    Kilic, Ilker; Yilmaz, Reyat

    2009-01-01

    A video compression algorithm comparable to the standard techniques at low bit rates is presented in this paper. Overlapping block motion compensation (OBMC) is combined with a discrete wavelet transform, which is followed by Lloyd-Max quantization and a zerotree wavelet (ZTW) structure. The novel feature of this coding scheme is the combination of hierarchical finite state vector quantization (HFSVQ) with the ZTW to encode the quantized wavelet coefficients. It is seen that the proposed video encoder (ZTW-HFSVQ) performs better than MPEG-4 and Zerotree Entropy Coding (ZTE). (author)
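
    Since the Lloyd-Max quantiser recurs throughout these records, a minimal sketch may help fix ideas. It alternates the two necessary optimality conditions (decision boundaries at midpoints of adjacent reconstruction levels; each level at the centroid of its decision region), here trained on empirical samples rather than a closed-form pdf. This is an illustration under our own assumptions, not the implementation used in the paper:

    ```python
    import numpy as np

    def lloyd_max(samples, n_levels, n_iter=100, tol=1e-8):
        """Train a Lloyd-Max scalar quantiser on empirical samples by
        alternating midpoint boundaries and centroid (conditional-mean)
        reconstruction levels."""
        # Initialise levels from quantiles so every cell starts populated.
        levels = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
        boundaries = (levels[:-1] + levels[1:]) / 2.0
        for _ in range(n_iter):
            idx = np.digitize(samples, boundaries)
            new_levels = np.array([samples[idx == k].mean() if np.any(idx == k)
                                   else levels[k] for k in range(n_levels)])
            boundaries = (new_levels[:-1] + new_levels[1:]) / 2.0
            if np.max(np.abs(new_levels - levels)) < tol:
                levels = new_levels
                break
            levels = new_levels
        return boundaries, levels

    # Example: an 8-level (3 bits/sample) quantiser for a unit Gaussian source.
    rng = np.random.default_rng(0)
    x = rng.normal(size=100_000)
    b, c = lloyd_max(x, n_levels=8)
    mse = np.mean((x - c[np.digitize(x, b)]) ** 2)
    print(f"MSE at 3 bits/sample: {mse:.4f}")   # close to the tabulated 0.0345
    ```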

  11. Uniform Title in Theory and in Slovenian and Croatian Cataloguing Practice

    Directory of Open Access Journals (Sweden)

    Marija Petek

    2013-09-01

    Full Text Available ABSTRACT Purpose: The paper investigates the importance and development of the uniform title, which enables collocation in the library catalogue. Research results on the use of uniform titles in two union catalogues, the Slovenian COBISS and the Croatian CROLIST, are also presented. Methodology/approach: Theoretical aspects of the uniform title are treated: first by Panizzi, then in the Paris Principles, which form the basis for Verona's cataloguing code; in the latest International Cataloguing Principles, including the conceptual models Functional Requirements for Bibliographic Records (FRBR) and Functional Requirements for Authority Data (FRAD); and, last but not least, in the international cataloguing code Resource Description and Access (RDA). To find out whether uniform titles are used consistently according to Verona's cataloguing code and to the requirements of the bibliographic formats COMARC and UNIMARC, the frequency of tags 300 and 500 in bibliographic records is explored. Results: The research results indicate that the use of uniform titles in COBISS and CROLIST is not satisfactory and that tags 300 and 500 are often missing in bibliographic records. In online catalogues special attention should be given to the uniform title, as it is considered an efficient linking device in the catalogue and enables collocation. Research limitations: The research is limited to bibliographic records for translations of works of personal authors and of anonymous works; corporate authors are not included. Originality/practical implications: This is the first presentation of the development of the uniform title from its very beginning up to now, and the first research on the uniform title in COBISS.

  12. Content Progressive Coding of Limited Bits/pixel Images

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Forchhammer, Søren

    1999-01-01

    A new lossless context-based method for content-progressive coding of limited bits/pixel images is proposed. Progressive coding is achieved by separating the image into content layers. Digital maps are compressed up to 3 times better than GIF.

  13. The right-hand side of the Jacobi identity: to be naught or not to be ?

    International Nuclear Information System (INIS)

    Kiselev, Arthemy V

    2016-01-01

    The geometric approach to iterated variations of local functionals - e.g., of the (master-)action functional - resulted in an extension of the deformation quantisation technique to the set-up of Poisson models of field theory. It also allowed of a rigorous proof for the main inter-relations between the Batalin-Vilkovisky (BV) Laplacian Δ and variational Schouten bracket [,]. The ad hoc use of these relations had been a known analytic difficulty in the BV-formalism for quantisation of gauge systems; now achieved, the proof does actually not require the assumption of graded-commutativity. Explained in our previous work, geometry's self-regularisation is rendered by Gel'fand's calculus of singular linear integral operators supported on the diagonal. We now illustrate that analytic technique by inspecting the validity mechanism for the graded Jacobi identity which the variational Schouten bracket does satisfy (whence Δ² = 0, i.e., the BV-Laplacian is a differential acting in the algebra of local functionals). By using one tuple of three variational multi-vectors twice, we contrast the new logic of iterated variations - when the right-hand side of Jacobi's identity vanishes altogether - with the old method: interlacing its steps and stops, it could produce some non-zero representative of the trivial class in the top-degree horizontal cohomology. But we then show at once by an elementary counterexample why, in the frames of the old approach that did not rely on Gel'fand's calculus, the BV-Laplacian failed to be a graded derivation of the variational Schouten bracket. (paper)

  14. The right-hand side of the Jacobi identity: to be naught or not to be ?

    Science.gov (United States)

    Kiselev, Arthemy V.

    2016-01-01

    The geometric approach to iterated variations of local functionals - e.g., of the (master-)action functional - resulted in an extension of the deformation quantisation technique to the set-up of Poisson models of field theory. It also allowed of a rigorous proof for the main inter-relations between the Batalin-Vilkovisky (BV) Laplacian Δ and variational Schouten bracket [,]. The ad hoc use of these relations had been a known analytic difficulty in the BV-formalism for quantisation of gauge systems; now achieved, the proof does actually not require the assumption of graded-commutativity. Explained in our previous work, geometry's self-regularisation is rendered by Gel'fand's calculus of singular linear integral operators supported on the diagonal. We now illustrate that analytic technique by inspecting the validity mechanism for the graded Jacobi identity which the variational Schouten bracket does satisfy (whence Δ² = 0, i.e., the BV-Laplacian is a differential acting in the algebra of local functionals). By using one tuple of three variational multi-vectors twice, we contrast the new logic of iterated variations - when the right-hand side of Jacobi's identity vanishes altogether - with the old method: interlacing its steps and stops, it could produce some non-zero representative of the trivial class in the top-degree horizontal cohomology. But we then show at once by an elementary counterexample why, in the frames of the old approach that did not rely on Gel'fand's calculus, the BV-Laplacian failed to be a graded derivation of the variational Schouten bracket.

  15. Application of Shannon Wavelet Entropy and Shannon Wavelet Packet Entropy in Analysis of Power System Transient Signals

    Directory of Open Access Journals (Sweden)

    Jikai Chen

    2016-12-01

    Full Text Available In a power system, the analysis of transient signals is the theoretical basis of fault diagnosis and transient protection theory. Shannon wavelet entropy (SWE) and Shannon wavelet packet entropy (SWPE) are powerful mathematical tools for transient signal analysis. Drawing on recent achievements regarding SWE and SWPE, their applications in feature extraction of transient signals and transient fault recognition are summarized. The impact of wavelet aliasing at adjacent scales of the wavelet decomposition on the feature extraction accuracy of SWE and SWPE is analyzed, and the differences between the two are compared. The analyses are verified through partial discharge (PD) feature extraction for power cable. Finally, new ideas and further research directions are proposed concerning the wavelet entropy mechanism, operation speed, and how to overcome wavelet aliasing.
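
    As a rough illustration of the quantity involved (not the authors' implementation; the wavelet, decomposition level, and test signal are arbitrary choices of ours), Shannon wavelet entropy is commonly computed as the entropy of the relative wavelet energy across decomposition scales, here using the PyWavelets package:

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def shannon_wavelet_entropy(signal, wavelet="db4", level=5):
        """Entropy of the relative wavelet-energy distribution over scales."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        energies = np.array([np.sum(c * c) for c in coeffs])
        p = energies / energies.sum()      # relative energy per scale
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    # A steady 50 Hz tone concentrates energy in few scales (low entropy);
    # a fast transient spreads energy across scales (higher entropy).
    fs = 5000.0
    t = np.arange(0.0, 0.2, 1.0 / fs)
    steady = np.sin(2 * np.pi * 50 * t)
    fault = steady + (t > 0.1) * np.exp(-(t - 0.1) * 200) * np.sin(2 * np.pi * 1000 * t)
    print(shannon_wavelet_entropy(steady), shannon_wavelet_entropy(fault))
    ```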

  16. Evaluation of the uniformity of wide circular reference source and application of correction factors

    International Nuclear Information System (INIS)

    Silva Junior, I.A.; Xavier, M.; Siqueira, P.T.D.; Sordi, G.A.A.; Potiens, M.P.A.

    2017-01-01

    In this work the uniformity of wide circular reference sources is evaluated. This kind of reference source is still widely used in Brazil. In previous works wide rectangular reference sources were analyzed, and the importance of applying correction factors in calibration procedures for radiation monitors was shown. Here the methods used formerly are transposed to circular geometry, evaluating the uniformity of circular reference sources and calculating the associated correction factors. (author)

  17. Code of practice for the use of sealed radioactive sources in borehole logging (1998)

    International Nuclear Information System (INIS)

    1989-12-01

    The purpose of this code is to establish working practices, procedures and protective measures which will aid in keeping doses, arising from the use of borehole logging equipment containing sealed radioactive sources, as low as reasonably achievable, and to ensure that the dose-equivalent limits specified in the National Health and Medical Research Council's radiation protection standards are not exceeded. This code applies to all situations and practices where a sealed radioactive source or sources are used, through wireline logging, for investigating the physical properties of the geological sequence, or any fluids contained in the geological sequence, or the properties of the borehole itself, whether casing, mudcake or borehole fluids. The radiation protection standards specify dose-equivalent limits for two categories: radiation workers and members of the public. 3 refs., tabs., ills

  18. From Shannon to Quantum Information Science

    Indian Academy of Sciences (India)

    dramatically improve the acquisition, transmission, and processing of .... number of dimensions, and has been applied to several walks of life ... The key idea of Shannon is to model communication as .... Let m be the smallest integer not less.

  19. An introduction to the general boundary formulation of quantum field theory

    International Nuclear Information System (INIS)

    Colosi, Daniele

    2015-01-01

    We give a brief introduction to the so-called general boundary formulation (GBF) of quantum theory. This new axiomatic formulation provides a description of the quantum dynamics which is manifestly local and does not rely on a metric background structure for its definition. We present the basic ingredients of the GBF, in particular we review the core axioms that assign algebraic structures to geometric ones, the two quantisation schemes so far developed for the GBF and the probability interpretation which generalizes the standard Born rule. Finally we briefly discuss some of the results obtained studying specific quantum field theories within the GBF. (paper)

  20. Thyristor voltage converter in induction electric drives with microprocessor control

    Energy Technology Data Exchange (ETDEWEB)

    Braslavsky, I.; Zuzev, A.; Shilin, S. [Electric Drive Department, Urals State Technical University, Ekaterinburg (Russian Federation)

    1997-12-31

    The paper presents results on a developed pulse model of the thyristor voltage converter, which is one of the most mathematically complicated units of an electric drive. The model structure and a method for calculating the model parameters are described. The model allows stability to be analysed 'locally' by the methods of linear pulse system theory, taking into consideration the quantisation processes within the converter. This approach yields more accurate results than the approximate methods of non-linear system theory. Logarithmic frequency characteristics are used to analyse the converter's dynamic features, and these are presented as well. (orig.) 4 refs.

  1. Digital fluxgate magnetometer: design notes

    International Nuclear Information System (INIS)

    Belyayev, Serhiy; Ivchenko, Nickolay

    2015-01-01

    We present an approach to understanding the performance of a fully digital fluxgate magnetometer. All elements of the design are important for the performance of the instrument, and the presence of the digital feedback loop introduces certain peculiarities affecting the noise and dynamic performance of the instrument. Ultimately, the quantisation noise of the digital-to-analogue converter is found to dominate the noise of the current design, although noise shaping alleviates its effect to some extent. An example of magnetometer measurements on board a sounding rocket is presented, and ways to further improve the performance of the instrument are discussed. (paper)
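
    The dominance of converter quantisation noise can be estimated from the textbook result that an ideal n-bit converter adds q/sqrt(12) RMS noise (q being one LSB), assumed white over the Nyquist band. A small sketch with entirely hypothetical instrument numbers, not the figures of this design:

    ```python
    import numpy as np

    def quantisation_noise_density(full_scale, n_bits, f_sample):
        """Amplitude spectral density of ideal-converter quantisation noise:
        q/sqrt(12) RMS spread over the Nyquist band f_sample/2."""
        q = full_scale / 2.0**n_bits          # one least significant bit
        return (q / np.sqrt(12.0)) / np.sqrt(f_sample / 2.0)

    # Hypothetical feedback DAC: 16 bits over a +/-65536 nT range, 1 kHz update.
    print(quantisation_noise_density(2 * 65536.0, 16, 1e3), "nT/sqrt(Hz)")
    ```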

  2. Spectral and scattering theory for translation invariant models in quantum field theory

    DEFF Research Database (Denmark)

    Rasmussen, Morten Grud

    This thesis is concerned with a large class of massive translation invariant models in quantum field theory, including the Nelson model and the Fröhlich polaron. The models in the class describe a matter particle, e.g. a nucleon or an electron, linearly coupled to a second quantised massive scalar...... by the physically relevant choices. The translation invariance implies that the Hamiltonian may be decomposed into a direct integral over the space of total momentum where the fixed momentum fiber Hamiltonians are given by , where denotes total momentum and is the Segal field operator. The fiber Hamiltonians...

  3. Cellularity of certain quantum endomorphism algebras

    DEFF Research Database (Denmark)

    Andersen, Henning Haahr; Lehrer, G. I.; Zhang, R.

    Let $\tilde{A} = \mathbb{Z}[q^{\pm\frac{1}{2}}][([d]!)^{-1}]$ and let $\Delta_{\tilde{A}}(d)$ be an integral form of the Weyl module of highest weight $d \in \mathbb{N}$ of the quantised enveloping algebra $U_{\tilde{A}}$ of $\mathfrak{sl}_2$. We exhibit for all positive integers $r$ an explicit cellular structure for $\mathrm{End}$... of endomorphism algebras, and another which relates the multiplicities of indecomposable summands to the dimensions of simple modules for an endomorphism algebra. Our cellularity result then allows us to prove that knowledge of the dimensions of the simple modules of the specialised cellular algebra above...

  4. Digital fluxgate magnetometer: design notes

    Science.gov (United States)

    Belyayev, Serhiy; Ivchenko, Nickolay

    2015-12-01

    We present an approach to understanding the performance of a fully digital fluxgate magnetometer. All elements of the design are important for the performance of the instrument, and the presence of the digital feedback loop introduces certain peculiarities affecting the noise and dynamic performance of the instrument. Ultimately, the quantisation noise of the digital-to-analogue converter is found to dominate the noise of the current design, although noise shaping alleviates its effect to some extent. An example of magnetometer measurements on board a sounding rocket is presented, and ways to further improve the performance of the instrument are discussed.

  5. Pricing early-exercise and discrete barrier options by Shannon wavelet expansions

    NARCIS (Netherlands)

    Maree, S. C.; Ortiz-Gracia, L.; Oosterlee, C. W.

    2017-01-01

    We present a pricing method based on Shannon wavelet expansions for early-exercise and discretely-monitored barrier options under exponential Lévy asset dynamics. Shannon wavelets are smooth and thus approximate well the densities that occur in finance, resulting in exponential convergence.
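
    The approximation at the heart of such Shannon wavelet methods is, at a given scale m, a sinc expansion whose coefficients come from samples (or Fourier-based estimates) of the density. A toy version under our own assumptions, directly sampling a known Gaussian density with an arbitrary scale and truncation range:

    ```python
    import numpy as np
    from scipy.stats import norm

    def sinc_density_approx(f, m, k_min, k_max, x):
        """Approximate a density f at scale m by its Shannon (sinc) expansion
        f(x) ~ sum_k f(k/2^m) * sinc(2^m x - k), per the sampling theorem."""
        ks = np.arange(k_min, k_max + 1)
        c = f(ks / 2.0**m)                 # expansion coefficients
        return np.array([np.sum(c * np.sinc(2.0**m * xi - ks)) for xi in x])

    x = np.linspace(-3, 3, 7)
    approx = sinc_density_approx(norm.pdf, m=3, k_min=-40, k_max=40, x=x)
    print(np.max(np.abs(approx - norm.pdf(x))))   # small truncation error
    ```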

  6. Code Forking, Governance, and Sustainability in Open Source Software

    Directory of Open Access Journals (Sweden)

    Juho Lindman

    2013-01-01

    Full Text Available The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem. On the software level, the right to fork makes planned obsolescence, versioning, vendor lock-in, end-of-support issues, and similar initiatives all but impossible to implement. On the community level, forking impacts both sustainability and governance through the power it grants the community to safeguard against unfavourable actions by corporations or project leaders. On the business-ecosystem level forking can serve as a catalyst for innovation while simultaneously promoting better quality software through natural selection. Thus, forking helps keep open source initiatives relevant and presents opportunities for the development and commercialization of current and abandoned programs.

  7. Current limitation and formation of plasma double layers in a non-uniform magnetic field

    International Nuclear Information System (INIS)

    Plamondon, R.; Teichmann, J.; Torven, S.

    1986-07-01

    Formation of strong double layers has been observed experimentally in a magnetised plasma column maintained by a plasma source. The magnetic field is approximately axially homogeneous, except in a region at the anode where the electric current flows into a magnetic mirror. The double layer has a stationary position only in the region of non-uniform magnetic field or at the aperture separating the source and the plasma column. It is characterized by a negative differential resistance in the current-voltage characteristic of the device. The parameter space where the double layer exists has been studied, as well as the corresponding potential profiles and fluctuation spectra. The electric current and the axial electric field are oppositely directed between the plasma source and a potential minimum which is formed in the region of inhomogeneous magnetic field. Electron reflection by the resulting potential barrier is found to be an important current limitation mechanism. (authors)

  8. From Shannon to Quantum Information Science

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 7; Issue 2. From Shannon to Quantum Information Science - Ideas and Techniques. Rajiah Simon. General Article, Volume 7, Issue 2, February 2002, pp. 66-85.

  9. Assessment of 12 CHF prediction methods, for an axially non-uniform heat flux distribution, with the RELAP5 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Ferrouk, M. [Laboratoire du Genie Physique des Hydrocarbures, University of Boumerdes, Boumerdes 35000 (Algeria)], E-mail: m_ferrouk@yahoo.fr; Aissani, S. [Laboratoire du Genie Physique des Hydrocarbures, University of Boumerdes, Boumerdes 35000 (Algeria); D' Auria, F.; DelNevo, A.; Salah, A. Bousbia [Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione, Universita di Pisa (Italy)

    2008-10-15

    The present article covers the evaluation of the performance of twelve critical heat flux methods/correlations published in the open literature. The study concerns the simulation of an axially non-uniform heat flux distribution with the RELAP5 computer code in a single boiling water reactor channel benchmark problem. The nodalization scheme employed for the considered geometry, as modelled in the RELAP5 code, is described, and a review of critical heat flux models/correlations applicable to non-uniform axial heat profiles is provided. Simulation results obtained with the RELAP5 code were compared with those from our computer program, which is based on three types of prediction methods: local conditions, F-factor, and boiling length average approaches.

  10. Combinatorial neural codes from a mathematical coding theory perspective.

    Science.gov (United States)

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
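
    For readers unfamiliar with the coding-theory baseline used in such comparisons: error correction for a binary code reduces to nearest-codeword decoding, which is maximum-likelihood on a binary symmetric channel. A toy sketch, with a made-up four-word code rather than any receptive field code from the paper:

    ```python
    import numpy as np

    def ml_decode(received, codebook):
        """Nearest-codeword decoding by minimum Hamming distance
        (maximum-likelihood on a binary symmetric channel)."""
        dists = (codebook != received).sum(axis=1)
        return codebook[np.argmin(dists)]

    # Toy binary code with 4 codewords of length 5.
    C = np.array([[0, 0, 0, 0, 0],
                  [1, 1, 1, 0, 0],
                  [0, 0, 1, 1, 1],
                  [1, 1, 0, 1, 1]])
    print(ml_decode(np.array([1, 1, 1, 1, 0]), C))   # -> [1 1 1 0 0]
    ```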

  11. Digital logic circuit design with ALTERA MAX+PLUS II

    International Nuclear Information System (INIS)

    Lee, Seung Ho; Park, Yong Su; Park, Gun Jong; Lee, Ju Heon

    2006-09-01

    This book is composed of five parts. The first part introduces ALTERA MAX+PLUS II and its graphic editor, text editor, compiler, waveform editor, simulator and timing analyzer. The second part gives directions for digital logic circuit design with a training kit. The third part covers the grammar and practice of VHDL in ALTERA MAX+PLUS II, including examples and the history of VHDL. The fourth part shows design examples of digital logic circuits in VHDL with ALTERA MAX+PLUS II, listing designs of an adder and subtractor, code converter, counter, state machine and LCD module. The last part explains design examples of digital logic circuits using the graphic editor in ALTERA MAX+PLUS II.

  12. An iOS implementation of the Shannon switching game

    OpenAIRE

    Macík, Miroslav

    2013-01-01

    The Shannon switching game is a logical graph game for two players, created by the American mathematician Claude Shannon. iOS is an operating system designed for the iPhone cellular phone, the iPod music player and the iPad tablet. The thesis describes existing implementations of the game and also a specific implementation for the iOS operating system created as part of this work. This implementation allows you to play against a virtual opponent and also supports a multiplayer game consisting of two players...

  13. On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk

    2013-01-01

    quality is developed. A linear coding structure designed to gracefully encapsulate layered source coding provides both low complexity of the utilised linear coding while enabling robust erasure correction in the form of fountain coding capabilities. The proposed linear coding structure advocates efficient...

  14. Vallor, Shannon. Technology and the Virtues

    DEFF Research Database (Denmark)

    Friis, Jan Kyrre Berg

    2017-01-01

    Technology and the Virtues is the first analysis of emerging technologies and the role of virtue ethics in an attempt to make us understand the urgency of immediate moral transformation. It is written by phenomenologist and philosopher Shannon Vallor, a William J. Rewak Professor at Santa Clara...

  15. RETRANS - A tool to verify the functional equivalence of automatically generated source code with its specification

    International Nuclear Information System (INIS)

    Miedl, H.

    1998-01-01

    Following the relevant technical standards (e.g. IEC 880) it is necessary to verify each step in the development process of safety-critical software. This also holds for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost and effort, a tool should be used which is developed independently from the development of the code generator. For this purpose ISTec has developed the tool RETRANS, which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)

  16. Image authentication using distributed source coding.

    Science.gov (United States)

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.
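
    A much-simplified sketch of the idea follows; the function names and parameters are ours, and the Slepian-Wolf/LDPC syndrome step that makes the real authentication data compact is omitted. The hash is a set of quantised pseudo-random projections, and a candidate image is checked by recomputing them:

    ```python
    import numpy as np

    def projection_hash(img, n_proj=64, q_step=4.0, seed=0):
        """Quantised pseudo-random projections of an image (hypothetical,
        simplified stand-in for the paper's Slepian-Wolf coded hash)."""
        rng = np.random.default_rng(seed)
        P = rng.standard_normal((n_proj, img.size)) / np.sqrt(img.size)
        return np.round(P @ img.ravel().astype(float) / q_step)

    def is_authentic(candidate, hash_bins, tol=2, **kw):
        # Legitimate processing (mild contrast/brightness changes) perturbs
        # few projections; malicious local tampering perturbs many.
        return np.sum(projection_hash(candidate, **kw) != hash_bins) <= tol
    ```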

  17. The Astrophysics Source Code Library by the numbers

    Science.gov (United States)

    Allen, Alice; Teuben, Peter; Berriman, G. Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Ryan, PW; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Wallin, John; Warmels, Rein

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) was founded in 1999 by Robert Nemiroff and John Wallin. ASCL editors seek both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and add entries for the found codes to the library. Software authors can submit their codes to the ASCL as well. This ensures a comprehensive listing covering a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL is indexed by both NASA’s Astrophysics Data System (ADS) and Web of Science, making software used in research more discoverable. This presentation covers the growth in the ASCL’s number of entries, the number of citations to its entries, and in which journals those citations appear. It also discusses what changes have been made to the ASCL recently, and what its plans are for the future.

  18. Data processing with microcode designed with source coding

    Science.gov (United States)

    McCoy, James A; Morrison, Steven E

    2013-05-07

    Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.

  19. Digital logic circuit design with ALTERA MAX+PLUS II

    International Nuclear Information System (INIS)

    Lee, Seung Ho; Park, Yong Su; Lee, Ju Heon

    2006-03-01

    The contents of this book are: the kinds of integrated circuits, the design process for integrated circuits, an introduction to ALTERA MAX+PLUS II, designing logic circuits with the VHDL of ALTERA MAX+PLUS II, grammar and practice of that VHDL, and designs for an adder, subtractor, parallel binary subtractor, BCD, CLA, code converter, ALU, register, counter, accumulator, state machine, frequency divider, a circuit with a TENMILLION counter, an LCD module, a circuit to control the external RAM in the training kit, and an introduction to HEB-DTK-20K-240/HBE-DTK-IOK.

  20. Initial clinical results for breath-hold CT-based processing of respiratory-gated PET acquisitions

    International Nuclear Information System (INIS)

    Fin, Loic; Daouk, Joel; Morvan, Julie; Esper, Isabelle El; Saidi, Lazhar; Meyer, Marc-Etienne; Bailly, Pascal

    2008-01-01

    Respiratory motion causes uptake in positron emission tomography (PET) images of chest structures to spread out and misregister with the CT images. This misregistration can alter the attenuation correction and thus the quantisation of PET images. In this paper, we present the first clinical results for a respiratory-gated PET (RG-PET) processing method based on a single breath-hold CT (BH-CT) acquisition, which seeks to improve diagnostic accuracy via better PET-to-CT co-registration. We refer to this method as 'CT-based' RG-PET processing. Thirteen lesions were studied. Patients underwent a standard clinical PET protocol and then the CT-based protocol, which consists of a 10-min list-mode RG-PET acquisition followed by a shallow end-expiration BH-CT. The respective performances of the CT-based and clinical PET methods were evaluated by comparing the distances between the lesions' centroids on PET and CT images. SUV_max and volume variations were also investigated. The CT-based method showed significantly lower (p = 0.027) centroid distances (mean change relative to the clinical method = -49%; range = -100% to 0%). This led to higher SUV_max (mean change = +33%; range = -4% to 69%). Lesion volumes were significantly lower (p = 0.022) in CT-based PET volumes (mean change = -39%; range = -74% to -1%) compared with clinical ones. A CT-based RG-PET processing method can be implemented in clinical practice with a small increase in radiation exposure. It improves PET-CT co-registration of lung lesions and should lead to more accurate attenuation correction and thus SUV measurement. (orig.)

  1. Coupling n-level Atoms with l-modes of Quantised Light in a Resonator

    International Nuclear Information System (INIS)

    Castaños, O; Cordero, S; Nahmad-Achar, E; López-Peña, R

    2016-01-01

    We study the quantum phase transitions associated to the Hamiltonian of a system of n-level atoms interacting with l modes of electromagnetic radiation in a resonator. The quantum phase diagrams are determined in analytic form by means of a variational procedure where the test function is constructed in terms of a tensorial product of coherent states describing the matter and the radiation field. We demonstrate that the system can be reduced to a set of Dicke models. (paper)

  2. A highly efficient Shannon wavelet inverse Fourier technique for pricing European options

    NARCIS (Netherlands)

    L. Ortiz Gracia (Luis); C.W. Oosterlee (Cornelis)

    2016-01-01

    In the search for robust, accurate, and highly efficient financial option valuation techniques, we here present the SWIFT method (Shannon wavelets inverse Fourier technique), based on Shannon wavelets. SWIFT comes with control over approximation errors made by means of

  3. WiMax taking wireless to the max

    CERN Document Server

    Pareek, Deepak

    2006-01-01

    With market value expected to reach 5 billion by 2007 and the endorsement of some of the biggest names in telecommunications, World Interoperability for Microwave Access (WiMAX) is poised to change the broadband wireless landscape. But how much of WiMAX's touted potential is merely hype? Now that several pre-WiMAX networks have been deployed, what are the operators saying about QoS and ROI? How and when will device manufacturers integrate WiMAX into their products? What is the business case for using WiMAX rather than any number of other established wireless alternatives?WiMAX: Taking Wireless

  4. Strongly coupled chameleon fields: Possible test with a neutron Lloyd's mirror interferometer

    International Nuclear Information System (INIS)

    Pokotilovski, Yu.N.

    2013-01-01

    We consider a possible neutron Lloyd's mirror interferometer experiment to search for strongly coupled chameleon fields. The chameleon scalar fields were proposed to explain the accelerating expansion of the Universe. The presence of a chameleon field results in a change of a particle's potential energy in the vicinity of a massive body. This interaction causes a phase shift of neutron waves in the interferometer. The sensitivity of the method is estimated.

  5. A Highly Efficient Shannon Wavelet Inverse Fourier Technique for Pricing European Options

    NARCIS (Netherlands)

    Ortiz-Gracia, Luis; Oosterlee, C.W.

    2016-01-01

    In the search for robust, accurate, and highly efficient financial option valuation techniques, we here present the SWIFT method (Shannon wavelets inverse Fourier technique), based on Shannon wavelets. SWIFT comes with control over approximation errors made by means of sharp quantitative error

  6. Present state of the SOURCES computer code

    International Nuclear Information System (INIS)

    Shores, Erik F.

    2002-01-01

    In various stages of development for over two decades, the SOURCES computer code continues to calculate neutron production rates and spectra from four types of problems: homogeneous media, two-region interfaces, three-region interfaces and that of a monoenergetic alpha particle beam incident on a slab of target material. Graduate work at the University of Missouri - Rolla, in addition to user feedback from a tutorial course, provided the impetus for a variety of code improvements. Recently upgraded to version 4B, initial modifications to SOURCES focused on updates to the 'tape5' decay data library. Shortly thereafter, efforts focused on development of a graphical user interface for the code. This paper documents the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) and describes additional library modifications in more detail. Minor improvements and planned enhancements are discussed.

  7. Anisotropy in electron-atom collisions

    International Nuclear Information System (INIS)

    Linden van den Heuvel, H.B. van.

    1982-01-01

    Most of the work described in this thesis deals with studies using coincidence experiments, particularly for investigating the electron impact excitation of the 2¹P and 3¹D states in helium. A peculiarity is that in the 3¹D studies the directly emitted 3¹D → 2¹P photons are not observed, but rather the 2¹P → 1¹S photons resulting from the 3¹D → 2¹P → 1¹S cascade. Another interesting point is the choice of the quantisation axis. The author demonstrates that it is of great advantage to take the quantisation axis perpendicular to the scattering plane rather than in the direction of the incident beam, as was done (on historical grounds) in previously reported electron-photon coincidence experiments. Contrary to the incident beam direction, the axis perpendicular to the scattering plane really represents an axis of symmetry in the coincidence experiment. In Chapter II the so-called 'parity unfavoured' excitation of the (2p²)³P state of helium by electrons is studied. In Chapter III the anisotropy parameters for the electron impact excitation of the 2¹P state of helium in the energy range from 26.6 to 40 eV and in the angular range from 30° to 110° are determined. Chapter IV contains a description of a scattered-electron cascaded-photon coincidence experiment on the electron impact excitation of helium's 3¹D state. The measurement of complex scattering amplitudes for electron impact excitation of the 3¹D and 3¹P states of helium is discussed in Chapter V. (Auth./C.F.)

  8. Second-sound studies of coflow and counterflow of superfluid 4He in channels

    International Nuclear Information System (INIS)

    Varga, Emil; Skrbek, L.; Babuin, Simone

    2015-01-01

    We report a comprehensive study of turbulent superfluid ⁴He flow through a channel of square cross section. We study for the first time two distinct flow configurations with the same apparatus: coflow (normal and superfluid components move in the same direction) and counterflow (normal and superfluid components move in opposite directions). We also realise a variation of counterflow with the same relative velocity, but where the superfluid component moves while there is no net flow of the normal component through the channel, i.e., pure superflow. We use the second-sound attenuation technique to measure the density of quantised vortex lines in the temperature range 1.2 K ≲ T ≲ T_λ ≈ 2.18 K and for flow velocities from about 1 mm/s up to almost 1 m/s in fully developed turbulence. We find that both the steady state and the temporal decay of the turbulence differ significantly in the three flow configurations, yielding an interesting insight into two-fluid hydrodynamics. In both pure superflow and counterflow, the same scaling of vortex line density with counterflow velocity is observed, L ∝ V_cf², with a pronounced temperature dependence; in coflow, instead, the vortex line density scales with velocity as L ∝ V^(3/2) and is temperature independent; we provide theoretical explanations for these observations. Further, we develop a new promising technique using different second-sound resonant modes to probe the spatial distribution of quantised vortices in the direction perpendicular to the flow. Preliminary measurements indicate that coflow is less homogeneous than counterflow/superflow, with a denser concentration of vortices between the centre of the channel and its walls.

  9. Quantum circuit behaviour

    International Nuclear Information System (INIS)

    Poulton, D.

    1989-09-01

    Single electron tunnelling in multiply connected weak link systems is considered. Using a second quantised approach the tunnel current, in both normal and superconducting systems, using perturbation theory, is derived. The tunnel currents are determined as a function of an Aharanov-Bohm phase (acquired by the electrons). Using these results, the multiply connected system is then discussed when coupled to a resonant LC circuit. The resulting dynamics of this composite system are then determined. In the superconducting case the results are compared and contrasted with flux mode behaviour seen in large superconducting weak link rings. Systems in which the predicted dynamics may be seen are also discussed. In analogy to the electron tunnelling analysis, the tunnelling of magnetic flux quanta through the weak link is also considered. Here, the voltage across the weak link, due to flux tunnelling, is determined as a function of an externally applied current. This is done for both singly and multiply connected flux systems. The results are compared and contrasted with charge mode behaviour seen in superconducting weak link systems. Finally, the behaviour of simple quantum fluids is considered when subject to an external rotation. Using a microscopic analysis it is found that the microscopic quantum behaviour of the particles is manifest on a macroscopic level. Results are derived for bosonic, fermionic and BCS pair-type systems. The connection between flux quantisation in electromagnetic systems is also made. Using these results, the dynamics of such a quantum fluid is considered when coupled to a rotating torsional oscillator. The results are compared with those found in SQUID devices. A model is also presented which discusses the possible excited state dynamics of such a fluid. (author)

  10. Schroedinger’s Code: A Preliminary Study on Research Source Code Availability and Link Persistence in Astrophysics

    Science.gov (United States)

    Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley

    2018-05-01

    We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and, if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal's 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records according to whether source code was available, and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best 90% of them were available over our testing period.

  11. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    Full Text Available In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.

  12. Rogue trading at Lloyds Bank International, 1974: Operational risk in volatile markets

    OpenAIRE

    Schenk, C

    2017-01-01

    Rogue trading has been a persistent feature of international financial markets over the past thirty years, but there is remarkably little historical treatment of this phenomenon. To begin to fill this gap, evidence from company and official archives is used to expose the anatomy of a rogue trading scandal at Lloyds Bank International in 1974. The rush to internationalize, the conflict between rules and norms, and the failure of internal and external checks all contributed to the largest singl...

  13. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

    Full Text Available Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable length codes for joint source-channel decoding. In this paper, we improve the performance of residual redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLC's is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.

  14. Bit-wise arithmetic coding for data compression

    Science.gov (United States)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
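
    The overhead of treating codeword bits as independent is easy to estimate: the achievable rate becomes the sum of the binary entropies of the bit planes, which is never below the true symbol entropy. A small sketch under our own assumptions (a rounded Gaussian source and natural binary labelling, not the article's exact setup):

    ```python
    import numpy as np

    def h2(p):
        """Binary entropy in bits, safe at p = 0 or 1."""
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    def bitwise_rate(symbols, n_bits):
        """Ideal rate of bit-wise arithmetic coding: each bit plane of the
        fixed-length codeword is compressed as an independent binary source."""
        bits = (symbols[:, None] >> np.arange(n_bits)) & 1
        return h2(bits.mean(axis=0)).sum()

    # Quantised Gaussian source mapped to 4-bit fixed-length codewords.
    rng = np.random.default_rng(1)
    q = np.clip(np.round(rng.normal(size=100_000) * 3) + 8, 0, 15).astype(int)
    _, counts = np.unique(q, return_counts=True)
    p = counts / q.size
    print(bitwise_rate(q, 4), "vs symbol entropy", -(p * np.log2(p)).sum())
    ```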

  15. Matching with transfer matrices

    International Nuclear Information System (INIS)

    Perez-Alvarez, R.; Velasco, V.R.; Garcia-Moliner, F.; Rodriguez-Coppola, H.

    1987-10-01

    An ABC configuration - which corresponds to various systems of physical interest, such as a barrier or a quantum well - is studied by combining a surface Green function matching analysis of the entire system with a description of the intermediate (B) region in terms of a transfer matrix in the sense of Mora et al. (1985). This hybrid approach proves very useful when it is very difficult to construct the corresponding Green function G_B. An application is made to the calculation of quantised subband levels in a parabolic quantum well. Further possibilities of extension of this approach are pointed out. (author). 27 refs, 1 tab
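
    As a quick numerical cross-check of the kind of result quoted (quantised subband levels in a parabolic well), a direct finite-difference diagonalisation reproduces the harmonic-oscillator ladder. This is our own illustrative stand-in, not the surface Green function matching / transfer matrix method of the paper:

    ```python
    import numpy as np

    # Subband levels of a parabolic well V(x) = x^2/2 (hbar = m = omega = 1)
    # by finite differences; the exact levels are n + 1/2.
    N, L = 2000, 20.0
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]
    main = 1.0 / dx**2 + 0.5 * x**2          # kinetic diagonal + potential
    off = np.full(N - 1, -0.5 / dx**2)       # kinetic off-diagonal
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    print(np.linalg.eigvalsh(H)[:4])         # ≈ [0.5, 1.5, 2.5, 3.5]
    ```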

  16. Defects in higher-dimensional quantum field theory. Relations to AdS/CFT-correspondence and Kondo lattices

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, R.

    2007-03-15

    The present work addresses defects and boundaries in quantum field theory, with a view to the application to the AdS/CFT correspondence. We examine interactions of fermions with spins localised on these boundaries. To this end, an algebraic method is emphasised which adds reflection and transmission terms to the canonical quantisation prescription. This method had already been applied to bosons in two space-time dimensions. We show the possibilities of such reflection-transmission algebras in two, three, and four dimensions. We compare with models of solid state physics as well as with the conformal field theory approach to the Kondo effect. Furthermore, we discuss ansätze for extensions to lattice structures. (orig.)

  17. Induced gravity in quantum theory in a curved space

    International Nuclear Information System (INIS)

    Etim, E.

    1983-01-01

    The reason for interest in the unorthodox view of first-order (in the scalar curvature R(x)) gravity as a matter-induced quantum effect is really to find an argument not to quantise it. According to this view, quantum gravity should be constructed with an action which is at least quadratic in the scalar curvature R(x). Such a theory would not contain a dimensional parameter, like Newton's constant, and would probably be renormalisable. This lecture is intended to acquaint the non-expert with the phenomenon of the induction of the scalar curvature term in the matter Lagrangian in a curved space, in both relativistic and non-relativistic quantum theories.

  18. What if? Exploring the multiverse through Euclidean wormholes

    Science.gov (United States)

    Bouhmadi-López, Mariam; Krämer, Manuel; Morais, João; Robles-Pérez, Salvador

    2017-10-01

    We present Euclidean wormhole solutions describing possible bridges within the multiverse. The study is carried out in the framework of third quantisation. The matter content is modelled through a scalar field which supports the existence of a whole collection of universes. The instanton solutions describe Euclidean solutions that connect baby universes with asymptotically de Sitter universes. We compute the tunnelling probability of these processes. Considering the current bounds on the energy scale of inflation and assuming that all the baby universes are nucleated with the same probability, we draw some conclusions about which universes are more likely to tunnel and therefore undergo a standard inflationary era.

  19. What if? Exploring the multiverse through Euclidean wormholes

    International Nuclear Information System (INIS)

    Bouhmadi-Lopez, Mariam; Kraemer, Manuel; Morais, Joao; Robles-Perez, Salvador

    2017-01-01

    We present Euclidean wormhole solutions describing possible bridges within the multiverse. The study is carried out in the framework of third quantisation. The matter content is modelled through a scalar field which supports the existence of a whole collection of universes. The instanton solutions describe Euclidean solutions that connect baby universes with asymptotically de Sitter universes. We compute the tunnelling probability of these processes. Considering the current bounds on the energy scale of inflation and assuming that all the baby universes are nucleated with the same probability, we draw some conclusions about which universes are more likely to tunnel and therefore undergo a standard inflationary era. (orig.)

  20. What if? Exploring the multiverse through Euclidean wormholes

    Energy Technology Data Exchange (ETDEWEB)

    Bouhmadi-Lopez, Mariam [University of the Basque Country UPV/EHU, Department of Theoretical Physics, Bilbao (Spain); Ikerbasque, Basque Foundation for Science, Bilbao (Spain); Kraemer, Manuel [University of Szczecin, Institute of Physics, Szczecin (Poland); Morais, Joao [University of the Basque Country UPV/EHU, Department of Theoretical Physics, Bilbao (Spain); Robles-Perez, Salvador [Instituto de Fisica Fundamental, CSIC, Madrid (Spain); Estacion Ecologica de Biocosmologia, Medellin (Spain)

    2017-10-15

    We present Euclidean wormhole solutions describing possible bridges within the multiverse. The study is carried out in the framework of third quantisation. The matter content is modelled through a scalar field which supports the existence of a whole collection of universes. The instanton solutions describe Euclidean solutions that connect baby universes with asymptotically de Sitter universes. We compute the tunnelling probability of these processes. Considering the current bounds on the energy scale of inflation and assuming that all the baby universes are nucleated with the same probability, we draw some conclusions about which universes are more likely to tunnel and therefore undergo a standard inflationary era. (orig.)

  1. Verification test calculations for the Source Term Code Package

    International Nuclear Information System (INIS)

    Denning, R.S.; Wooton, R.O.; Alexander, C.A.; Curtis, L.A.; Cybulskis, P.; Gieseke, J.A.; Jordan, H.; Lee, K.W.; Nicolosi, S.L.

    1986-07-01

    The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure, in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled. Hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it doesn't satisfy basic conservation laws, rather than in showing that the analysis accurately represents reality. Hand calculations are an important element of verification but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs

  2. Shannon entropy: A study of confined hydrogenic-like atoms

    Science.gov (United States)

    Nascimento, Wallas S.; Prudente, Frederico V.

    2018-01-01

    The Shannon entropy in the atomic, molecular and chemical physics context is presented by using as test cases the hydrogenic-like atoms H_c, He_c^+ and Li_c^2+ (the subscript c denoting confinement) confined by an impenetrable spherical box. Novel expressions for the entropic uncertainty relation and the Shannon entropies S_r and S_p are proposed to ensure that they are physically dimensionless. The electronic ground state energy and the quantities S_r, S_p and S_t are calculated for the hydrogenic-like atoms at different confinement radii by using a variational method. The global behavior of these quantities and different conjectures are analyzed. The results are compared, when available, with those previously published.
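
    For orientation, the position-space entropy S_r = -∫ ρ(r) ln ρ(r) d³r used in such studies can be checked numerically in the unconfined limit: for the free hydrogen ground state in atomic units, ρ(r) = e^(-2r)/π and S_r = 3 + ln π exactly. A minimal sketch of that check (not the authors' variational calculation for the confined atoms):

      import numpy as np

      # Free-hydrogen ground-state density in atomic units: rho(r) = exp(-2r)/pi.
      r = np.linspace(1e-8, 40.0, 400_000)
      rho = np.exp(-2.0 * r) / np.pi

      # S_r = -int rho*ln(rho) d^3r, reduced to a radial integral with weight 4*pi*r^2.
      integrand = -4.0 * np.pi * r**2 * rho * np.log(rho)
      dr = r[1] - r[0]
      S_r = integrand.sum() * dr

      print(S_r, 3.0 + np.log(np.pi))  # both ~ 4.1447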

  3. The local limit of the uniform spanning tree on dense graphs

    Czech Academy of Sciences Publication Activity Database

    Hladký, Jan; Nachmias, A.; Tran, Tuan

    First Online: 10 January 2018. ISSN 0022-4715. R&D Projects: GA ČR GJ16-07822Y. Keywords: uniform spanning tree; graph limits; Benjamini-Schramm convergence; graphon; branching process. Subject RIV: BA - General Mathematics. Impact factor: 1.349 (2016).

  4. DEVELOPMENT OF A VO2 MAX PREDICTION EQUATION AND EVALUATION OF HR MAX EQUATIONS (A PRELIMINARY STUDY OF MALE WORKERS)

    Directory of Open Access Journals (Sweden)

    Purnawan Adi Wicaksono

    2012-10-01

    ...multiple linear regression yielded the following equation: VO2 Max = 3.996 - 0.046 x age. The evaluation of HR Max equations showed that the equation that better predicts the HR Max of Indonesian male industrial workers is that of Tanaka et al. (2001). The study also set out to develop an HR Max equation for Indonesian male industrial workers; multiple linear regression yielded: HR Max = 202.71 - 0.541 x age. The VO2 Max and HR Max values studied can serve as reference criteria for a person's maximum capability, and hence as a basis for designing work systems so that the workload a worker receives does not exceed his maximum capacity. Research developing VO2 Max prediction equations and evaluating HR Max equations in Indonesia is still limited, so such work is considered necessary given its great benefit to industry. In education, this study can serve as a preliminary study to be extended by further research. Keywords: VO2 max, aerobic capacity, maximum physical capacity, prediction model, HR Max evaluation. Abstract: The maximum physical capacity of a person is represented by the maximum oxygen consumption (VO2 Max) and the maximum pulse rate (HR Max), which together indicate the limits of a person's physical ability to do a job. The current study aims to find VO2 Max values for Indonesian male workers by developing a VO2 Max prediction equation, approximated by a linear relationship with pulse rate (heart rate) as in Astrand (2003), height (Chatterjee et al., 2006), weight (deceivingly et al., 2008) and age (Magrani et al., 2009), and to evaluate which HR Max equation can be applied to approximate the maximum pulse rate of Indonesian workers. Respondents in the study were 12 male industrial

  5. Measuring Modularity in Open Source Code Bases

    Directory of Open Access Journals (Sweden)

    Roberto Milev

    2009-03-01

    Full Text Available Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.

  6. Optimization of Coding of AR Sources for Transmission Across Channels with Loss

    DEFF Research Database (Denmark)

    Arildsen, Thomas

    Source coding concerns the representation of information in a source signal using as few bits as possible. In the case of lossy source coding, it is the encoding of a source signal using the fewest possible bits at a given distortion, or at the lowest possible distortion given a specified bit rate. ... Channel coding is usually applied in combination with source coding to ensure reliable transmission of the (source coded) information at the maximal rate across a channel given the properties of this channel. In this thesis, we consider the coding of auto-regressive (AR) sources, which are sources that can ... compared to the case where the encoder is unaware of channel loss. We finally provide an extensive overview of cross-layer communication issues which are important to consider due to the fact that the proposed algorithm interacts with the source coding and exploits channel-related information typically ...

  7. Monitoring of zebra mussels in the Shannon-Boyle navigation, other

    OpenAIRE

    Minchin, D.; Lucy, F.; Sullivan, M.

    2002-01-01

    The zebra mussel (Dreissena polymorpha) population has been closely monitored in Ireland following its discovery in 1997. The species has spread from lower Lough Derg, where it was first introduced, to most of the navigable areas of the Shannon and other interconnected navigable waters. This study took place in the summers of 2000 and 2001 and investigated the relative abundance and biomass of zebra mussels found in the main navigations of the Shannon and elsewhere in rivers, canals and lakes...

  8. Uniform physical theory of diffraction equivalent edge currents for implementation in general computer codes

    DEFF Research Database (Denmark)

    Johansen, Peter Meincke

    1996-01-01

    New uniform closed-form expressions for physical theory of diffraction equivalent edge currents are derived for truncated incremental wedge strips. In contrast to previously reported expressions, the new expressions are well-behaved for all directions of incidence and observation and take a finite value for zero strip length. Consequently, the new equivalent edge currents are, to the knowledge of the author, the first that are well-suited for implementation in general computer codes.

  9. Do School Uniforms Fit?

    Science.gov (United States)

    White, Kerry A.

    2000-01-01

    In 1994, Long Beach (California) Unified School District began requiring uniforms in all elementary and middle schools. Now, half of all urban school systems and many suburban schools have uniform policies. Research on uniforms' effectiveness is mixed. Tightened dress codes may be just as effective and less litigious. (MLH)

  10. A method for the preparation of very thin and uniform α-radioactive sources

    International Nuclear Information System (INIS)

    Becerril-Vilchis, A.; Cortes, A.; Dayras, F.; Sanoit, J. de

    1996-01-01

    The method is based on the electrodeposition of α-emitters as hydroxides on stainless steel cathodes rotating at constant angular velocity. A new electrochemical cell, which has been described elsewhere, was designed. This design takes into account the hydrodynamic behaviour of the rotating disc electrode. Electrochemical and physicochemical studies allowed us to predict the best conditions for each α-emitter, in order to obtain very thin and uniform deposits with a minimal current density value. These included determining the dependence of the deposition yield and uniformity on the cathode rotation speed, solution pH, deposition current density and deposition time. Controlling the optimum values of hydrodynamic, electrochemical and physicochemical process conditions then gives reproducible deposition uniformity and yields. The thickness and uniformity of the α-sources were characterised by high resolution alpha spectroscopy with PIPS detectors. These sources are especially suitable for spectroscopic, α-particle emission probability and isotopic ratio studies. Using this method values ≤10 keV for the energy resolution and 100 to 1 for the peak to valley ratio have been obtained. (orig.)

  11. A scanning point source for quality control of FOV uniformity in GC-PET imaging

    International Nuclear Information System (INIS)

    Bergmann, H.; Minear, G.; Dobrozemsky, G.; Nowotny, R.; Koenig, B.

    2002-01-01

    Aim: PET imaging with coincidence cameras (GC-PET) requires additional quality control procedures to check the function of coincidence circuitry and detector zoning. In particular, the uniformity response over the field of view needs special attention since it is known that coincidence counting mode may suffer from non-uniformity effects not present in single photon mode. Materials and methods: An inexpensive linear scanner with a stepper motor and a digital interface to a PC with software allowing versatile scanning modes was developed. The scanner is used with a source holder containing a Sodium-22 point source. While moving the source along the axis of rotation of the GC-PET system, a tomographic acquisition takes place. The scan covers the full axial field of view of the 2-D or 3-D scatter frame. Depending on the acquisition software, point source scanning takes place continuously while only one projection is acquired or is done in step-and-shoot mode with the number of positions equal to the number of gantry steps. Special software was developed to analyse the resulting list mode acquisition files and to produce an image of the recorded coincidence events of each head. Results: Uniformity images of coincidence events were obtained after further correction for systematic sensitivity variations caused by acquisition geometry. The resulting images are analysed visually and by calculating NEMA uniformity indices as for a planar flood field. The method has been applied successfully to two different brands of GC-PET capable gamma cameras. Conclusion: Uniformity of GC-PET can be tested quickly and accurately with a routine QC procedure, using a Sodium-22 scanning point source and an inexpensive mechanical scanning device. The method can be used for both 2-D and 3-D acquisition modes and fills an important gap in the quality control system for GC-PET
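
    For reference, the NEMA integral uniformity index mentioned above reduces to 100·(max - min)/(max + min) over the useful field of view (after the prescribed smoothing, omitted here). A toy computation on a synthetic flood image; the array and its statistics are illustrative only:

      import numpy as np

      def integral_uniformity(flood):
          # NEMA-style integral uniformity in percent over the field of view.
          pmax, pmin = flood.max(), flood.min()
          return 100.0 * (pmax - pmin) / (pmax + pmin)

      rng = np.random.default_rng(0)
      flood = rng.poisson(lam=10_000, size=(64, 64)).astype(float)  # synthetic flood
      print(f"integral uniformity: {integral_uniformity(flood):.2f} %")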

  12. 2. From Shannon To Quantum Information Science

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 7, Issue 5, May 2002, pp. 16-33. From Shannon to Quantum Information Science - Mixed States. Rajiah Simon. General Article. Keywords: mixed states; entanglement witnesses; partial transpose; quantum computers; von Neumann entropy.

  13. Lloyd's formula in multiple-scattering calculations with finite temperature

    International Nuclear Information System (INIS)

    Zeller, Rudolf

    2005-01-01

    Lloyd's formula is an elegant tool to calculate the number of states directly from the imaginary part of the logarithm of the Korringa-Kohn-Rostoker (KKR) determinant. It is shown how this formula can be used at finite electronic temperatures and how the difficult problem of determining the physically significant, correct phase of the complex logarithm can be circumvented by working with the single-valued real part of the logarithm. The approach is based on contour integrations in the complex energy plane and exploits the analytical properties of the KKR Green function and the Fermi-Dirac function. It leads to rather accurate results, which is illustrated by a local-density functional calculation of the temperature dependence of the intrinsic Fermi level in zinc-blende GaN.

  14. A highly efficient pricing method for European-style options based on Shannon wavelets

    NARCIS (Netherlands)

    L. Ortiz Gracia (Luis); C.W. Oosterlee (Cornelis)

    2017-01-01

    In the search for robust, accurate and highly efficient financial option valuation techniques, we present here the SWIFT method (Shannon Wavelets Inverse Fourier Technique), based on Shannon wavelets. SWIFT comes with control over approximation errors made by means of sharp quantitative...

  15. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  16. Towards Quantifying a Wider Reality: Shannon Exonerata

    Directory of Open Access Journals (Sweden)

    Robert E. Ulanowicz

    2011-10-01

    Full Text Available In 1872 Ludwig von Boltzmann derived a statistical formula to represent the entropy (an apophasis) of a highly simplistic system. In 1948 Claude Shannon independently formulated the same expression to capture the positivist essence of information. Such contradictory thrusts engendered decades of ambiguity concerning exactly what is conveyed by the expression. Resolution of widespread confusion is possible by invoking the third law of thermodynamics, which requires that entropy be treated in a relativistic fashion. Doing so parses the Boltzmann expression into separate terms that segregate apophatic entropy from positivist information. Possibly more importantly, the decomposition itself portrays a dialectic-like agonism between constraint and disorder that may provide a more appropriate description of the behavior of living systems than is possible using conventional dynamics. By quantifying the apophatic side of evolution, the Shannon approach to information achieves what no other treatment of the subject affords: It opens the window on a more encompassing perception of reality.
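
    The decomposition sketched in this abstract can be made concrete for a joint distribution p(i,j): the Shannon formula H = -Σ p_ij log p_ij splits identically into an average mutual information I (the constraint, or "positivist", part) and a residual conditional indeterminacy Φ (the apophatic part), H = I + Φ. A numerical check of the identity, with an arbitrary joint distribution assumed for illustration:

      import numpy as np

      p = np.array([[0.20, 0.05, 0.05],
                    [0.05, 0.30, 0.05],
                    [0.05, 0.05, 0.20]])   # joint distribution p(i, j), sums to 1

      pi = p.sum(axis=1, keepdims=True)    # row marginals
      pj = p.sum(axis=0, keepdims=True)    # column marginals

      H   = -(p * np.log2(p)).sum()                 # total Shannon entropy
      I   =  (p * np.log2(p / (pi * pj))).sum()     # mutual "constraint"
      Phi = -(p * np.log2(p**2 / (pi * pj))).sum()  # residual "disorder"

      print(H, I + Phi)   # the identity H = I + Phi holds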

  17. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function.
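
    For orientation, the classical Blahut-Arimoto iteration that such algorithms extend alternates between the optimal test channel and the output marginal at a fixed Lagrange multiplier beta. A minimal sketch for the ordinary rate-distortion function of a binary source under Hamming distortion (not the action-dependent variant of the paper):

      import numpy as np

      def blahut_arimoto(p_x, d, beta, iters=500):
          # Classical Blahut-Arimoto for R(D); returns (rate in bits, distortion).
          q_y = np.full(d.shape[1], 1.0 / d.shape[1])   # output marginal, start uniform
          for _ in range(iters):
              q_y_x = q_y * np.exp(-beta * d)           # unnormalised test channel
              q_y_x /= q_y_x.sum(axis=1, keepdims=True)
              q_y = p_x @ q_y_x                         # re-estimate output marginal
          D = (p_x[:, None] * q_y_x * d).sum()
          R = (p_x[:, None] * q_y_x * np.log2(q_y_x / q_y)).sum()
          return R, D

      p_x = np.array([0.5, 0.5])        # binary symmetric source
      d = 1.0 - np.eye(2)               # Hamming distortion
      R, D = blahut_arimoto(p_x, d, beta=2.0)
      h = -D * np.log2(D) - (1 - D) * np.log2(1 - D)
      print(R, D, 1.0 - h)              # R matches the closed form 1 - h(D)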

  18. Monte Carlo modelling of impurity ion transport for a limiter source/sink

    International Nuclear Information System (INIS)

    Stangeby, P.C.; Farrell, C.; Hoskins, S.; Wood, L.

    1988-01-01

    In relating the impurity influx Φ_I(0) (atoms per second) into a plasma from the edge to the central impurity ion density n_I(0) (ions·m^-3), it is necessary to know the value of τ_I^SOL, the average dwell time of impurity ions in the scrape-off layer. It is usually assumed that τ_I^SOL = L_c/c_s, the hydrogenic dwell time, where L_c is the limiter connection length and c_s is the hydrogenic ion acoustic speed. Monte Carlo ion transport results are reported here which show that, for a wall (uniform) influx, τ_I^SOL is longer than L_c/c_s, while for a limiter influx it is shorter. Thus for a limiter influx n_I(0) is predicted to be smaller than the reference value. Impurities released from the limiter form ever larger 'clouds' of successively higher ionization stages. These are reproduced by the Monte Carlo code, as are the cloud shapes for a localized impurity injection far from the limiter. (author). 23 refs, 18 figs, 6 tabs

  19. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    Science.gov (United States)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model the performance of wave energy converters in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be

  20. Towards an information extraction and knowledge formation framework based on Shannon entropy

    Directory of Open Access Journals (Sweden)

    Iliescu Dragoș

    2017-01-01

    Full Text Available The subject of information quantity is approached in this paper, taking the specific domain of nonconforming-product management as the information source. The work is a case study: raw data were gathered from a heavy industrial works company, and information extraction and knowledge formation are considered. The method used to estimate information quantity is based on the Shannon entropy formula. The information and entropy spectra are decomposed and analysed for the extraction of specific information and the formation of knowledge. The results of the entropy analysis point out the information that the organisation involved needs to acquire, presented as a specific type of knowledge.
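
    As a concrete illustration of the estimation step, the Shannon entropy of the nonconformity categories can be computed from observed frequencies. The category labels and counts below are invented for the sketch, not the company data of the case study:

      import math
      from collections import Counter

      # Hypothetical nonconformity records (labels are illustrative only).
      records = ["crack", "porosity", "crack", "dimension", "crack",
                 "porosity", "inclusion", "crack", "dimension", "crack"]

      counts = Counter(records)
      n = len(records)

      # Shannon entropy H = -sum p_i log2 p_i, in bits per record.
      H = -sum((c / n) * math.log2(c / n) for c in counts.values())
      print(f"H = {H:.3f} bits (maximum for {len(counts)} categories: "
            f"{math.log2(len(counts)):.3f} bits)")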

  1. Analysis of minor actinide transmutation in a sodium fast reactor with a uniform load pattern by the MCNPX-CINDER code

    International Nuclear Information System (INIS)

    Ochoa Valero, R.; Garcia-Herranz, N.; Aragones, J. M.

    2010-01-01

    The aim of this study is to evaluate minor actinide transmutation in sodium fast reactors (SFR) assuming a uniform load pattern. The isotopic evolution of the actinides along burnup is determined, together with the evolution of the reactivity and the reactivity coefficients. For this purpose, the MCNPX neutron transport code coupled with the inventory code CINDER90 is used.

  2. Tomographical properties of uniformly redundant arrays

    International Nuclear Information System (INIS)

    Cannon, T.M.; Fenimore, E.E.

    1978-01-01

    Recent work in coded aperture imaging has shown that the uniformly redundant array (URA) can image distant planar radioactive sources with no artifacts. The performance of two URA apertures when used in a close-up tomographic imaging system is investigated. It is shown that a URA based on m sequences is superior to one based on quadratic residues. The m sequence array not only produces less obnoxious artifacts in tomographic imaging, but is also more resilient to some described detrimental effects of close-up imaging. It is shown that in spite of these close-up effects, tomographic depth resolution increases as the source is moved closer to the detector
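
    The artifact-free property rests on the aperture having a delta-like periodic correlation with its decoding array. A one-dimensional toy version based on quadratic residues shows the principle (a sketch only; the paper's comparison concerns 2D arrays built from m-sequences and quadratic residues):

      import numpy as np

      p = 11                                     # prime length with p = 3 (mod 4)
      qr = {(i * i) % p for i in range(1, p)}    # quadratic residues mod p

      a = np.array([1] + [1 if i in qr else 0 for i in range(1, p)])  # aperture
      g = 2 * a - 1                                                   # decoding array

      # Periodic cross-correlation: single peak, perfectly flat sidelobes.
      corr = [int(np.dot(a, np.roll(g, -k))) for k in range(p)]
      print(corr)   # peak (p + 1) / 2 at lag 0, zeros elsewhere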

  3. Time limit and time at VO2max during a continuous and an intermittent run.

    Science.gov (United States)

    Demarie, S; Koralsztein, J P; Billat, V

    2000-06-01

    The purpose of this study was to verify, by track field tests, whether sub-elite runners (n = 15) could (i) reach their VO2max while running at v50%delta, i.e. midway between the speed associated with the lactate threshold (vLAT) and that associated with maximal aerobic power (vVO2max), and (ii) whether an intermittent exercise provokes a maximal and/or supramaximal oxygen consumption for longer than a continuous one. Within three days, subjects underwent a multistage incremental test during which their vVO2max and vLAT were determined; they then performed two additional testing sessions, where continuous and intermittent running exercises at v50%delta were performed up to exhaustion. Subjects' gas exchange and heart rate were continuously recorded by means of a telemetric apparatus. Blood samples were taken from the fingertip and analysed for blood lactate concentration. In both the continuous and the intermittent tests peak VO2 exceeded the VO2max values determined during the incremental test. However, in the intermittent exercise, peak VO2, time to exhaustion and time at VO2max reached significantly higher values, while blood lactate accumulation showed significantly lower values, than in the continuous one. The v50%delta is sufficient to stimulate VO2max in both intermittent and continuous running. The intermittent exercise proves better than the continuous one at increasing maximal aerobic power, allowing a longer time at VO2max and a higher peak VO2 with lower lactate accumulation.

  4. BER EVALUATION OF LDPC CODES WITH GMSK IN NAKAGAMI FADING CHANNEL

    Directory of Open Access Journals (Sweden)

    Surbhi Sharma

    2010-06-01

    Full Text Available LDPC codes (Low Density Parity Check Codes) have already proved their efficacy, showing performance near the Shannon limit. Channel coding schemes are spectrally inefficient when an unfiltered binary data stream is used to modulate an RF carrier, which produces an RF spectrum of considerable bandwidth. Techniques have been developed to improve this spectral efficiency and to ease detection. GMSK, or Gaussian-filtered Minimum Shift Keying, uses a Gaussian filter of an appropriate bandwidth so as to make the system spectrally efficient. A Nakagami model provides a better description of both less and more severe conditions than the Rayleigh and Rician models, and provides a better fit to mobile communication channel data. In this paper we demonstrate the performance of Low Density Parity Check codes with GMSK modulation (BT product = 0.25) in a Nakagami fading channel. The results show that the average bit error rate decreases as the 'm' parameter increases (less fading).
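
    The reported trend (BER falling as m grows) is easy to reproduce in a toy Monte Carlo, because a Nakagami-m envelope is the square root of a Gamma(m, Ω/m) variate. The sketch below uses uncoded BPSK over flat fading, deliberately omitting the LDPC coding and GMSK modulation of the paper:

      import numpy as np

      rng = np.random.default_rng(1)
      n, snr_db = 200_000, 10.0
      snr = 10 ** (snr_db / 10)

      for m in (0.5, 1.0, 2.0, 4.0):    # m = 1 is Rayleigh fading
          h = np.sqrt(rng.gamma(shape=m, scale=1.0 / m, size=n))  # Nakagami envelope
          bits = rng.integers(0, 2, n)
          x = 1.0 - 2.0 * bits                                    # BPSK symbols
          y = h * x + rng.normal(scale=np.sqrt(1 / (2 * snr)), size=n)
          ber = np.mean((y < 0) != (bits == 1))
          print(f"m = {m}: BER = {ber:.4f}")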

  5. Fast QC-LDPC code for free space optical communication

    Science.gov (United States)

    Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong

    2017-02-01

    Free Space Optical (FSO) communication systems use the atmosphere as a propagation medium, so atmospheric turbulence leads to multiplicative noise related to the signal intensity. In order to suppress the signal fading induced by multiplicative noise, we propose a fast Quasi-Cyclic (QC) Low-Density Parity-Check (LDPC) code for FSO communication systems. As linear block codes based on sparse matrices, QC-LDPC codes perform extremely close to the Shannon limit. Current studies of LDPC codes in FSO communications focus mainly on the Gaussian and Rayleigh channels; designing the LDPC code over an atmospheric turbulence channel, which is neither Gaussian nor Rayleigh, is closer to the practical situation. Based on the characteristics of the atmospheric channel, modelled by the logarithmic-normal distribution and the K-distribution, we designed a special QC-LDPC code and deduced the log-likelihood ratio (LLR). An irregular QC-LDPC code for fast coding, with variable rates, is proposed in this paper. The proposed code achieves the excellent performance of LDPC codes, with high efficiency at low rates, stability at high rates, and a small number of iterations. The result of belief propagation (BP) decoding shows that the bit error rate (BER) is clearly reduced as the Signal-to-Noise Ratio (SNR) increases; the BER after decoding keeps falling as the SNR grows, with no error-floor phenomenon.

  6. SOURCES-3A: A code for calculating (α, n), spontaneous fission, and delayed neutron sources and spectra

    International Nuclear Information System (INIS)

    Perry, R.T.; Wilson, W.B.; Charlton, W.S.

    1998-04-01

    In many systems, it is imperative to have accurate knowledge of all significant sources of neutrons due to the decay of radionuclides. These sources can include neutrons resulting from the spontaneous fission of actinides, the interaction of actinide decay α-particles in (α,n) reactions with low- or medium-Z nuclides, and/or delayed neutrons from the fission products of actinides. Numerous systems exist in which these neutron sources could be important. These include, but are not limited to, clean and spent nuclear fuel (UO 2 , ThO 2 , MOX, etc.), enrichment plant operations (UF 6 , PuF 4 , etc.), waste tank studies, waste products in borosilicate glass or glass-ceramic mixtures, and weapons-grade plutonium in storage containers. SOURCES-3A is a computer code that determines neutron production rates and spectra from (α,n) reactions, spontaneous fission, and delayed neutron emission due to the decay of radionuclides in homogeneous media (i.e., a mixture of α-emitting source material and low-Z target material) and in interface problems (i.e., a slab of α-emitting source material in contact with a slab of low-Z target material). The code is also capable of calculating the neutron production rates due to (α,n) reactions induced by a monoenergetic beam of α-particles incident on a slab of target material. Spontaneous fission spectra are calculated with evaluated half-life, spontaneous fission branching, and Watt spectrum parameters for 43 actinides. The (α,n) spectra are calculated using an assumed isotropic angular distribution in the center-of-mass system with a library of 89 nuclide decay α-particle spectra, 24 sets of measured and/or evaluated (α,n) cross sections and product nuclide level branching fractions, and functional α-particle stopping cross sections for Z < 106. The delayed neutron spectra are taken from an evaluated library of 105 precursors. The code outputs the magnitude and spectra of the resultant neutron source. It also provides an
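
    The spontaneous-fission spectra referred to above use the Watt form f(E) ∝ exp(-E/a)·sinh(sqrt(b·E)). A sketch evaluating it numerically, with representative Cf-252 parameters as commonly quoted for transport codes (a ≈ 1.025 MeV, b ≈ 2.926 1/MeV; assumed here, not read from the SOURCES-3A library):

      import numpy as np

      a, b = 1.025, 2.926                      # Watt parameters for Cf-252 (assumed)
      E = np.linspace(1e-4, 20.0, 200_000)     # neutron energy grid, MeV
      dE = E[1] - E[0]

      f = np.exp(-E / a) * np.sinh(np.sqrt(b * E))  # unnormalised Watt spectrum
      f /= f.sum() * dE                             # normalise to unit area

      mean_E = (E * f).sum() * dE              # analytically 3a/2 + a^2*b/4
      print(f"mean neutron energy: {mean_E:.2f} MeV")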

  7. Generation of a quantum integrable class of discrete-time or relativistic periodic Toda chains

    International Nuclear Information System (INIS)

    Kundu, Anjan

    1994-01-01

    A new integrable class of quantum models representing a family of different discrete-time or relativistic generalisations of the periodic Toda chain (TC), including that of a recently proposed classical model close to TC [Lett. Math. Phys. 29 (1993) 165] is presented. All such models are shown to be obtainable from a single ancestor model at different realisations of the underlying quantised algebra. As a consequence the 2x2 Lax operators and the associated quantum R-matrices for these models are easily derived ensuring their quantum integrability. It is shown that the functional Bethe ansatz developed for the quantum TC is trivially generalised to achieve separation of variables also for the present models. ((orig.))

  8. Anyons, quantum particles in planar systems

    International Nuclear Information System (INIS)

    Monerat, Germano Amaral

    2000-03-01

    Our purpose here is to present a general review of the non-relativistic quantum-mechanical description of excitations that obey neither Fermi-Dirac nor Bose-Einstein statistics; rather, they fulfil an intermediate statistics, which we call 'any-statistics'. As we shall see, this is a peculiarity of (1+1) and (1+2) dimensions, due to the fact that, in two space dimensions, the spin is not quantised, since the rotation group is Abelian. The relevance of studying theories in (1+2) dimensions is justified by the evidence that, in condensed matter physics, there are examples of planar systems for which everything goes as if the third spatial dimension were frozen. (author)

  9. Anyons, quantum particles in planar systems; Anions, particulas quanticas em sistemas planares

    Energy Technology Data Exchange (ETDEWEB)

    Monerat, Germano Amaral [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Inst. de Fisica]. E-mail: monerat@if.uff.br

    2000-03-01

    Our purpose here is to present a general review of the non-relativistic quantum-mechanical description of excitations that obey neither Fermi-Dirac nor Bose-Einstein statistics; rather, they fulfil an intermediate statistics, which we call 'any-statistics'. As we shall see, this is a peculiarity of (1+1) and (1+2) dimensions, due to the fact that, in two space dimensions, the spin is not quantised, since the rotation group is Abelian. The relevance of studying theories in (1+2) dimensions is justified by the evidence that, in condensed matter physics, there are examples of planar systems for which everything goes as if the third spatial dimension were frozen. (author)

  10. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    Science.gov (United States)

    Serang, Oliver

    2015-08-01

    Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk) log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk^2 to nk log(k), and has potential application to the all-pairs shortest paths problem.
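
    The essence of the numerical trick: for nonnegative z, max_i z_i is the p → ∞ limit of (Σ_i z_i^p)^(1/p), so the max-convolution max_i u_i·v_(k-i) can be approximated by an ordinary convolution of u^p and v^p followed by a 1/p power, and the convolution can be done by FFT in O(k log(k)). A single-p sketch of the idea (the published method refines this with multiple p values and error control):

      import numpy as np

      def max_convolve_approx(u, v, p=64):
          # p-norm approximation; np.convolve is used for clarity, while an FFT
          # convolution would give the O(k log k) bound.
          s = max(u.max(), v.max())                    # rescale to avoid overflow
          c = np.convolve((u / s) ** p, (v / s) ** p)
          return s * s * np.maximum(c, 0.0) ** (1.0 / p)

      def max_convolve_exact(u, v):
          n = len(u) + len(v) - 1
          return np.array([max(u[i] * v[k - i]
                               for i in range(max(0, k - len(v) + 1),
                                              min(k + 1, len(u))))
                           for k in range(n)])

      u = np.array([0.1, 0.7, 0.2]); v = np.array([0.3, 0.1, 0.6])
      print(max_convolve_exact(u, v))
      print(max_convolve_approx(u, v))   # close to the exact result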

  11. 78 FR 57633 - Global Link Logistics, Inc., v. Hapag-Lloyd AG; Notice of Filing of Complaint and Assignment

    Science.gov (United States)

    2013-09-19

    ... FEDERAL MARITIME COMMISSION [Docket No. 13-07] Global Link Logistics, Inc., v. Hapag-Lloyd AG; Notice of Filing of Complaint and Assignment Notice is given that a complaint has been filed with the Federal Maritime Commission (Commission) by Global Link Logistics, Inc. (``Global Link''), hereinafter...

  12. Monoparametric family of metrics derived from classical Jensen-Shannon divergence

    Science.gov (United States)

    Osán, Tristán M.; Bussandri, Diego G.; Lamberti, Pedro W.

    2018-04-01

    Jensen-Shannon divergence is a well known multi-purpose measure of dissimilarity between probability distributions. It has been proven that the square root of this quantity is a true metric in the sense that, in addition to the basic properties of a distance, it also satisfies the triangle inequality. In this work we extend this last result to prove that in fact it is possible to derive a monoparametric family of metrics from the classical Jensen-Shannon divergence. Motivated by our results, an application into the field of symbolic sequences segmentation is explored. Additionally, we analyze the possibility to extend this result into the quantum realm.
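
    A quick numerical illustration of the classical case that the paper generalises (the square root of the Jensen-Shannon divergence satisfying the triangle inequality); the distributions are assumed strictly positive:

      import numpy as np

      def jsd(p, q):
          # Classical Jensen-Shannon divergence (base 2); its square root is a metric.
          m = 0.5 * (p + q)
          kl = lambda x, y: np.sum(x * np.log2(x / y))
          return 0.5 * kl(p, m) + 0.5 * kl(q, m)

      rng = np.random.default_rng(7)
      p, q, r = (rng.dirichlet(np.ones(5)) for _ in range(3))

      d_pq, d_qr, d_pr = (np.sqrt(jsd(x, y)) for x, y in ((p, q), (q, r), (p, r)))
      print(d_pr <= d_pq + d_qr)   # triangle inequality holds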

  13. A short period undulator for MAX

    International Nuclear Information System (INIS)

    Ahola, H.; Meinander, T.

    1992-01-01

    A hybrid undulator for generation of high brilliance synchrotron radiation in the photon energy range of 60--600 eV at the 550 MeV electron storage ring MAX in Lund, Sweden has been designed and built at the Technical Research Centre of Finland in close collaboration with MAX-lab of Lund University. At the rather modest electron energy of MAX this photon energy range can be reached only by an undulator featuring a fairly short period and the smallest possible magnetic gap. Even then, higher harmonics (up to the 13th) of the radiation spectrum must be utilized. An optimization of the magnetic design resulted in a hybrid configuration of NdFeB magnets and soft iron poles with a period of 24 mm and a minimum magnetic gap of 7--10 mm. A variable-gap vacuum chamber allows reduction of the vacuum gap from a maximum of 20 mm, needed for injection, down to 6 mm during stored beam operation. A special design of this chamber permits a magnetic gap between pole tips that is only 1 mm larger than the vacuum gap. Adequate field uniformity was ensured by calibration of magnets to equal strength at their true operating point and verification of the homogeneity of their magnetization. Magnetic measurements included Hall probe scans of the undulator field and flip coil evaluations of the field integral
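
    The photon energies involved follow from the standard on-axis undulator resonance condition λ_n = (λ_u / 2nγ²)(1 + K²/2). A sketch with the 24 mm period and 550 MeV energy quoted above and an assumed deflection parameter K (the device's actual K depends on the gap and is not given here):

      import numpy as np

      E_e   = 550e6    # electron energy, eV
      lam_u = 24e-3    # undulator period, m
      K     = 1.0      # deflection parameter -- assumed value for illustration

      gamma = E_e / 0.511e6
      hc    = 1239.84e-9        # eV * m

      for n in (1, 3, 5, 7, 9, 11, 13):        # odd on-axis harmonics
          lam_n = lam_u / (2 * n * gamma**2) * (1 + K**2 / 2)
          print(f"n = {n:2d}: {hc / lam_n:7.1f} eV")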

  14. Towards Holography via Quantum Source-Channel Codes

    Science.gov (United States)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  15. New reversing freeform lens design method for LED uniform illumination with extended source and near field

    Science.gov (United States)

    Zhao, Zhili; Zhang, Honghai; Zheng, Huai; Liu, Sheng

    2018-03-01

    In light-emitting diode (LED) array illumination (e.g. LED backlighting), achieving high uniformity under the harsh conditions of a large distance-height ratio (DHR), an extended source and the near field is a key and challenging issue. In this study, we present a new reversing freeform lens design algorithm based on the illuminance distribution function (IDF) instead of the traditional light intensity distribution, which allows uniform LED illumination under the harsh conditions mentioned above. The IDF of the freeform lens can be obtained by the proposed mathematical method, considering the effects of large DHR, an extended source and a near-field target at the same time. To prove the claims, a slim direct-lit LED backlight with DHR equal to 4 is designed. In comparison with traditional lenses, the illuminance uniformity of LED backlighting with the new lens increases significantly from 0.45 to 0.84, and CV(RMSE) decreases dramatically from 0.24 to 0.03 under the harsh condition. Meanwhile, the luminance uniformity of LED backlighting with the new lens reaches 0.92 under the conditions of an extended source and near field. This new method provides a practical and effective way to solve the problem of large DHR, extended source and near field for LED array illumination.

  16. Non-uniform dispersion of the source-sink relationship alters wavefront curvature.

    Directory of Open Access Journals (Sweden)

    Lucia Romero

    Full Text Available The distribution of cellular source-sink relationships plays an important role in cardiac propagation. It can lead to conduction slowing and block as well as wave fractionation. It is of great interest to unravel the mechanisms underlying evolution in wavefront geometry. Our goal is to investigate the role of the source-sink relationship on wavefront geometry using computer simulations. We analyzed the role of variability in the microscopic source-sink relationship in driving changes in wavefront geometry. The electrophysiological activity of a homogeneous isotropic tissue was simulated using the ten Tusscher and Panfilov 2006 action potential model, and the source-sink relationship was characterized using an improved version of the Romero et al. safety factor formulation (SFm2). Our simulations reveal that non-uniform dispersion of the cellular source-sink relationship (dispersion along the wavefront) leads to alterations in curvature. To better understand the role of the source-sink relationship in the process of wave formation, the electrophysiological activity at the initiation of excitation waves in a 1D strand was examined, and the source-sink relationship was characterized using the two recently updated safety factor formulations: the SFm2 and the Boyle-Vigmond (SFVB) definitions. The electrophysiological activity at the initiation of excitation waves was intimately related to the SFm2 profiles, while the SFVB led to several counterintuitive observations. Importantly, with the SFm2 characterization, a critical source-sink relationship for initiation of excitation waves was identified, which was independent of the size of the electrode of excitation, membrane excitability, or tissue conductivity. In conclusion, our work suggests that non-uniform dispersion of the source-sink relationship alters wavefront curvature and a critical source-sink relationship profile separates wave expansion from collapse. Our study reinforces the idea that the

  17. Properties of classical and quantum Jensen-Shannon divergence

    NARCIS (Netherlands)

    J. Briët (Jop); P. Harremoës (Peter)

    2009-01-01

    Jensen-Shannon divergence (JD) is a symmetrized and smoothed version of the most important divergence measure of information theory, Kullback divergence. As opposed to Kullback divergence it determines in a very direct way a metric; indeed, it is the square of a metric. We consider a...

  18. Statistical mechanics of error-correcting codes

    Science.gov (United States)

    Kabashima, Y.; Saad, D.

    1999-01-01

    We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.
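
    For context, the Shannon bound recovered in the paper is straightforward to evaluate for a binary symmetric channel: reliable decoding requires the code rate R = K/C to stay below the capacity 1 - H2(f), with f the bit-flip probability. A short check:

      import numpy as np

      def h2(f):
          # Binary entropy in bits.
          return -f * np.log2(f) - (1 - f) * np.log2(1 - f)

      for f in (0.01, 0.05, 0.11, 0.20):
          print(f"flip probability {f}: capacity {1 - h2(f):.3f} bits/use")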

  19. Strongly coupled chameleon fields: Possible test with a neutron Lloyd's mirror interferometer

    Energy Technology Data Exchange (ETDEWEB)

    Pokotilovski, Yu.N., E-mail: pokot@nf.jinr.ru [Joint Institute for Nuclear Research, 141980 Dubna, Moscow Region (Russian Federation)

    2013-02-26

    A possible neutron Lloyd's mirror interferometer experiment to search for strongly coupled chameleon fields is considered. The chameleon scalar fields were proposed to explain the accelerating expansion of the Universe. The presence of a chameleon field results in a change of a particle's potential energy in the vicinity of a massive body. This interaction causes a phase shift of neutron waves in the interferometer. The sensitivity of the method is estimated.

  20. The development of criteria for limiting the non-stochastic effects of non-uniform skin exposure

    International Nuclear Information System (INIS)

    Charles, M.W.; Wells, J.

    1980-01-01

    The recent recommendations of the International Commission on Radiological Protection (ICRP, 1977) have underlined the lack of knowledge relating to small area skin exposures and have highlighted the difficulties of integrating stochastic and nonstochastic effects into a unified radiation protection philosophy. A system of limitation is suggested which should be appropriate to the wide range of skin irradiation modes which are met in practice. It is proposed for example, that for large area exposures, the probability of skin cancer induction should be considered as the limiting factor. For partial-body skin exposures the probability of the stochastic response will be reduced and late nonstochastic effects will become limiting as the area exposed is reduced. Highly non-uniform exposures such as from small sources or radioactive particulates should be limited on the basis of early rather than late effects. A survey of epidemiological and experimental work is used to show how detailed guidance for limitation in these cases can be provided. Due to the detailed morphology of the skin the biological response depends critically upon the depth dose. In the case of alpha and beta radiation this should be reflected in a less restrictive limitation system, particularly for non-stochastic effects. Up-to-date and on-going experimental studies are described which can provide guidance in this field. (author)

  1. Study and application of the balloon frame system to the industrialization of housing: the case of the American System-Built Houses of Frank Lloyd Wright

    Directory of Open Access Journals (Sweden)

    B. Serra Soriano

    2017-06-01

    Full Text Available Within his large architectural production, Frank Lloyd Wright had the opportunity to experiment with timber industrialization, linking a traditional material with the modern sense of architecture. Wood and Frank Lloyd Wright are inseparable from the balloon frame system, a system he used in his first houses and through which he materialized the concept of spatial decomposition. Research on the particular case of the American System-Built Houses serves to show Wright's earliest experiences with industry, the conclusions of which he would use in subsequent research on prefabrication.

  2. Lloyd Berkner: Catalyst for Meteorology's Fabulous Fifties

    Science.gov (United States)

    Lewis, J. M.

    2002-05-01

    In the long sweep of meteorological history - from Aristotle's Meteorologica to the threshold of the third millennium - the 1950s will surely be recognized as a defining decade. The contributions of many individuals were responsible for the combination of vision and institution building that marked this decade and set the stage for explosive development during the subsequent forty years. In the minds of many individuals who were active during those early years, however, one name stands out as a prime mover par excellence: Lloyd Viel Berkner. On May 1, 1957, Berkner addressed the National Press Club. The address was entitled, "Horizons of Meteorology". It reveals Berkner's insights into meteorology from his position as Chairman of the Committee on Meteorology of the National Academy of Sciences, soon to release the path-breaking report, Research and Education in Meteorology (1958). The address also reflects the viewpoint of an individual deeply involved in the International Geophysical Year (IGY). It is an important footnote to meteorological history. We welcome this opportunity to profile Berkner and to discuss "Horizons of Meteorology" in light of meteorology's state-of-affairs in the 1950s and the possible relevance to Berkner's ideas to contemporary issues.

  3. OSSMETER D3.4 – Language-Specific Source Code Quality Analysis

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim); H.J.S. Basten (Bas)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and prototypes of the tools that are needed for source code quality analysis in open source software projects. It builds upon the results of: • Deliverable 3.1, where infra-structure and...

  4. The discrete-dipole-approximation code ADDA: Capabilities and known limitations

    International Nuclear Information System (INIS)

    Yurkin, Maxim A.; Hoekstra, Alfons G.

    2011-01-01

    The open-source code ADDA is described, which implements the discrete dipole approximation (DDA), a method to simulate light scattering by finite 3D objects of arbitrary shape and composition. Besides standard sequential execution, ADDA can run on a multiprocessor distributed-memory system, parallelizing a single DDA calculation. Hence the size parameter of the scatterer is in principle limited only by total available memory and computational speed. ADDA is written in C99 and is highly portable. It provides full control over the scattering geometry (particle morphology and orientation, and incident beam) and allows one to calculate a wide variety of integral and angle-resolved scattering quantities (cross sections, the Mueller matrix, etc.). Moreover, ADDA incorporates a range of state-of-the-art DDA improvements, aimed at increasing the accuracy and computational speed of the method. We discuss both physical and computational aspects of the DDA simulations and provide a practical introduction into performing such simulations with the ADDA code. We also present several simulation results, in particular, for a sphere with size parameter 320 (100-wavelength diameter) and refractive index 1.05.

  5. Uniform lateral etching of tungsten in deep trenches utilizing reaction-limited NF3 plasma process

    Science.gov (United States)

    Kofuji, Naoyuki; Mori, Masahito; Nishida, Toshiaki

    2017-06-01

    The reaction-limited etching of tungsten (W) with NF3 plasma was performed in an attempt to achieve the uniform lateral etching of W in a deep trench, a capability required by manufacturing processes for three-dimensional NAND flash memory. Reaction-limited etching was found to be possible at high pressures without ion irradiation. An almost constant etching rate that showed no dependence on NF3 pressure was obtained. The effect of varying the wafer temperature was also examined. A higher wafer temperature reduced the threshold pressure for reaction-limited etching and also increased the etching rate in the reaction-limited region. Therefore, the control of the wafer temperature is crucial to controlling the etching amount by this method. We found that the uniform lateral etching of W was possible even in a deep trench where the F radical concentration was low.

  6. Disjointness of Stabilizer Codes and Limitations on Fault-Tolerant Logical Gates

    Science.gov (United States)

    Jochym-O'Connor, Tomas; Kubica, Aleksander; Yoder, Theodore J.

    2018-04-01

    Stabilizer codes are among the most successful quantum error-correcting codes, yet they have important limitations on their ability to fault tolerantly compute. Here, we introduce a new quantity, the disjointness of the stabilizer code, which, roughly speaking, is the number of mostly nonoverlapping representations of any given nontrivial logical Pauli operator. The notion of disjointness proves useful in limiting transversal gates on any error-detecting stabilizer code to a finite level of the Clifford hierarchy. For code families, we can similarly restrict logical operators implemented by constant-depth circuits. For instance, we show that it is impossible, with a constant-depth but possibly geometrically nonlocal circuit, to implement a logical non-Clifford gate on the standard two-dimensional surface code.

  7. Constructive quantum Shannon decomposition from Cartan involutions

    Energy Technology Data Exchange (ETDEWEB)

    Drury, Byron; Love, Peter [Department of Physics, 370 Lancaster Ave., Haverford College, Haverford, PA 19041 (United States)], E-mail: plove@haverford.edu

    2008-10-03

    The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions.

  8. Constructive quantum Shannon decomposition from Cartan involutions

    International Nuclear Information System (INIS)

    Drury, Byron; Love, Peter

    2008-01-01

    The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions

  9. Min-Max Spaces and Complexity Reduction in Min-Max Expansions

    Energy Technology Data Exchange (ETDEWEB)

    Gaubert, Stephane, E-mail: Stephane.Gaubert@inria.fr [Ecole Polytechnique, INRIA and CMAP (France); McEneaney, William M., E-mail: wmceneaney@ucsd.edu [University of California San Diego, Dept. of Mech. and Aero. Eng. (United States)

    2012-06-15

    Idempotent methods have been found to be extremely helpful in the numerical solution of certain classes of nonlinear control problems. In those methods, one uses the fact that the value function lies in the space of semiconvex functions (in the case of maximizing controllers), and approximates this value using a truncated max-plus basis expansion. In some classes, the value function is actually convex, and then one specifically approximates with suprema (i.e., max-plus sums) of affine functions. Note that the space of convex functions is a max-plus linear space, or moduloid. In extending those concepts to game problems, one finds a different function space, and different algebra, to be appropriate. Here we consider functions which may be represented using infima (i.e., min-max sums) of max-plus affine functions. It is natural to refer to the class of functions so represented as the min-max linear space (or moduloid) of max-plus hypo-convex functions. We examine this space, the associated notion of duality and min-max basis expansions. In using these methods for solution of control problems, and now games, a critical step is complexity-reduction. In particular, one needs to find reduced-complexity expansions which approximate the function as well as possible. We obtain a solution to this complexity-reduction problem in the case of min-max expansions.
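
    Schematically (the notation below is assumed for illustration, not taken from the paper), the expansions in question approximate a max-plus hypo-convex function by an infimum of max-plus affine functions,

      f(x) \approx \min_{i \in \{1,\dots,M\}} \max\Big( a_i,\; \max_{j} \big( b_{ij} + x_j \big) \Big),

    and complexity reduction then means finding the smallest M that keeps the approximation error below a prescribed bound.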

  10. An efficient chaotic source coding scheme with variable-length blocks

    International Nuclear Information System (INIS)

    Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong

    2011-01-01

    An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding. (general)

  11. Using MaxCompiler for High Level Synthesis of Trigger Algorithms

    CERN Document Server

    Summers, Sioni Paris; Sanders, P.

    2017-01-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  12. Time limit and VO2 slow component at intensities corresponding to VO2max in swimmers.

    Science.gov (United States)

    Fernandes, R J; Cardoso, C S; Soares, S M; Ascensão, A; Colaço, P J; Vilas-Boas, J P

    2003-11-01

    The purpose of this study was to measure, in swimming pool conditions and with high level swimmers, the time to exhaustion at the minimum velocity that elicits maximal oxygen consumption (TLim at vVO(2)max), and the corresponding VO(2) slow component (O(2)SC). The vVO(2)max was determined through an intermittent incremental test (n = 15). Forty-eight hours later, TLim was assessed using an all-out swim at vVO(2)max until exhaustion. VO(2) was measured through direct oximetry and the swimming velocity was controlled using a visual light-pacer. Blood lactate concentrations and heart rate values were also measured. Mean VO(2)max for the incremental test was 5.09 +/- 0.53 l/min and the corresponding vVO(2)max was 1.46 +/- 0.06 m/s. Mean TLim value was 260.20 +/- 60.73 s and it was inversely correlated with the velocity at anaerobic threshold (r = -0.54, p < 0.05). The O(2)SC was correlated with the energy cost of the respiratory muscles (r = 0.51, p < 0.05). It was concluded that an O(2)SC occurs in the swimming pool, in high level swimmers performing at vVO(2)max, and that higher TLim seems to correspond to higher expected O(2)SC amplitude. These findings seem to bring new data with application in middle distance swimming.

  13. Shannon entropy and particle decays

    Science.gov (United States)

    Carrasco Millán, Pedro; García-Ferrero, M. Ángeles; Llanes-Estrada, Felipe J.; Porras Riojano, Ana; Sánchez García, Esteban M.

    2018-05-01

    We deploy Shannon's information entropy to the distribution of branching fractions in a particle decay. This serves to quantify how important a given new reported decay channel is, from the point of view of the information that it adds to the already known ones. Because the entropy is additive, one can subdivide the set of channels and discuss, for example, how much information the discovery of a new decay branching would add; or subdivide the decay distribution down to the level of individual quantum states (which can be quickly counted by the phase space). We illustrate the concept with some examples of experimentally known particle decay distributions.
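
    The bookkeeping behind this use of Shannon entropy is simple enough to sketch; the branching fractions below are invented for illustration and are not measured values.

        import numpy as np

        def shannon_entropy(branching_fractions):
            """Shannon entropy (in bits) of a decay distribution."""
            p = np.asarray(branching_fractions, dtype=float)
            p = p[p > 0] / p.sum()            # normalise, ignore empty channels
            return float(-(p * np.log2(p)).sum())

        known = [0.60, 0.30, 0.10]            # hypothetical known channels
        # A newly reported channel carves 2% out of the dominant one.
        updated = [0.58, 0.30, 0.10, 0.02]

        gain = shannon_entropy(updated) - shannon_entropy(known)
        print(f"information added by the new channel: {gain:.3f} bits")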

  14. The Shannon entropy as a measure of diffusion in multidimensional dynamical systems

    Science.gov (United States)

    Giordano, C. M.; Cincotta, P. M.

    2018-05-01

    In the present work, we introduce two new estimators of chaotic diffusion based on the Shannon entropy. Using theoretical, heuristic and numerical arguments, we show that the entropy, S, provides a measure of the diffusion extent of a given small initial ensemble of orbits, while an indicator related with the time derivative of the entropy, S', estimates the diffusion rate. We show that in the limiting case of near ergodicity, after an appropriate normalization, S' coincides with the standard homogeneous diffusion coefficient. The very first application of this formulation to a 4D symplectic map and to the Arnold Hamiltonian reveals very successful and encouraging results.
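
    The essence of the estimator can be sketched in a few lines: bin an ensemble of orbits, compute the Shannon entropy of the occupancy distribution, and read the diffusion rate off the entropy's growth in time. A plain random walk stands in here for the chaotic dynamics, and all ensemble and grid sizes are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n_orbits, n_cells = 2000, 100
        x = np.full(n_orbits, 0.5)            # small initial ensemble

        for t in range(1, 201):
            x = np.clip(x + rng.normal(0.0, 0.01, n_orbits), 0.0, 1.0)
            if t % 50 == 0:
                counts, _ = np.histogram(x, bins=n_cells, range=(0.0, 1.0))
                p = counts[counts > 0] / n_orbits
                S = -(p * np.log(p)).sum()    # entropy of the occupancy
                print(f"t = {t:3d}   S = {S:.3f}")
        # The growth rate of S(t) (finite differences of the printed values)
        # plays the role of the diffusion-rate indicator S'.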

  15. Coded aperture imaging of alpha source spatial distribution

    International Nuclear Information System (INIS)

    Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul

    2012-01-01

    The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised: a 226Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the 226Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the decoded image of the source. Signal to Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and SNR values obtained from simulations. Possible reasons for the lower SNR obtained for the experimental CAI study are discussed.
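
    A one-dimensional toy of the record/decode cycle (with a random mask rather than the Singer difference-set mask, and Gaussian rather than track-counting noise): the detector records the source circularly convolved with the mask, and correlating with a balanced decoding array recovers the source positions.

        import numpy as np

        rng = np.random.default_rng(1)
        n, k = 101, 15
        mask = np.zeros(n)
        mask[rng.choice(n, size=k, replace=False)] = 1.0   # k open holes

        source = np.zeros(n)
        source[30], source[60] = 1.0, 0.5                  # two point sources

        # Detector record: circular convolution of source with mask, plus noise.
        record = np.real(np.fft.ifft(np.fft.fft(source) * np.fft.fft(mask)))
        record += rng.normal(0.0, 0.01, n)

        # Balanced decoding array: open -> +1, closed -> -k/(n-k), so that
        # off-peak correlations average to zero.
        decoder = np.where(mask > 0, 1.0, -k / (n - k))
        decoded = np.real(np.fft.ifft(np.fft.fft(record) * np.conj(np.fft.fft(decoder))))

        print("recovered peaks at:", np.sort(np.argsort(decoded)[-2:]))  # [30 60]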

  16. WiMAX network performance monitoring & optimization

    DEFF Research Database (Denmark)

    Zhang, Qi; Dam, H

    2008-01-01

    In this paper we present our WiMAX (worldwide interoperability for microwave access) network performance monitoring and optimization solution. As a new and small WiMAX network operator, there are many demanding issues that we have to deal with, such as limited available frequency resource, tight frequency reuse, capacity planning, proper network dimensioning, multi-class data services and so on. Furthermore, as a small operator we also want to reduce the demand for sophisticated technicians and man labour hours. To meet these critical demands, we design a generic integrated network performance monitoring and optimization system, and we have implemented this integrated network performance monitoring and optimization system in our WiMAX networks. This integrated monitoring and optimization system has such good flexibility and scalability that individual function components can be used by other operators with special needs and more advanced function components can be added.

  17. Dosimetry studies with 32P source and correlation of skin and eye lens doses

    International Nuclear Information System (INIS)

    Kumar, Munish; Gaonkar, U.P.; Koul, D.K.; Datta, D.; Saxena, S.K.; Kumar, Yogendra; Dash, A.

    2018-01-01

    Beta particles are one of the major contributors toward skin and eye lens doses at facilities handling beta sources. These sources find applications in industry and pharmaceuticals as well as in brachytherapy. Beta particles having maximum energy (Emax) > 0.07 MeV are capable of delivering a skin dose, whereas beta particles having Emax > 0.7 MeV may also contribute towards the dose to the eye lens. Studies are performed using a 32P beta source, as its maximum beta energy (Emax = 1.71 MeV) is such that, for sources having Emax of 1.71 MeV or beyond, there can be a substantial contribution towards the dose to the eye lens even when the dose limit recommended for skin is followed.

  18. Spectrum unfolding, sensitivity analysis and propagation of uncertainties with the maximum entropy deconvolution code MAXED

    CERN Document Server

    Reginatto, M; Neumann, S

    2002-01-01

    MAXED was developed to apply the maximum entropy principle to the unfolding of neutron spectrometric measurements. The approach followed in MAXED has several features that make it attractive: it permits inclusion of a priori information in a well-defined and mathematically consistent way, the algorithm used to derive the solution spectrum is not ad hoc (it can be justified on the basis of arguments that originate in information theory), and the solution spectrum is a non-negative function that can be written in closed form. This last feature permits the use of standard methods for the sensitivity analysis and propagation of uncertainties of MAXED solution spectra. We illustrate its use with unfoldings of NE 213 scintillation detector measurements of photon calibration spectra, and of multisphere neutron spectrometer measurements of cosmic-ray induced neutrons at high altitude (approx 20 km) in the atmosphere.
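
    A stripped-down, entropy-regularized unfolding in the spirit of (but not identical to) the MAXED approach, with an invented response matrix and default spectrum; the constrained entropy maximization is replaced here by a fixed-weight penalty for brevity.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        n_bins, n_det = 20, 8
        R = rng.uniform(0.0, 1.0, (n_det, n_bins))          # response matrix
        f_true = np.exp(-0.5 * ((np.arange(n_bins) - 8) / 3.0) ** 2)
        sigma = 0.02 * (R @ f_true)
        data = R @ f_true + rng.normal(0.0, sigma)          # measured readings

        f_def = np.full(n_bins, f_true.mean())              # a priori spectrum

        def objective(f):
            chi2 = np.sum(((R @ f - data) / sigma) ** 2)
            rel_ent = np.sum(f * np.log(f / f_def) + f_def - f)  # >= 0
            return chi2 + 2.0 * rel_ent                     # fixed trade-off

        res = minimize(objective, x0=f_def, bounds=[(1e-9, None)] * n_bins)
        print("max |f - f_true| =", round(float(np.max(np.abs(res.x - f_true))), 3))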

  19. Computer code FIT

    International Nuclear Information System (INIS)

    Rohmann, D.; Koehler, T.

    1987-02-01

    This is a description of the computer code FIT, written in FORTRAN-77 for a PDP 11/34. FIT is an interactive program to deduce the position, width and intensity of lines in X-ray spectra (max. length of 4K channels). The lines (max. 30 lines per fit) may have Gaussian or Voigt profiles, as well as exponential tails. Spectrum and fit can be displayed on a Tektronix terminal. (orig.) [de]
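
    The core of such a fit is easy to sketch for the Gaussian case (Voigt profiles and exponential tails are left out); the channel counts and line parameters below are simulated, not taken from FIT.

        import numpy as np
        from scipy.optimize import curve_fit

        def line(ch, intensity, pos, width, bg):
            """Gaussian line on a flat background."""
            return bg + intensity * np.exp(-0.5 * ((ch - pos) / width) ** 2)

        rng = np.random.default_rng(3)
        ch = np.arange(256)
        counts = rng.poisson(line(ch, 500.0, 120.0, 4.0, 20.0)).astype(float)

        popt, pcov = curve_fit(line, ch, counts, p0=(400.0, 115.0, 5.0, 10.0))
        errs = np.sqrt(np.diag(pcov))
        for name, v, e in zip(("intensity", "position", "width", "background"),
                              popt, errs):
            print(f"{name:10s} = {v:8.2f} +/- {e:.2f}")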

  20. V/Vmax test for QSOs: comments on the paper by Hawkins and Stewart

    International Nuclear Information System (INIS)

    Wills, D.

    1983-01-01

    Hawkins and Stewart's interpretation of the results of the V/Vmax test for QSO samples is shown to be invalid. Their suggestion that the high values of V/Vmax at small redshifts result from the exclusion of nearby QSOs from the samples is examined quantitatively for the well-studied 3CR sample, and shown to have a negligible effect on the results. Their claim that the values of V/Vmax at larger redshifts indicate a uniform space density of QSOs is not true after an error in their calculations is corrected. Although the V/Vmax test is less sensitive to space density evolution at large redshifts, current results are in good agreement with the hypothesis that a single density evolution law describes the observations out to a redshift of at least 2.5.
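
    The statistic itself is easily illustrated in its simplest Euclidean, flux-limited form: for a uniform population, V/Vmax = (Slim/S)^(3/2) is uniform on [0, 1] with mean 0.5, and evolution shows up as a departure from that mean. The population below is simulated.

        import numpy as np

        rng = np.random.default_rng(4)
        n, S_lim, L = 5000, 1.0, 1.0          # sample size, flux limit, luminosity

        d_max = np.sqrt(L / S_lim)            # detection horizon
        d = d_max * rng.uniform(0.0, 1.0, n) ** (1.0 / 3.0)  # uniform density: p(d) ~ d^2
        S = L / d ** 2                        # observed fluxes (all >= S_lim)
        v_over_vmax = (S_lim / S) ** 1.5      # = (d / d_max)^3

        mean = v_over_vmax.mean()
        sem = 1.0 / np.sqrt(12.0 * n)         # expected scatter of the mean
        print(f"<V/Vmax> = {mean:.4f}  (uniform density predicts 0.5 +/- {sem:.4f})")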

  1. Obstacles to Love and Marriage in Shannon's Way

    OpenAIRE

    Nakamura, Takeshi (Department of English Language and Communication, Showa Women's University)

    2016-01-01

    The theme of this study is Shannon's Way as a love story: the hero and the heroine's love and the obstacles to their marriage. The novel was written by a Scottish writer and physician, A. J. Cronin (in full Archibald Joseph Cronin, 1896-1981) and published in 1948. The setting of the story is chiefly Winton, a fictitious city based on Glasgow. The hero is Robert Shannon, a twenty-four-year-old poor but excellent researcher and doctor whose ambition is to be successful in medical science by a great ...

  2. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories.

  3. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories.

  4. Max Jakobson: Communism must be viewed objectively / Max Jakobson

    Index Scriptorium Estoniae

    Jakobson, Max, 1923-2013

    2002-01-01

    President Rüütel awarded Max Jakobson, the well-known Finnish diplomat and chairman of the International Commission for the Investigation of Crimes Against Humanity (IKURK), the Order of the Cross of Terra Mariana, First Class. A post-ceremony interview with Max Jakobson.

  5. Cinética do consumo de oxigênio e tempo limite na vVO2max: comparação entre homens e mulheres / Oxygen uptake kinetics and threshold time at the vVO2max: comparison between men and women

    Directory of Open Access Journals (Sweden)

    Paulo Henrique Silva Marques de Azevedo

    2010-08-01

    The influence of gender on the time limit (Tlim) and on VO2 kinetics during running at the velocity associated with VO2max (vVO2max) was investigated in nine men and nine women, all young sedentary adults aged between 20 and 30 years. Subjects performed two treadmill tests: an incremental test to determine VO2max (42.66 ± 4.50 vs. 32.92 ± 6.03 mL.kg-1.min-1) and vVO2max (13.2 ± 1.5 vs. 10.3 ± 2.0 km.h-1, for men and women respectively), and a second, constant-load test at vVO2max until exhaustion, from which Tlim and the VO2 kinetics were determined. No significant differences were found between men and women for the time constant (τ) (35.76 ± 21.03 vs. 36.5 ± 6.21 s; P = 0.29), Tlim (308 ± 84.3 vs. 282.11 ± 57.19 s; P = 0.68), time to attain VO2max (TAVO2max) (164.48 ± 96.73 vs. 167.88 ± 28.59 s; P = 0.29), time to attain VO2max as a percentage of Tlim (%Tlim) (50.24 ± 16.93 vs. 62.63 ± 16.60%; P = 0.19), or time maintained at VO2max (TMVO2max) (144.08 ± 42.55 vs. 114.23 ± 76.96 s; P = 0.13). These results suggest that VO2 kinetics and Tlim at vVO2max are similar in sedentary men and women.

  6. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    Science.gov (United States)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third

  7. Quantum Mechanical Enhancement of the Random Dopant Induced Threshold Voltage Fluctuations and Lowering in Sub 0.1 Micron MOSFETs

    Science.gov (United States)

    Asenov, Asen; Slavcheva, G.; Brown, A. R.; Davies, J. H.; Saini, Subhash

    1999-01-01

    A detailed study of the influence of quantum effects in the inversion layer on the random dopant induced threshold voltage fluctuations and lowering in sub 0.1 micron MOSFETs has been performed. This has been achieved using a full 3D implementation of the density gradient (DG) formalism incorporated in our previously published 3D 'atomistic' simulation approach. This results in a consistent, fully 3D, quantum mechanical picture which implies not only the vertical inversion layer quantisation but also the lateral confinement effects manifested by current filamentation in the 'valleys' of the random potential fluctuations. We have shown that the net result of including quantum mechanical effects, while considering statistical fluctuations, is an increase in both threshold voltage fluctuations and lowering.

  8. Super Riemann surfaces

    International Nuclear Information System (INIS)

    Rogers, Alice

    1990-01-01

    A super Riemann surface is a particular kind of (1,1)-dimensional complex analytic supermanifold. From the point of view of super-manifold theory, super Riemann surfaces are interesting because they furnish the simplest examples of what have become known as non-split supermanifolds, that is, supermanifolds where the odd and even parts are genuinely intertwined, as opposed to split supermanifolds which are essentially the exterior bundles of a vector bundle over a conventional manifold. However undoubtedly the main motivation for the study of super Riemann surfaces has been their relevance to the Polyakov quantisation of the spinning string. Some of the papers on super Riemann surfaces are reviewed. Although recent work has shown all super Riemann surfaces are algebraic, some areas of difficulty remain. (author)

  9. System for magnetic monitoring of high voltage motors MM6212

    Directory of Open Access Journals (Sweden)

    Kartalović Nenad

    2012-01-01

    This paper suggests the possibility of diagnosing different types of failure in working induction motors using a device for spectral analysis of the axial leakage flux, which has been developed by the Electrotechnical Institute Nikola Tesla. A search coil, which can be of different diameters, was mounted concentrically on the shaft at the rear of the motor. The electromotive force (emf) on each of the coils, created as a result of weak magnetic field changes, has a low amplitude. Following the necessary amplifications and adjustments, the signal is digitised on acquisition. In the fast Fourier transform (FFT) stage, the sampled and quantised digital time-based signal is converted to a frequency spectrum. Changes in spectral content are used to identify developing faults.
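
    The signal path described above reduces, in outline, to sampling the coil emf and inspecting an FFT of it; all frequencies, amplitudes and the fault sideband below are illustrative only.

        import numpy as np

        fs, T = 5000.0, 2.0                          # sample rate (Hz), duration (s)
        t = np.arange(0.0, T, 1.0 / fs)
        rng = np.random.default_rng(6)
        emf = (1.00 * np.sin(2 * np.pi * 50.0 * t)   # supply-frequency component
               + 0.05 * np.sin(2 * np.pi * 47.5 * t) # hypothetical fault sideband
               + 0.02 * rng.normal(size=t.size))     # sensor noise

        window = np.hanning(t.size)                  # reduce spectral leakage
        spectrum = np.abs(np.fft.rfft(emf * window))
        freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

        for i in np.sort(np.argsort(spectrum)[-4:]): # strongest spectral lines
            print(f"{freqs[i]:6.2f} Hz   amplitude {spectrum[i]:10.1f}")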

  10. Decoherence can relax cosmic acceleration

    International Nuclear Information System (INIS)

    Markkanen, Tommi

    2016-01-01

    In this work we investigate the semi-classical backreaction for a quantised conformal scalar field and classical vacuum energy. In contrast to the usual approximation of a closed system, our analysis includes an environmental sector such that a quantum-to-classical transition can take place. We show that when the system decoheres into a mixed state with particle number as the classical observable de Sitter space is destabilized, which is observable as a gradually decreasing Hubble rate. In particular we show that at late times this mechanism can drive the curvature of the Universe to zero and has an interpretation as the decay of the vacuum energy demonstrating that quantum effects can be relevant for the fate of the Universe.

  11. Decoherence can relax cosmic acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Markkanen, Tommi [Department of Physics, King’s College London, Strand, London WC2R 2LS (United Kingdom)]

    2016-11-11

    In this work we investigate the semi-classical backreaction for a quantised conformal scalar field and classical vacuum energy. In contrast to the usual approximation of a closed system, our analysis includes an environmental sector such that a quantum-to-classical transition can take place. We show that when the system decoheres into a mixed state with particle number as the classical observable de Sitter space is destabilized, which is observable as a gradually decreasing Hubble rate. In particular we show that at late times this mechanism can drive the curvature of the Universe to zero and has an interpretation as the decay of the vacuum energy demonstrating that quantum effects can be relevant for the fate of the Universe.

  12. OSSMETER D3.2 – Report on Source Code Activity Metrics

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and initial prototypes of the tools that are needed for source code activity analysis. It builds upon Deliverable 3.1, where the infrastructure and a domain analysis have been

  13. Java Source Code Analysis for API Migration to Embedded Systems

    Energy Technology Data Exchange (ETDEWEB)

    Winter, Victor [Univ. of Nebraska, Omaha, NE (United States); McCoy, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guerrero, Jonathan [Univ. of Nebraska, Omaha, NE (United States); Reinke, Carl Werner [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perry, James Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java’s APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.

  14. Using National Drug Codes and drug knowledge bases to organize prescription records from multiple sources.

    Science.gov (United States)

    Simonaitis, Linas; McDonald, Clement J

    2009-10-01

    The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the in-patient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.

  15. High resolution gamma-ray spectroscopy and the fascinating angular momentum realm of the atomic nucleus

    International Nuclear Information System (INIS)

    Riley, M A; Simpson, J; Paul, E S

    2016-01-01

    In 1974 Aage Bohr and Ben Mottelson predicted the different ‘phases’ that may be expected in deformed nuclei as a function of increasing angular momentum and excitation energy all the way up to the fission limit. While admitting their picture was highly conjectural they confidently stated ‘...with the ingenious experimental approaches that are being developed, we may look forward with excitement to the detailed spectroscopic studies that will illuminate the behaviour of the spinning quantised nucleus’ . High resolution gamma-ray spectroscopy has indeed been a major tool in studying the structure of atomic nuclei and has witnessed numerous significant advances over the last four decades. This article will select highlights from investigations at the Niels Bohr Institute, Denmark, and Daresbury Laboratory, UK, in the late 1970s and early 1980s, some of which have continued at other national laboratories in Europe and the USA to the present day. These studies illustrate the remarkable diversity of phenomena and symmetries exhibited by nuclei in the angular momentum–excitation energy plane that continue to surprise and fascinate scientists. (invited comment)

  16. Relativistic particle in a box: Klein-Gordon versus Dirac equations

    Science.gov (United States)

    Alberto, Pedro; Das, Saurya; Vagenas, Elias C.

    2018-03-01

    The problem of a particle in a box is probably the simplest problem in quantum mechanics which allows for significant insight into the nature of quantum systems and thus is a cornerstone in the teaching of quantum mechanics. In relativistic quantum mechanics this problem allows also to highlight the implications of special relativity for quantum physics, namely the effect that spin has on the quantised energy spectra. To illustrate this point, we solve the problem of a spin zero relativistic particle in a one- and three-dimensional box using the Klein-Gordon equation in the Feshbach-Villars formalism. We compare the solutions and the energy spectra obtained with the corresponding ones from the Dirac equation for a spin one-half relativistic particle. We note the similarities and differences, in particular the spin effects in the relativistic energy spectrum. As expected, the non-relativistic limit is the same for both kinds of particles, since, for a particle in a box, the spin contribution to the energy is a relativistic effect.
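
    For the one-dimensional infinite well the spin-zero spectrum discussed in such treatments takes a closed form; the LaTeX fragment below records it (for the Dirac particle, spin and the boundary conditions modify this to a transcendental quantisation condition).

        % Spin-zero (Klein-Gordon) particle in a box of width L:
        % the boundary conditions quantise the momentum, k_n = n\pi/L,
        % and the relativistic dispersion relation then gives
        E_n = \sqrt{\left(\frac{n\pi\hbar c}{L}\right)^{2} + m^{2}c^{4}},
        \qquad n = 1, 2, 3, \dots
        % Non-relativistic limit (n\pi\hbar \ll mcL):
        E_n \approx mc^{2} + \frac{n^{2}\pi^{2}\hbar^{2}}{2mL^{2}}.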

  17. A magic mirror - quantum applications of the optical beam splitter

    International Nuclear Information System (INIS)

    Bachor, H.A.

    2000-01-01

    Mirrors are some of the simplest optical components, and their use in optical imaging is well known. They have many other applications, such as the control of laser beams or in optical communication. Indeed they can be found in most optical instruments. It is the partially reflecting mirror, better known as the beam splitter, that is of particular interest to us. It lies at the centre of a number of recent scientific discoveries and technical developments that go beyond the limits of classical optics and make use of the quantum properties of light. In this area Australian and New Zealand researchers have made major contributions in the last two decades. In this paper, the author discusses how a mirror modifies the light itself and the information that can be sent by a beam, and summarise the recent scientific achievements. It combines the idea of photons, where the idea of quantisation is immediately obvious, with the idea of modulating continuous laser beams, which is practical and similar to the engineering description of radio communication

  18. The Schrödinger–Newton equation and its foundations

    International Nuclear Information System (INIS)

    Bahrami, Mohammad; Großardt, André; Donadi, Sandro; Bassi, Angelo

    2014-01-01

    The necessity of quantising the gravitational field is still subject to an open debate. In this paper we compare the approach of quantum gravity, with that of a fundamentally semi-classical theory of gravity, in the weak-field non-relativistic limit. We show that, while in the former case the Schrödinger equation stays linear, in the latter case one ends up with the so-called Schrödinger–Newton equation, which involves a nonlinear, non-local gravitational contribution. We further discuss that the Schrödinger–Newton equation does not describe the collapse of the wave-function, although it was initially proposed for exactly this purpose. Together with the standard collapse postulate, fundamentally semi-classical gravity gives rise to superluminal signalling. A consistent fundamentally semi-classical theory of gravity can therefore only be achieved together with a suitable prescription of the wave-function collapse. We further discuss, how collapse models avoid such superluminal signalling and compare the nonlinearities appearing in these models with those in the Schrödinger–Newton equation. (paper)
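
    For reference, the one-particle Schrödinger–Newton equation discussed here has the standard form (in LaTeX):

        % Schroedinger equation with a nonlinear, non-local gravitational
        % self-interaction sourced by the probability density |\psi|^2:
        i\hbar\,\frac{\partial\psi(\mathbf{r},t)}{\partial t}
          = \left[-\frac{\hbar^{2}}{2m}\,\nabla^{2}
            - Gm^{2}\int\frac{|\psi(\mathbf{r}',t)|^{2}}{|\mathbf{r}-\mathbf{r}'|}\,
              \mathrm{d}^{3}r'\right]\psi(\mathbf{r},t).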

  19. High resolution gamma-ray spectroscopy and the fascinating angular momentum realm of the atomic nucleus

    Science.gov (United States)

    Riley, M. A.; Simpson, J.; Paul, E. S.

    2016-12-01

    In 1974 Aage Bohr and Ben Mottelson predicted the different ‘phases’ that may be expected in deformed nuclei as a function of increasing angular momentum and excitation energy all the way up to the fission limit. While admitting their picture was highly conjectural they confidently stated ‘...with the ingenious experimental approaches that are being developed, we may look forward with excitement to the detailed spectroscopic studies that will illuminate the behaviour of the spinning quantised nucleus’. High resolution gamma-ray spectroscopy has indeed been a major tool in studying the structure of atomic nuclei and has witnessed numerous significant advances over the last four decades. This article will select highlights from investigations at the Niels Bohr Institute, Denmark, and Daresbury Laboratory, UK, in the late 1970s and early 1980s, some of which have continued at other national laboratories in Europe and the USA to the present day. These studies illustrate the remarkable diversity of phenomena and symmetries exhibited by nuclei in the angular momentum-excitation energy plane that continue to surprise and fascinate scientists.

  20. Sand Fly Fauna (Diptera, Psychodidae, Phlebotominae) in Different Leishmaniasis-Endemic Areas of Ecuador, Surveyed Using a Newly Named Mini-Shannon Trap

    Science.gov (United States)

    Hashiguchi, Kazue; Velez N., Lenin; Kato, Hirotomo; Criollo F., Hipatia; Romero A., Daniel; Gomez L., Eduardo; Martini R., Luiggi; Zambrano C., Flavio; Calvopina H., Manuel; Caceres G., Abraham; Hashiguchi, Yoshihisa

    2014-01-01

    To study the sand fly fauna, surveys were performed at four different leishmaniasis-endemic sites in Ecuador from February 2013 to April 2014. A modified and simplified version of the conventional Shannon trap was named “mini-Shannon trap” and put to multiple uses at the different study sites in limited, forested and narrow spaces. The mini-Shannon, CDC light trap and protected human landing method were employed for sand fly collection. The species identification of sand flies was performed mainly based on the morphology of spermathecae and cibarium, after dissection of fresh samples. In this study, therefore, only female samples were used for analysis. A total of 1,480 female sand flies belonging to 25 Lutzomyia species were collected. The number of female sand flies collected was 417 (28.2%) using the mini-Shannon trap, 259 (17.5%) using the CDC light trap and 804 (54.3%) by human landing. The total number of sand flies per trap collected by the different methods was markedly affected by the study site, probably because of the various composition of species at each locality. Furthermore, as an additional study, the attraction of sand flies to mini-Shannon traps powered with LED white-light and LED black-light was investigated preliminarily, together with the CDC light trap and human landing. As a result, a total of 426 sand flies of nine Lutzomyia species, including seven man-biting and two non-biting species, were collected during three capture trials in May and June 2014 in an area endemic for leishmaniasis (La Ventura). The black-light proved relatively superior to the white-light with regard to capture numbers, but no significant statistical difference was observed between the two traps. PMID:25589880

  1. COLLAPSE AND FRAGMENTATION OF MAGNETIC MOLECULAR CLOUD CORES WITH THE ENZO AMR MHD CODE. I. UNIFORM DENSITY SPHERES

    International Nuclear Information System (INIS)

    Boss, Alan P.; Keiser, Sandra A.

    2013-01-01

    Magnetic fields are important contributors to the dynamics of collapsing molecular cloud cores, and can have a major effect on whether collapse results in a single protostar or fragmentation into a binary or multiple protostar system. New models are presented of the collapse of magnetic cloud cores using the adaptive mesh refinement code Enzo2.0. The code was used to calculate the ideal magnetohydrodynamics (MHD) of initially spherical clouds with uniform density and rotation, seeded with density perturbations, i.e., the Boss and Bodenheimer standard isothermal test case for three-dimensional (3D) hydrodynamics codes. After first verifying that Enzo reproduces the binary fragmentation expected for the non-magnetic test case, a large set of models was computed with varied initial magnetic field strengths and directions with respect to the cloud core axis of rotation (parallel or perpendicular), density perturbation amplitudes, and equations of state. Three significantly different outcomes resulted: (1) contraction without sustained collapse, forming a denser cloud core; (2) collapse to form a single protostar with significant spiral arms; and (3) collapse and fragmentation into binary or multiple protostar systems, with multiple spiral arms. Comparisons are also made with previous MHD calculations of similar clouds with a barotropic equation of state. These results for the collapse of initially uniform density spheres illustrate the central importance of both magnetic field direction and field strength for determining the outcome of dynamic protostellar collapse.

  2. Joint source/channel coding of scalable video over noisy channels

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, G.; Zakhor, A. [Department of Electrical Engineering and Computer Sciences University of California Berkeley, California94720 (United States)

    1997-01-01

    We propose an optimal bit allocation strategy for a joint source/channel video codec over noisy channel when the channel state is assumed to be known. Our approach is to partition source and channel coding bits in such a way that the expected distortion is minimized. The particular source coding algorithm we use is rate scalable and is based on 3D subband coding with multi-rate quantization. We show that using this strategy, transmission of video over very noisy channels still renders acceptable visual quality, and outperforms schemes that use equal error protection only. The flexibility of the algorithm also permits the bit allocation to be selected optimally when the channel state is in the form of a probability distribution instead of a deterministic state. © 1997 American Institute of Physics.
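
    The bit-partitioning idea can be caricatured with a toy model (not the paper's codec or channel model): an exponential rate-distortion curve for the source, an exponentially decaying failure probability for the channel code, and a brute-force search over the split.

        import numpy as np

        R_total = 32                      # total bit budget per sample
        sigma2 = 1.0                      # source variance
        p0 = 0.5                          # loss probability with no protection

        def expected_distortion(r_src):
            r_chan = R_total - r_src
            d_src = sigma2 * 2.0 ** (-2 * r_src)   # toy rate-distortion curve
            p_fail = p0 * 2.0 ** (-r_chan)         # toy channel-failure probability
            return (1 - p_fail) * d_src + p_fail * sigma2

        best = min(range(R_total + 1), key=expected_distortion)
        print(f"best split: {best} source bits + {R_total - best} channel bits, "
              f"E[D] = {expected_distortion(best):.2e}")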

  3. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Fossorier Marc

    2007-01-01

    This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  4. Remembering Max Boisot

    DEFF Research Database (Denmark)

    Sanchez, Ron

    2013-01-01

    This chapter offers some reflections on Max Boisot and his extraordinary intellect drawn from our 15 years of exchanging and crafting ideas together. I first comment on the process of working with Max, and then suggest some of the remarkable qualities of thought that I believe distinguished Max's … these qualities of thought are also reflected in Max's individual work and especially in his crowning achievement, the Information-Space Model.

  5. The non-uniformity correction factor for the cylindrical ionization chambers in dosimetry of an HDR 192Ir brachytherapy source

    International Nuclear Information System (INIS)

    Majumdar, Bishnu; Patel, Narayan Prasad; Vijayan, V.

    2006-01-01

    The aim of this study is to derive the non-uniformity correction factor for two therapy ionization chambers for dose measurement near a brachytherapy source. Two ionization chambers of 0.6 cc and 0.1 cc volume were used. The measurement in air was performed for distances between 0.8 cm and 20 cm from the source in a specially designed measurement jig. The non-uniformity correction factors were derived from the measured values. The experimentally derived factors were compared with the theoretically calculated non-uniformity correction factors and a close agreement was found between these two studies. The experimentally derived non-uniformity correction factor supports the anisotropic theory. (author)

  6. Max-Plus Stochastic Control and Risk-Sensitivity

    International Nuclear Information System (INIS)

    Fleming, Wendell H.; Kaise, Hidehiro; Sheu, Shuenn-Jyi

    2010-01-01

    In the Maslov idempotent probability calculus, expectations of random variables are defined so as to be linear with respect to max-plus addition and scalar multiplication. This paper considers control problems in which the objective is to minimize the max-plus expectation of some max-plus additive running cost. Such problems arise naturally as limits of some types of risk sensitive stochastic control problems. The value function is a viscosity solution to a quasivariational inequality (QVI) of dynamic programming. Equivalence of this QVI to a nonlinear parabolic PDE with discontinuous Hamiltonian is used to prove a comparison theorem for viscosity sub- and super-solutions. An example from mathematical finance is given, and an application in nonlinear H-infinity control is sketched.

  7. AuroraMAX!

    Science.gov (United States)

    Donovan, E.; Spanswick, E. L.; Chicoine, R.; Pugsley, J.; Langlois, P.

    2011-12-01

    AuroraMAX is a public outreach and education initiative that brings auroral images to the public in real time. AuroraMAX utilizes an observing station located just outside Yellowknife, Canada. The station houses a digital All-Sky Imager (ASI) that collects full-colour images of the night sky every six seconds. These images are then transmitted via satellite internet to our web server, where they are made instantly available to the public. Over the last two years this program has rapidly become one of the most successful outreach programs in the history of Space Science in Canada, with hundreds of thousands of distinct visitors to the CSA AuroraMAX website, thousands of followers on social media, and hundreds of newspaper, magazine, radio, and television spots. Over the next few years, the project will expand to include a high-resolution SLR delivering real-time auroral images (also from Yellowknife), as well as a program where astronauts on the ISS will take pictures of the aurora with a handheld SLR. The objectives of AuroraMAX are public outreach and education. The ASI design, operation, and software were based on infrastructure that was developed for the highly successful ASI component of the NASA THEMIS mission as well as the Canadian Space Agency (CSA) Canadian GeoSpace Monitoring (CGSM) program. So from an education and public outreach perspective, AuroraMAX is a single camera operating in the Canadian north. On the other hand, AuroraMAX is one of nearly 40 All-Sky Imagers that are operating across North America. The AuroraMAX camera produces data that is seamlessly integrated with the CGSM ASI data, and made widely available to the Space Science community through open-access web and FTP sites. One of our objectives in the next few years is to incorporate some of the data from the THEMIS and CGSM imagers into the AuroraMAX system, to maximize viewing opportunities and generate more real-time data for public outreach. This is an exemplar of a program that

  8. High-throughput GPU-based LDPC decoding

    Science.gov (United States)

    Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin

    2010-08-01

    Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance and flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
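
    The decoding loop that such platforms accelerate can be illustrated, at a much smaller scale, with a hard-decision bit-flipping decoder on a toy parity-check matrix (real decoders run the soft sum-product algorithm on large sparse matrices):

        import numpy as np

        H = np.array([[1, 1, 0, 1, 0, 0],          # toy parity-check matrix
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 1, 0, 0, 1]], dtype=int)

        def decode(received, max_iters=10):
            word = received.copy()
            for _ in range(max_iters):
                syndrome = H @ word % 2
                if not syndrome.any():
                    return word                     # all parity checks satisfied
                # flip the bit involved in the most unsatisfied checks
                word[np.argmax(H.T @ syndrome)] ^= 1
            return word

        codeword = np.zeros(6, dtype=int)           # the all-zero word is valid
        received = codeword.copy()
        received[2] ^= 1                            # inject a single bit error
        print("decoded:", decode(received))         # -> [0 0 0 0 0 0]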

  9. Using MaxCompiler for the high level synthesis of trigger algorithms

    International Nuclear Information System (INIS)

    Summers, S.; Rose, A.; Sanders, P.

    2017-01-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  10. Using MaxCompiler for the high level synthesis of trigger algorithms

    Science.gov (United States)

    Summers, S.; Rose, A.; Sanders, P.

    2017-02-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  11. Inflammatory Modulation Effect of Glycopeptide from Ganoderma capense (Lloyd) Teng

    Directory of Open Access Journals (Sweden)

    Yan Zhou

    2014-01-01

    Glycopeptide from Ganoderma capense (Lloyd) Teng (GCGP) injection is widely used in various immune disorders, but little is known about the molecular mechanisms by which GCGP could interfere with immune cell function. In the present study, we have found that GCGP had inflammatory modulation effects on macrophage cells, maintaining NO production and iNOS expression at the normal level. Furthermore, western blot analysis showed that the underlying mechanism of the immunomodulatory effect of GCGP involved NF-κB p65 translation, IκB phosphorylation, and degradation; NF-κB inhibitor assays also confirmed the results. In addition, a competition study showed that GCGP could inhibit LPS from binding to macrophage cells. Our data indicate that GCGP, which may share the same receptor(s) expressed by macrophage cells with LPS, exerted an immunomodulatory effect in an NF-κB-dependent signaling pathway in macrophages.

  12. Inflammatory Modulation Effect of Glycopeptide from Ganoderma capense (Lloyd) Teng

    Science.gov (United States)

    Zhou, Yan; Chen, Song; Yao, Wenbing; Gao, Xiangdong

    2014-01-01

    Glycopeptide from Ganoderma capense (Lloyd) Teng (GCGP) injection is widely used in various immune disorders, but little is known about the molecular mechanisms by which GCGP could interfere with immune cell function. In the present study, we have found that GCGP had inflammatory modulation effects on macrophage cells, maintaining NO production and iNOS expression at the normal level. Furthermore, western blot analysis showed that the underlying mechanism of the immunomodulatory effect of GCGP involved NF-κB p65 translation, IκB phosphorylation, and degradation; NF-κB inhibitor assays also confirmed the results. In addition, a competition study showed that GCGP could inhibit LPS from binding to macrophage cells. Our data indicate that GCGP, which may share the same receptor(s) expressed by macrophage cells with LPS, exerted an immunomodulatory effect in an NF-κB-dependent signaling pathway in macrophages. PMID:24966469

  13. Sensitivity analysis and benchmarking of the BLT low-level waste source term code

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1993-07-01

    To evaluate the source term for low-level waste disposal, a comprehensive model has been developed and incorporated into a computer code called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements have also been added to the Leach model of the code. This report consists of two different studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors concluded that the new version represents a significant improvement and is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99 and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are observed to be in general agreement with the experimental data, within acceptable limits of uncertainty

  14. Steganography on quantum pixel images using Shannon entropy

    Science.gov (United States)

    Laurel, Carlos Ortega; Dong, Shi-Hai; Cruz-Irisson, M.

    2016-07-01

    This paper presents a steganographic algorithm based on the least significant bit (LSB) from the most significant bit information (MSBI) and the equivalence of a bit-pixel image to a quantum pixel image, which permits information to be embedded secretly in quantum pixel images for secure transmission through insecure channels. The algorithm offers higher security since it exploits the Shannon entropy for an image.
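
    The classical-image half of the scheme, plain LSB embedding, takes only a few lines; the quantum-pixel mapping and the entropy-based analysis of the paper are not reproduced here.

        import numpy as np

        def embed(pixels, bits):
            """Overwrite the least significant bits of the first len(bits) pixels."""
            stego = pixels.copy()
            stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
            return stego

        def extract(pixels, n_bits):
            return pixels[:n_bits] & 1

        rng = np.random.default_rng(5)
        cover = rng.integers(0, 256, size=64, dtype=np.uint8)   # 8-bit "image"
        message = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

        stego = embed(cover, message)
        assert np.array_equal(extract(stego, message.size), message)
        print("max pixel change:",
              int(np.max(np.abs(stego.astype(int) - cover.astype(int)))))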

  15. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Marc Fossorier

    2007-01-01

    This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  16. Comparison of DT neutron production codes MCUNED, ENEA-JSI source subroutine and DDT

    Energy Technology Data Exchange (ETDEWEB)

    Čufar, Aljaž, E-mail: aljaz.cufar@ijs.si [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Lengar, Igor; Kodeli, Ivan [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Milocco, Alberto [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Sauvan, Patrick [Departamento de Ingeniería Energética, E.T.S. Ingenieros Industriales, UNED, C/Juan del Rosal 12, 28040 Madrid (Spain); Conroy, Sean [VR Association, Uppsala University, Department of Physics and Astronomy, PO Box 516, SE-75120 Uppsala (Sweden); Snoj, Luka [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)

    2016-11-01

    Highlights: • Results of three codes capable of simulating accelerator-based DT neutron generators were compared on a simple model where only a thin target made of a mixture of titanium and tritium is present. Two typical deuteron beam energies, 100 keV and 250 keV, were used in the comparison. • Comparisons of the angular dependence of the total neutron flux and spectrum as well as the neutron spectrum of all the neutrons emitted from the target show general agreement of the results but also some noticeable differences. • A comparison of figures of merit of the calculations using different codes showed that the computational time necessary to achieve the same statistical uncertainty can vary by more than 30× when different codes for the simulation of the DT neutron generator are used. - Abstract: As the DT fusion reaction produces neutrons with energies significantly higher than in fission reactors, special fusion-relevant benchmark experiments are often performed using DT neutron generators. However, commonly used Monte Carlo particle transport codes such as MCNP or TRIPOLI cannot be directly used to analyze these experiments since they do not have the capabilities to model the production of DT neutrons. Three of the available approaches to model the DT neutron generator source are the MCUNED code, the ENEA-JSI DT source subroutine and the DDT code. The MCUNED code is an extension of the well-established and validated MCNPX Monte Carlo code. The ENEA-JSI source subroutine was originally prepared for the modelling of the FNG experiments using different versions of the MCNP code (−4, −5, −X) and was later extended to allow the modelling of both DT and DD neutron sources. The DDT code prepares the DT source definition file (SDEF card in MCNP) which can then be used in different versions of the MCNP code. In the paper the methods for the simulation of the DT neutron production used in the codes are briefly described and compared for the case of a

  17. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Full Text Available Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
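    The last stage of this chain, reconstruction by a pseudoinverse receiver, can be illustrated in a few lines. The sketch below uses a random redundant analysis matrix in place of an actual OFB and a plain uniform quantizer; all dimensions and the step size are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        K, N = 8, 16                      # signal dimension, frame coefficients (N > K)
        F = rng.standard_normal((N, K))   # stand-in for an oversampled analysis operator

        x = rng.standard_normal(K)        # message signal
        step = 0.25
        y_q = step * np.round(F @ x / step)  # frame expansion + scalar quantization

        # Pseudoinverse (least-squares) receiver.
        x_hat = np.linalg.pinv(F) @ y_q
        print(f"reconstruction MSE: {np.mean((x - x_hat) ** 2):.5f}")

    Because the expansion is redundant, the least-squares solve averages out part of the quantization noise; the syndrome-based error localization described in the abstract additionally corrects impulse errors before this step.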

  18. Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic Sensor Networks

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider the problem of remote vector Gaussian source coding for a wireless acoustic sensor network. Each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information. The node’s measurement...... and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network. We show that for this distributed source coding scenario, one can encode a so-called conditional sufficient statistic of the sources instead of jointly...

  19. Inferring community properties of benthic macroinvertebrates in streams using Shannon index and exergy

    Science.gov (United States)

    Nguyen, Tuyen Van; Cho, Woon-Seok; Kim, Hungsoo; Jung, Il Hyo; Kim, YongKuk; Chon, Tae-Soo

    2014-03-01

    Definition of ecological integrity based on community analysis has long been a critical issue in risk assessment for sustainable ecosystem management. In this work, two indices (i.e., the Shannon index and exergy) were selected for the analysis of community properties of the benthic macroinvertebrate community in streams in Korea. For this purpose, the means and variances of both indices were analyzed. The results revealed additional structural and functional properties of the communities in response to environmental variability and anthropogenic disturbance. The combination of the two indices (four parameters in total) was feasible for identifying disturbance agents (e.g., industrial pollution or organic pollution) and specifying the states of communities. The four aforementioned parameters (the means and variances of the Shannon index and exergy) were further used as input data in a self-organizing map for the characterization of water quality. Our results suggest that the Shannon index and exergy in combination could be utilized as a suitable reference system and would be an efficient tool for assessing the health of aquatic ecosystems exposed to environmental disturbances.
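    The Shannon index used here is the standard diversity measure H' = -Σ p_i ln p_i over taxa proportions. A minimal Python sketch, with hypothetical abundance counts:

        import math

        def shannon_index(abundances):
            # H' = -sum(p_i * ln p_i) over the proportions of each taxon.
            total = sum(abundances)
            return -sum((n / total) * math.log(n / total)
                        for n in abundances if n > 0)

        # Hypothetical counts of benthic macroinvertebrate taxa at one site.
        counts = {"Ephemeroptera": 34, "Plecoptera": 12,
                  "Trichoptera": 28, "Chironomidae": 96}
        print(f"H' = {shannon_index(counts.values()):.3f}")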

  20. Shannon Entropy in Atoms: A Test for the Assessment of Density Functionals in Kohn-Sham Theory

    Directory of Open Access Journals (Sweden)

    Claudio Amovilli

    2018-05-01

    Full Text Available The electron density is used to compute the Shannon entropy. The deviation of this quantity from its Hartree–Fock (HF) value has been observed to be related to the correlation energy. Thus, the Shannon entropy is here proposed as a valid quantity to assess the quality of an energy density functional developed within Kohn–Sham theory. To this purpose, results from eight different functionals, representative of Jacob's ladder, are compared with accurate results obtained from diffusion quantum Monte Carlo (DMC) computations. For three series of atomic ions, our results show that the revTPSS and the PBE0 functionals are the best, whereas those based on the local density approximation give the largest discrepancy from the DMC Shannon entropy.
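    The quantity in question is the position-space entropy S = -∫ ρ(r) ln ρ(r) d³r. As a numerical sanity check (not from the paper), the sketch below evaluates it for the hydrogen 1s density ρ(r) = e^(-2r)/π in atomic units, where the analytic value is S = 3 + ln π ≈ 4.1447:

        import numpy as np

        r = np.linspace(1e-6, 30.0, 200_000)   # radial grid (bohr)
        dr = r[1] - r[0]
        rho = np.exp(-2.0 * r) / np.pi         # hydrogen 1s density

        # S = -∫ rho ln(rho) 4*pi*r^2 dr for a spherical density.
        integrand = -rho * np.log(rho) * 4.0 * np.pi * r**2
        S = np.sum(integrand) * dr
        print(f"S = {S:.4f}  (exact 3 + ln(pi) = {3 + np.log(np.pi):.4f})")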

  1. Integrated Lloyd's mirror on planar waveguide facet as a spectrometer.

    Science.gov (United States)

    Morand, Alain; Benech, Pierre; Gri, Martine

    2017-12-10

    A low-cost and simple Fourier transform spectrometer based on the Lloyd's mirror configuration is proposed in order to obtain a very stable interferogram. A planar waveguide coupled to a fiber injection is used to spatially disperse the optical beam. A second beam, superposed on the first, is obtained by total reflection of the incident beam on a vertical glass face integrated in the chip by dicing with a specific circular precision saw. The interferogram at the waveguide output is imaged on a near-infrared camera with an objective lens. The contrast and the fringe period thus depend on the fiber type and position, and can be optimized to the pixel size and length of the camera. A spectral resolution close to λ/Δλ = 80 is reached with a camera with 320 pixels of 25 μm width in a wavelength range from the O to L bands.

  2. Development of chemical equilibrium analysis code 'CHEEQ'

    International Nuclear Information System (INIS)

    Nagai, Shuichiro

    2006-08-01

    The 'CHEEQ' code, which calculates the partial pressures and masses of a system consisting of ideal gases and pure condensed-phase compounds, was developed. The characteristics of the 'CHEEQ' code are as follows. All chemical equilibrium equations are described by formation reactions from the mono-atomic gases in order to simplify the code structure and input preparation. The chemical equilibrium conditions Σν_i μ_i = 0 for gaseous compounds and precipitated condensed-phase compounds, and Σν_i μ_i > 0 for non-precipitated condensed-phase compounds, were applied, where ν_i and μ_i are the stoichiometric coefficient and the chemical potential of component i. A virtual solid model was introduced to perform calculations under a constant-partial-pressure condition. 'CHEEQ' consists of the following three parts: (1) the analysis code zc132.f, (2) the thermodynamic database zmdb01, and (3) the input data file zindb. The 'CHEEQ' code can treat systems consisting of up to 20 elements, 100 condensed-phase compounds and 200 gaseous compounds. The thermodynamic database zmdb01 contains about 1000 elements and compounds, roughly 200 of which are actinide elements and their compounds. This report describes the basic equations, the outline of the solution procedure and instructions for preparing the input data and evaluating the calculation results. (author)
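    The formation-reaction formulation can be illustrated with the simplest possible system, H2 formed from mono-atomic hydrogen (2 H → H2). A minimal Python sketch, with an illustrative formation constant and atom balance (not CHEEQ's actual input format or data):

        import math

        Kf = 50.0   # assumed formation constant: Kf = p_H2 / p_H**2 (1/bar)
        A = 1.0     # hydrogen-atom balance: p_H + 2*p_H2 = A (bar)

        # Substituting p_H2 = Kf*p_H**2 into the balance gives a quadratic,
        # 2*Kf*p_H**2 + p_H - A = 0, whose positive root is:
        p_H = (-1.0 + math.sqrt(1.0 + 8.0 * Kf * A)) / (4.0 * Kf)
        p_H2 = Kf * p_H ** 2

        print(f"p_H  = {p_H:.4f} bar, p_H2 = {p_H2:.4f} bar")
        print(f"balance check: {p_H + 2 * p_H2:.4f} bar")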

  3. Test of Effective Solid Angle code for the efficiency calculation of volume source

    Energy Technology Data Exchange (ETDEWEB)

    Kang, M. Y.; Kim, J. H.; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of); Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    It is hard to determine a full-energy (FE) absorption peak efficiency curve for an arbitrary volume source by experiment. That is why simulation and semi-empirical methods have been preferred so far, and much work has progressed in various ways. Moens et al. introduced the concept of the effective solid angle by considering the attenuation of γ-rays in the source, the media and the detector. This concept is based on a semi-empirical method. An Effective Solid Angle code (ESA code) has been developed over several years by the Applied Nuclear Physics Group at Seoul National University. The ESA code converts an experimental FE efficiency curve determined using a standard point source into one for a volume source. To test the performance of the ESA code, we measured point standard sources and voluminous certified reference material (CRM) γ-ray sources, and compared them with the efficiency curves obtained in this study. The 200–1500 keV energy region is fitted well. NIST X-ray mass attenuation coefficient data are currently used to account for the effect of linear attenuation only. We will use the interaction cross-section data obtained from the XCOM code to separate the contributions of the photoelectric effect, incoherent scattering and coherent scattering in the future. In order to minimize the calculation time and simplify the code, optimization of the algorithm is needed.
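    The geometric core of the effective-solid-angle idea can be sketched numerically. Below, a point source is compared with a cylindrical volume source by Monte Carlo averaging of the on-axis disk solid angle; attenuation is neglected and off-axis points are approximated by their source-to-detector-centre distance, so this is a rough illustration of the transfer factor, not the ESA code's algorithm.

        import numpy as np

        rng = np.random.default_rng(1)

        def disk_solid_angle(d, R):
            # Solid angle of a disk of radius R seen on-axis from distance d.
            return 2.0 * np.pi * (1.0 - d / np.hypot(d, R))

        d, R = 5.0, 3.0                        # cm: source distance, detector radius

        # Cylindrical volume source (radius 2 cm, height 4 cm) above the detector.
        n = 200_000
        rho = 2.0 * np.sqrt(rng.random(n))     # radial positions of sampled points
        z = d + 4.0 * rng.random(n)            # axial distances of sampled points
        dist = np.hypot(rho, z)                # distance to the detector centre
        omega_vol = np.mean(disk_solid_angle(dist, R))

        omega_pt = disk_solid_angle(d, R)
        print(f"point: {omega_pt:.4f} sr, volume: {omega_vol:.4f} sr, "
              f"transfer factor: {omega_vol / omega_pt:.3f}")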

  4. Non-uniform dwell times in line source high dose rate brachytherapy: physical and radiobiological considerations

    International Nuclear Information System (INIS)

    Jones, B.; Tan, L.T.; Freestone, G.; Bleasdale, C.; Myint, S.; Littler, J.

    1994-01-01

    The ability to vary source dwell times in high dose rate (HDR) brachytherapy allows the use of non-uniform dwell times along a line source. This may have advantages in the radical treatment of tumours, depending on the individual tumour geometry. This study investigates the potential improvements in local tumour control, relative to adjacent normal tissue isoeffects, when intratumour source dwell times are increased along the central portion of a line source (technique A) in radiotherapy schedules which include a relatively small component of HDR brachytherapy. Such a technique is predicted to increase local control for tumours of diameters between 2 cm and 4 cm by up to 11% compared with a technique in which the dwell times along the line source are uniform (technique B). There is no difference in the local control rates of the two techniques when used to treat smaller tumours. Normal tissue doses are also modified by the technique used. Technique A produces higher normal tissue doses at points perpendicular to the centre of the line source, and lower doses at points nearer the ends of the line source, if the prescription point is not in the central plane of the line source. Alternatively, if the dose is prescribed at a point in the central plane of the line source, the doses at all normal tissue points are lower when technique A is used. (author)
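    The geometric effect of centre-weighting can be sketched with a toy point-kernel model in which dose scales as dwell time over distance squared (anisotropy, attenuation and the actual dwell weights are ignored; all numbers are illustrative):

        import numpy as np

        dwell_z = np.linspace(-2.0, 2.0, 9)           # dwell positions (cm)
        uniform = np.ones_like(dwell_z)               # technique B
        central = 1.0 + 1.5 * np.exp(-dwell_z**2)     # technique A (centre-weighted)

        def dose(point, weights):
            r2 = (point[0] - dwell_z) ** 2 + point[1] ** 2
            return np.sum(weights / r2)

        for name, w in (("uniform (B)", uniform), ("centre-weighted (A)", central)):
            w = w / w.sum()                           # same total source strength
            mid = dose((0.0, 1.0), w)                 # 1 cm off the source centre
            end = dose((2.0, 1.0), w)                 # 1 cm off the source end
            print(f"{name:20s} dose mid: {mid:.3f}  dose end: {end:.3f}")

    The printout reproduces the qualitative behaviour described above: technique A raises the dose perpendicular to the source centre and lowers it near the ends.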

  5. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Directory of Open Access Journals (Sweden)

    Pierre Siohan

    2005-05-01

    Full Text Available Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  6. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Science.gov (United States)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.
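    The random-segmentation difficulty both versions of this survey describe is easy to demonstrate: with a variable-length code, one flipped bit desynchronizes every symbol boundary that follows. A minimal Python sketch with a hypothetical prefix code:

        code = {"a": "0", "b": "10", "c": "110", "d": "111"}   # hypothetical VLC
        table = {v: k for k, v in code.items()}

        def vlc_decode(bits):
            out, buf = [], ""
            for b in bits:
                buf += b
                if buf in table:
                    out.append(table[buf])
                    buf = ""
            return "".join(out)

        bits = "".join(code[s] for s in "abacad")
        corrupt = bits[:3] + ("1" if bits[3] == "0" else "0") + bits[4:]

        print(vlc_decode(bits))      # abacad
        print(vlc_decode(corrupt))   # adaad -- boundaries shift after the error

    Soft-decision JSCD estimators recover from exactly this kind of desynchronization by treating the segmentation itself as a hidden state.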

  7. Soft-Decision-Data Reshuffle to Mitigate Pulsed Radio Frequency Interference Impact on Low-Density-Parity-Check Code Performance

    Science.gov (United States)

    Ni, Jianjun David

    2011-01-01

    This presentation briefly discusses a research effort on techniques to mitigate pulsed radio frequency interference (RFI) on a Low-Density Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to a space vehicle which might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is one of the modern forward-error-correction (FEC) codes whose decoding performance approaches the Shannon limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS), and it has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code for the additive white Gaussian noise channel, simulation data and test results show that the performance of this LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance in codeword error rate (CWER) under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor of the LDPC decoding performance appears around CWER = 1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.
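    One simple member of this family of techniques is to de-weight (here, zero) the log-likelihood ratios of samples hit by a known periodic RFI pulse train before decoding. The sketch below illustrates the idea with a rate-1/5 repetition code standing in for the AR4JA LDPC code; the pulse pattern, noise levels and blanking rule are assumptions, not the presentation's actual reshuffle scheme.

        import numpy as np

        rng = np.random.default_rng(2)

        n_bits, rep, sigma = 1000, 5, 0.5
        bits = rng.integers(0, 2, n_bits)
        tx = 1.0 - 2.0 * np.repeat(bits, rep)           # BPSK symbols

        rfi_mask = (np.arange(tx.size) % 7) < 2         # periodic radar-like pulses
        rx = tx + sigma * rng.standard_normal(tx.size)
        rx[rfi_mask] += 3.0 * rng.standard_normal(rfi_mask.sum())

        llr = 2.0 * rx / sigma**2                       # AWGN channel LLRs
        llr_blanked = np.where(rfi_mask, 0.0, llr)      # treat RFI samples as erasures

        def decode(llrs):                               # repetition-code decoder
            return (llrs.reshape(n_bits, rep).sum(axis=1) < 0).astype(int)

        for name, l in (("no mitigation", llr), ("LLR blanking", llr_blanked)):
            print(f"{name:14s} BER = {np.mean(decode(l) != bits):.4f}")

    The pulse period is deliberately not a multiple of the code's span, so no bit loses all of its soft information; real systems use interleaving to guarantee the same property.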

  8. Quantum vacuum energy near a black hole: the Maxwell field

    International Nuclear Information System (INIS)

    Elster, T.

    1984-01-01

    A quantised Maxwell field is considered propagating in the gravitational field of a Schwarzschild black hole. The vector Hartle-Hawking propagator is defined on the Riemannian section of the analytically continued space-time and expanded in terms of four-dimensional vector spherical harmonics. The equations for the radial functions appearing in the expansion are derived for both odd and even parity. Using the expansion of the vector Hartle-Hawking propagator, the point-separated expectation value of the Maxwellian energy-momentum tensor in the Hartle-Hawking vacuum is derived. The renormalised values of radial pressure, tangential pressure and energy density are obtained near the horizon of the black hole. In contrast to the scalar field, the Maxwell field exhibits a positive energy density near the horizon in the Hartle-Hawking vacuum state. (author)

  9. Lecture notes: string theory and zeta-function

    Energy Technology Data Exchange (ETDEWEB)

    Toppan, Francesco [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil). E-mail: toppan@cbpf.br

    2001-11-01

    These lecture notes are based on a revised and LaTeXed version of the Master's thesis defended at ISAS. With the research part omitted, they include a review of the bosonic closed string a la Polyakov and of the one-loop background field method of quantisation defined through the zeta-function. In an appendix some basic features of the Riemann zeta-function are also reviewed. The pedagogical aspects of the material presented here are particularly emphasized. These notes are used, together with Scherk's article in Rev. Mod. Phys. and the first volume of Polchinski's book, for the mini-course on String Theory (16 hours of lectures) held at CBPF. In this course the two-volume Green-Schwarz-Witten book is also used for reference purposes. (author)

  10. AdS3 xw (S3 x S3 x S1) solutions of type IIB string theory

    International Nuclear Information System (INIS)

    Donos, Aristomenis; Gauntlett, Jerome P.; Imperial College, London; Sparks, James

    2008-10-01

    We analyse a recently constructed class of local solutions of type IIB supergravity that consist of a warped product of AdS_3 with a seven-dimensional internal space. In one duality frame the only other nonvanishing fields are the NS three-form and the dilaton. We analyse in detail how these local solutions can be extended to globally well-defined solutions of type IIB string theory, with the internal space having topology S^3 x S^3 x S^1 and with properly quantised three-form flux. We show that many of the dual (0,2) SCFTs are exactly marginal deformations of the (0,2) SCFTs whose holographic duals are warped products of AdS_3 with seven-dimensional manifolds of topology S^3 x S^2 x T^2. (orig.)

  11. Massive scalar field evolution in de Sitter

    Energy Technology Data Exchange (ETDEWEB)

    Markkanen, Tommi [Department of Physics, King’s College London,Strand, London WC2R 2LS (United Kingdom); Rajantie, Arttu [Department of Physics, Imperial College London,London SW7 2AZ (United Kingdom)

    2017-01-30

    The behaviour of a massive, non-interacting and non-minimally coupled quantised scalar field in an expanding de Sitter background is investigated by solving the field evolution for an arbitrary initial state. In this approach there is no need to choose a vacuum in order to provide a definition for particle states, nor to introduce an explicit ultraviolet regularization. We conclude that the expanding de Sitter space is a stable equilibrium configuration under small perturbations of the initial conditions. Depending on the initial state, the energy density can approach its asymptotic value from above or below, the latter of which implies a violation of the weak energy condition. The backreaction of the quantum corrections can therefore lead to a phase of super-acceleration also in the non-interacting massive case.

  12. Search in 8 TeV proton-proton collisions with the MoEDAL monopole-trapping test array

    Science.gov (United States)

    Pinfold, J.; Soluk, R.; Lacarrère, D.; Katre, A.; Mermod, P.; Bendtz, K.; Milstead, D.

    2014-06-01

    The magnetic monopole appears in theories of spontaneous gauge symmetry breaking and its existence would explain the quantisation of electric charge. MoEDAL is the latest approved LHC experiment, designed to search directly for monopoles produced in high-energy collisions. It has now taken data for the first time. The MoEDAL detectors are based on two complementary techniques: nuclear-track detectors are sensitive to the high-ionisation signature expected from a monopole, and the magnetic monopole trapper (MMT) relies on the stopping and trapping of monopoles inside an aluminium array which is then analysed with a superconducting magnetometer. The first results obtained with the MoEDAL MMT test array deployed in 2012 are presented. This experiment probes monopoles carrying a multiple of the fundamental unit magnetic charge for the first time at the LHC.

  13. Coded aperture imaging with uniformly redundant arrays

    International Nuclear Information System (INIS)

    Fenimore, E.E.; Cannon, T.M.

    1980-01-01

    A system is described which uses uniformly redundant arrays to image non-focusable radiation. The array is used in conjunction with a balanced correlation technique to provide a system with no artifacts so that virtually limitless signal-to-noise ratio is obtained with high transmission characteristics. The array is mosaicked to reduce required detector size over conventional array detectors. 15 claims
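    The artifact-free property rests on the fact that a URA's balanced correlation is two-valued: a peak at zero shift and constant sidelobes everywhere else. A one-dimensional Python sketch using quadratic residues modulo a prime p ≡ 3 (mod 4) — a cyclic-difference-set analogue of the mosaicked 2-D arrays described here, chosen for brevity:

        import numpy as np

        p = 11
        qr = {(i * i) % p for i in range(1, p)}
        A = np.array([1 if i in qr else 0 for i in range(p)])  # aperture (1 = open)
        G = 2 * A - 1                                          # balanced decoding array

        # System response: circular correlation of A with G.
        resp = np.array([np.sum(A * np.roll(G, -s)) for s in range(p)])
        print(resp)          # peak (p-1)/2 at zero shift, flat -1 sidelobes

        # Imaging: the detector records the object convolved with the aperture.
        obj = np.zeros(p); obj[2], obj[7] = 4.0, 1.0
        rec = np.array([obj @ A[(j - np.arange(p)) % p] for j in range(p)])
        dec = np.array([np.sum(rec * np.roll(G, m)) for m in range(p)])
        print(dec)           # ((p+1)/2)*obj - obj.sum(): object on a flat pedestal

    The decoded image is the object plus a uniform pedestal, with no position-dependent artifacts — the property claimed for the balanced correlation technique.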

  14. Heat generation and heating limits for the IRUS LLRW disposal facility

    International Nuclear Information System (INIS)

    Donders, R.E.; Caron, F.

    1995-10-01

    Heat generation from radioactive decay and chemical degradation must be considered when implementing low-level radioactive waste (LLRW) disposal. This is particularly important when considering the management of spent radioisotope sources. Heating considerations and temperature calculations for the proposed IRUS (Intrusion Resistant Underground Structure) near-surface disposal facility are presented. Heat transfer calculations were performed using a finite element code with realistic but somewhat conservative heat transfer parameters and environmental boundary conditions. The softening temperature of the bitumen waste-form (38 deg C) was found to be the factor that limits the heat generation rate in the facility. This limits the IRUS heat rate, assuming a uniform source term, to 0.34 W/m3. If a reduced general heat limit is considered, then some higher-heat packages can be accepted with restrictions placed on their location within the facility. For most LLRW, heat generation from radioactive decay and degradation is a small fraction of the IRUS heating limits. However, heating restrictions will impact the disposal of higher-activity radioactive sources. High-activity 60Co sources will require decay-storage periods of about 70 years, and some 137Cs sources will need to be disposed of in facilities designed for higher-heat waste. (author). 21 refs., 8 tabs., 2 figs

  15. Max Aub, crítico e historiador literario

    Directory of Open Access Journals (Sweden)

    Francisco Caudet

    2002-11-01

    Full Text Available On the basis of similarities and dissimilarities in the consideration of authors dealt with in Discurso de la novela española contemporánea, Francisco Caudet points out Max Aub's poetics of realism stated in both his critical studies and fiction. This essay shows that Aub's contribution to the study of Mexican and Spanish literature is outstanding, not only because of his socio-historical approach, but also because of a specific perspective. This does not mean writing a "history" of literature but rather connecting creative processes. Committed during his youth to the avant-garde, Max Aub shifts after the Civil War to a type of new realistic writing. The author highlights Max Aub's activity as a critic discussing and commenting his sources. Aub's critical essays cannot be detached from his personal ideas about literary theory and practice.

  16. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Science.gov (United States)

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…

  17. Eigensolutions, Shannon entropy and information energy for modified Tietz-Hua potential

    Science.gov (United States)

    Onate, C. A.; Onyeaju, M. C.; Ituen, E. E.; Ikot, A. N.; Ebomwonyi, O.; Okoro, J. O.; Dopamu, K. O.

    2018-04-01

    The Tietz-Hua potential is modified by adding a term of the form $D_e \left( \frac{C_h - 1}{1 - C_h e^{-b_h (r - r_e)}} \right) b\, e^{-b_h (r - r_e)}$ to the Tietz-Hua potential model, since a potential of this type describes the vibrational energy levels of diatomic molecules very well. The energy eigenvalues and the corresponding eigenfunctions are obtained explicitly using the parametric Nikiforov-Uvarov method. By setting the potential parameter b = 0, the modified Tietz-Hua potential reduces to the Tietz-Hua potential. To show further applications of our work, we have computed the Shannon entropy and information energy under the modified Tietz-Hua potential. The computation of the Shannon entropy and information energy extends the work of Falaye et al., who computed only the Fisher information under the Tietz-Hua potential.

  18. Design and evaluation of an imaging spectrophotometer incorporating a uniform light source.

    Science.gov (United States)

    Noble, S D; Brown, R B; Crowe, T G

    2012-03-01

    Accounting for light that is diffusely scattered from a surface is one of the practical challenges in reflectance measurement. Integrating spheres are commonly used for this purpose in point measurements of reflectance and transmittance. This solution is not directly applicable to a spectral imaging application for which diffuse reflectance measurements are desired. In this paper, an imaging spectrophotometer design is presented that employs a uniform light source to provide diffuse illumination. This creates the inverse measurement geometry to the directional illumination/diffuse reflectance mode typically used for point measurements. The final system had a spectral range between 400 and 1000 nm with a 5.2 nm resolution, a field of view of approximately 0.5 m by 0.5 m, and millimeter spatial resolution. Testing results indicate illumination uniformity typically exceeding 95% and reflectance precision better than 1.7%.

  19. Determination of the axial thermal neutron flux non-uniform factor in the MNSR inner irradiation capsule

    International Nuclear Information System (INIS)

    Khattab, K.; Ghazi, N.; Omar, H.

    2007-01-01

    A 3-D neutronic model, using the WIMSD4 and CITATION codes, for the Syrian Miniature Neutron Source Reactor (MNSR) is used to calculate the axial thermal neutron flux non-uniform factor in the inner irradiation capsule. The calculated result is 4%. A copper wire is used to measure the axial thermal neutron flux non-uniform factor in the inner irradiation capsule for comparison with the calculated result. The measured result is 5%. Good agreement between the measured and calculated results is obtained. (author)
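    As a rough illustration (the record does not spell out its definition, so both the cosine flux shape and the (max − mean)/mean definition below are assumptions), a factor of a few per cent arises naturally from the curvature of the axial flux over a short capsule:

        import numpy as np

        H = 3.2                              # assumed extrapolated height (capsule lengths)
        z = np.linspace(-0.5, 0.5, 101)      # axial position along the capsule
        flux = np.cos(np.pi * z / H)         # typical axial thermal flux shape

        factor = (flux.max() - flux.mean()) / flux.mean()
        print(f"axial non-uniform factor: {100 * factor:.1f}%")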

  20. Determination of the axial thermal neutron flux non-uniform factor in the MNSR inner irradiation capsule

    International Nuclear Information System (INIS)

    Khattab, K.; Ghazi, N.; Omar, H.

    2007-01-01

    A 3-D neutronic model, using the WIMSD4 and CITATION codes, for the Syrian Miniature Neutron Source Reactor (MNSR) is used to calculate the axial thermal neutron flux non-uniform factor in the inner irradiation capsule. The calculated result is 4%. A copper wire is used to measure the axial thermal neutron flux non-uniform factor in the inner irradiation capsule to be compared with the calculated result. The measured result is 5%. Good agreement between the measured and calculated results is obtained

  1. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN..., Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important... to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  2. SITE-94. CAMEO: A model of mass-transport limited general corrosion of copper canisters

    International Nuclear Information System (INIS)

    Worgan, K.J.; Apted, M.J.

    1996-12-01

    This report describes the technical basis for the CAMEO code, which models the general, uniform corrosion of a copper canister either by transport of corrodants to the canister, or by transport of corrosion products away from the canister. According to the current Swedish concept for final disposal of spent nuclear fuels, extremely long containment times are achieved by thick (60-100 mm) copper canisters. Each canister is surrounded by a compacted bentonite buffer, located in a saturated, crystalline rock at a depth of around 500 m below ground level. Three diffusive transport-limited cases are identified for general, uniform corrosion of copper: general corrosion rate-limited by diffusive mass transport of sulphide to the canister surface under reducing conditions; general corrosion rate-limited by diffusive mass transport of oxygen to the canister surface under mildly oxidizing conditions; and general corrosion rate-limited by diffusive mass transport of copper chloride away from the canister surface under highly oxidizing conditions. The CAMEO code includes general corrosion models for each of the above three processes. CAMEO is based on the well-tested CALIBRE code, previously developed as a finite-difference, mass-transfer analysis code for the SKI to evaluate long-term radionuclide release and transport in the near-field. A series of scoping calculations for the general, uniform corrosion of a reference copper canister is presented.
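    The scale of such transport-limited corrosion can be sketched from Fick's first law. A minimal Python sketch; all parameter values below are illustrative assumptions, not CAMEO inputs:

        # Diffusion-limited uniform corrosion of copper by sulphide moving
        # through saturated bentonite; all parameter values are illustrative.
        D = 1e-11       # effective diffusivity of HS- in bentonite (m^2/s)
        c = 1e-3        # sulphide concentration at the buffer boundary (mol/m^3)
        L = 0.35        # diffusion path length through the buffer (m)
        M = 63.5e-3     # molar mass of copper (kg/mol)
        rho = 8960.0    # density of copper (kg/m^3)

        flux = D * c / L                 # steady-state Fickian flux (mol/m^2/s)
        rate = 2.0 * flux * M / rho      # 2 Cu per sulphide -> depth rate (m/s)

        print(f"corrosion rate ~ {rate * 3.156e7 * 1e9:.3f} nm/year")

    Rates at the nanometre-per-year scale are why a thick copper shell can plausibly survive for very long times under reducing, transport-limited conditions.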

  3. Beyond NextGen: AutoMax Overview and Update

    Science.gov (United States)

    Kopardekar, Parimal; Alexandrov, Natalia

    2013-01-01

    Main message: there is a national and global need to develop a scalable airspace operations management system that accommodates increased mobility needs, emerging airspace uses, traffic mix and future demand, while remaining affordable and economically viable. There is a sense of urgency: saturation (delays), emerging airspace uses and the case for proactive development. Autonomy is needed for airspace operations to meet future needs, given costs, time-critical decisions, mobility, scalability and the limits of cognitive workload. AutoMax is intended to accommodate these national and global needs: "Auto" stands for automation, autonomy and autonomicity in airspace operations; "Max" for maximizing the performance of the National Airspace System. Interesting challenges and a path forward are outlined.

  4. The European source-term evaluation code ASTEC: status and applications, including CANDU plant applications

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Giordano, P.; Kissane, M.P.; Montanelli, T.; Schwinges, B.; Ganju, S.; Dickson, L.

    2004-01-01

    Research on light-water reactor severe accidents (SA) is still required in a limited number of areas in order to confirm accident-management plans. Thus, 49 European organizations have linked their SA research in a durable way through SARNET (Severe Accident Research and management NETwork), part of the European 6th Framework Programme. One goal of SARNET is to consolidate the integral code ASTEC (Accident Source Term Evaluation Code, developed by IRSN and GRS) as the European reference tool for safety studies; SARNET efforts include extending the application scope to reactor types other than PWR (including VVER) such as BWR and CANDU. ASTEC is used in IRSN's Probabilistic Safety Analysis level 2 of 900 MWe French PWRs. An earlier version of ASTEC's SOPHAEROS module, including improvements by AECL, is being validated as the Canadian Industry Standard Toolset code for FP-transport analysis in the CANDU Heat Transport System. Work with ASTEC has also been performed by Bhabha Atomic Research Centre, Mumbai, on IPHWR containment thermal hydraulics. (author)

  5. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.

  6. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost

  7. Image Coding Based on Address Vector Quantization.

    Science.gov (United States)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent over the channel. Reconstruction of the image is done using a table-lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the Kohonen neural network for codebook design. During the encoding process the correlation of the addresses is considered, and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme but at a bit rate about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing
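    The generalized Lloyd algorithm mentioned above alternates two steps: partition the training vectors by nearest codeword, then move each codeword to the centroid of its cell. A minimal Python sketch on a toy training set (block size, codebook size and data are illustrative):

        import numpy as np

        rng = np.random.default_rng(3)

        def generalized_lloyd(vectors, codebook_size, iters=50):
            codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)]
            for _ in range(iters):
                # Partition: assign each training vector to its nearest codeword.
                d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
                labels = d.argmin(axis=1)
                # Update: move each codeword to the centroid of its cell.
                for j in range(codebook_size):
                    cell = vectors[labels == j]
                    if len(cell):
                        codebook[j] = cell.mean(axis=0)
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            return codebook, d.argmin(axis=1)

        train = rng.standard_normal((2000, 4)).cumsum(axis=1)  # toy correlated blocks
        cb, labels = generalized_lloyd(train, codebook_size=16)
        print(f"training distortion: {np.mean((train - cb[labels]) ** 2):.4f}")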

  8. Information and meaning: revisiting Shannon's theory of communication and extending it to address today's technical problems.

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, Travis LaDell

    2009-12-01

    extend information theory to account for semantics. By developing such a theory, we can improve the quality of the next generation of analytical tools. Far from being a mere intellectual curiosity, a new theory can provide the means to take into account information that has to date been ignored by the algorithms and technologies we develop. This paper begins with an examination of Shannon's theory of communication, discussing the contributions and limitations of the theory and how it gets expanded into today's statistical text analysis algorithms. Next, we expand Shannon's model, suggesting a transactional definition of semantics that focuses on the intended and actual change that messages have on the recipient. Finally, we examine implications of the model for algorithm development.

  9. Measurement of intra-industry trade (ITT) of Iran with ten selective major trading partners using Grubel-Lloyd Index

    OpenAIRE

    Muhammad Emadi

    2016-01-01

    This paper measures the intra-industry trade of Iran with ten selected major trading partners, including the United Arab Emirates, Germany, China, the Republic of Korea, Italy, India, Japan, Turkey, Spain, and Singapore, using the Grubel-Lloyd index. Due to the development of cross-border economic relationships, these countries try to find and present an appropriate model for the production, import, and export of goods and the identification of business opportunities and comparative advantages. Th...
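    The Grubel-Lloyd index for a single industry is GL = 1 − |X − M| / (X + M), where X and M are exports and imports; values near 1 indicate two-way (intra-industry) trade. A minimal Python sketch with hypothetical flows:

        def grubel_lloyd(exports, imports):
            # GL = 1 - |X - M| / (X + M), in [0, 1].
            if exports + imports == 0:
                return 0.0
            return 1.0 - abs(exports - imports) / (exports + imports)

        # Hypothetical bilateral flows (million USD) for two industries.
        print(grubel_lloyd(120.0, 80.0))   # 0.80 -> largely intra-industry trade
        print(grubel_lloyd(150.0, 5.0))    # ~0.06 -> largely one-way trade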

  10. The Astrophysics Source Code Library: Supporting software publication and citation

    Science.gov (United States)

    Allen, Alice; Teuben, Peter

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net), established in 1999, is a free online registry for source codes used in research that has appeared in, or been submitted to, peer-reviewed publications. The ASCL is indexed by the SAO/NASA Astrophysics Data System (ADS) and Web of Science and is citable by using the unique ascl ID assigned to each code. In addition to registering codes, the ASCL can house archive files for download and assign them DOIs. The ASCL advocates for software citation on par with article citation, participates in multidisciplinary events such as Force11, OpenCon, and the annual Workshop on Sustainable Software for Science, works with journal publishers, and organizes Special Sessions and Birds of a Feather meetings at national and international conferences such as Astronomical Data Analysis Software and Systems (ADASS), European Week of Astronomy and Space Science, and AAS meetings. In this presentation, I will discuss some of the challenges of gathering credit for publishing software and ideas and efforts from other disciplines that may be useful to astronomy.

  11. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    Full Text Available This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.
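    The coset idea behind such schemes fits in a few lines: the encoder of band X sends only a coset index, and the decoder resolves the ambiguity using the correlated neighbouring band Y as side information. A toy scalar Python sketch (the modulus and pixel values are illustrative; the paper's multilevel coset codes are more elaborate):

        M = 8                       # coset modulus: 3 bits sent instead of 8

        def encode(x):
            return x % M            # coset index only

        def decode(idx, y):
            # Choose the coset member closest to the side information y.
            base = y - (y - idx) % M
            return min(base, base + M, key=lambda v: abs(v - y))

        x, y = 141, 143             # co-located pixels in two correlated bands
        x_hat = decode(encode(x), y)
        print(x_hat)                # 141 -- lossless whenever |x - y| < M/2

    The encoder never sees Y, which is exactly what moves complexity from the onboard encoder to the ground decoder.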

  12. Remodularizing Java Programs for Improved Locality of Feature Implementations in Source Code

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

    Explicit traceability between features and source code is known to help programmers to understand and modify programs during maintenance tasks. However, the complex relations between features and their implementations are not evident from the source code of object-oriented Java programs.... Consequently, the implementations of individual features are difficult to locate, comprehend, and modify in isolation. In this paper, we present a novel remodularization approach that improves the representation of features in the source code of Java programs. Both forward- and reverse restructurings...... are supported through on-demand bidirectional restructuring between feature-oriented and object-oriented decompositions. The approach includes a feature location phase based on tracing program execution, a feature representation phase that reallocates classes into a new package structure based on single...

  13. Controlling the Shannon Entropy of Quantum Systems

    Science.gov (United States)

    Xing, Yifan; Wu, Jun

    2013-01-01

    This paper proposes a new quantum control method which controls the Shannon entropy of quantum systems. For both discrete and continuous entropies, controller design methods are proposed based on probability density function control, which can drive the quantum state to any target state. To drive the entropy to any target at any prespecified time, another discretization method is proposed for the discrete entropy case, and the conditions under which the entropy can be increased or decreased are discussed. Simulations are done on both two- and three-dimensional quantum systems, where division and prediction are used to achieve more accurate tracking. PMID:23818819

  14. Controlling the Shannon Entropy of Quantum Systems

    Directory of Open Access Journals (Sweden)

    Yifan Xing

    2013-01-01

    Full Text Available This paper proposes a new quantum control method which controls the Shannon entropy of quantum systems. For both discrete and continuous entropies, controller design methods are proposed based on probability density function control, which can drive the quantum state to any target state. To drive the entropy to any target at any prespecified time, another discretization method is proposed for the discrete entropy case, and the conditions under which the entropy can be increased or decreased are discussed. Simulations are done on both two- and three-dimensional quantum systems, where division and prediction are used to achieve more accurate tracking.

  15. Applying Shannon's information theory to bacterial and phage genomes and metagenomes

    Science.gov (United States)

    Akhter, Sajia; Bailey, Barbara A.; Salamon, Peter; Aziz, Ramy K.; Edwards, Robert A.

    2013-01-01

    All sequence data contain inherent information that can be measured by Shannon's uncertainty theory. Such measurement is valuable in evaluating large data sets, such as metagenomic libraries, to prioritize their analysis and annotation, thus saving computational resources. Here, Shannon's index of complete phage and bacterial genomes was examined. The information content of a genome was found to be highly dependent on the genome length, GC content, and sequence word size. In metagenomic sequences, the amount of information correlated with the number of matches found by comparison to sequence databases. A sequence with more information (higher uncertainty) has a higher probability of being significantly similar to other sequences in the database. Measuring uncertainty may be used for rapid screening for sequences with matches in available databases, prioritizing computational resources, and indicating which sequences with no known similarities are likely to be important for more detailed analysis.
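    The word-size dependence mentioned above comes from computing Shannon uncertainty over k-mer frequencies. A minimal Python sketch (word size and toy sequences are illustrative):

        import math
        from collections import Counter

        def shannon_uncertainty(seq, word_size=3):
            # Shannon uncertainty (bits) of the k-mer frequency distribution.
            kmers = Counter(seq[i:i + word_size]
                            for i in range(len(seq) - word_size + 1))
            n = sum(kmers.values())
            return -sum((c / n) * math.log2(c / n) for c in kmers.values())

        print(shannon_uncertainty("ACGTACGTACGTACGT"))  # low: repetitive
        print(shannon_uncertainty("ACGGTTACCGATAGCA"))  # higher: diverse words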

  16. Distributed coding of multiview sparse sources with joint recovery

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren

    2016-01-01

    In support of applications involving multiview sources in distributed object recognition using lightweight cameras, we propose a new method for the distributed coding of sparse sources as visual descriptor histograms extracted from multiview images. The problem is challenging due to the computati...... transform (SIFT) descriptors extracted from multiview images shows that our method leads to bit-rate saving of up to 43% compared to the state-of-the-art distributed compressed sensing method with independent encoding of the sources....

  17. Fundamental gravitational limitations to quantum computing

    International Nuclear Information System (INIS)

    Gambini, R.; Porto, A.; Pullin, J.

    2006-01-01

    Lloyd has considered the ultimate limitations the fundamental laws of physics place on quantum computers. He concludes in particular that for an 'ultimate laptop' (a computer of one liter of volume and one kilogram of mass) the maximum number of operations per second is bounded by 10^51. The limit is derived considering ordinary quantum mechanics. Here we consider additional limits that are placed by quantum gravity ideas, namely the use of a relational notion of time and fundamental gravitational limits that exist on time measurements. We then particularize for the case of an ultimate laptop and show that the maximum number of operations is further constrained to 10^47 per second. (authors)

  18. Revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources

    International Nuclear Information System (INIS)

    Wheatley, J. S.

    2004-01-01

    The revised Code of Conduct on the Safety and Security of Radioactive Sources is aimed primarily at Governments, with the objective of achieving and maintaining a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations; and through the fostering of international co-operation. It focuses on sealed radioactive sources and provides guidance on legislation, regulations and the regulatory body, and import/export controls. Nuclear materials (except for sources containing 239Pu), as defined in the Convention on the Physical Protection of Nuclear Materials, are not covered by the revised Code, nor are radioactive sources within military or defence programmes. An earlier version of the Code was published by IAEA in 2001. At that time, agreement was not reached on a number of issues, notably those relating to the creation of comprehensive national registries for radioactive sources, obligations of States exporting radioactive sources, and the possibility of unilateral declarations of support. The need to further consider these and other issues was highlighted by the events of 11th September 2001. Since then, the IAEA's Secretariat has been working closely with Member States and relevant International Organizations to achieve consensus. The text of the revised Code was finalized at a meeting of technical and legal experts in August 2003, and it was submitted to IAEA's Board of Governors for approval in September 2003, with a recommendation that the IAEA General Conference adopt it and encourage its wide implementation. The IAEA General Conference, in September 2003, endorsed the revised Code and urged States to work towards following the guidance contained within it. This paper summarizes the history behind the revised Code, its content and the outcome of the discussions within the IAEA Board of Governors and General Conference. (Author) 8 refs

  19. Shannon Meets Fick on the Microfluidic Channel: Diffusion Limit to Sum Broadcast Capacity for Molecular Communication.

    Science.gov (United States)

    Bicen, A Ozan; Lehtomaki, Janne J; Akyildiz, Ian F

    2018-03-01

    Molecular communication (MC) over a microfluidic channel with flow is investigated based on Shannon's channel capacity theorem and Fick's laws of diffusion. Specifically, the sum capacity for MC between a single transmitter and multiple receivers (broadcast MC) is studied. The transmitter communicates with each receiver over the microfluidic channel using a different type of signaling molecule. The transmitted molecules propagate through the microfluidic channel until reaching the corresponding receiver. Although the use of different types of molecules provides orthogonal signaling, the sum broadcast capacity may not scale with the number of receivers due to the physics of propagation (the interplay between convection and diffusion, which depends on distance). In this paper, the performance of broadcast MC on a microfluidic chip is characterized by studying the physical geometry of the microfluidic channel and leveraging information theory. The convergence of the sum capacity for the microfluidic broadcast channel is analytically investigated based on the physical system parameters with respect to an increasing number of molecular receivers. The analysis presented here can be useful for predicting the achievable information rate in microfluidic interconnects for biochemical computation and microfluidic multi-sample assays.

  20. ARC: An open-source library for calculating properties of alkali Rydberg atoms

    Science.gov (United States)

    Šibalić, N.; Pritchard, J. D.; Adams, C. S.; Weatherill, K. J.

    2017-11-01

    We present an object-oriented Python library for the computation of properties of highly-excited Rydberg states of alkali atoms. These include single-body effects such as dipole matrix elements, excited-state lifetimes (radiative and black-body limited) and Stark maps of atoms in external electric fields, as well as two-atom interaction potentials accounting for dipole and quadrupole coupling effects valid at both long and short range for arbitrary placement of the atomic dipoles. The package is cross-referenced to precise measurements of atomic energy levels and features extensive documentation to facilitate rapid upgrade or expansion by users. This library has direct application in the field of quantum information and quantum optics which exploit the strong Rydberg dipolar interactions for two-qubit gates, robust atom-light interfaces and simulating quantum many-body physics, as well as the field of metrology using Rydberg atoms as precise microwave electrometers.
    Program Files doi: http://dx.doi.org/10.17632/hm5n8w628c.1
    Licensing provisions: BSD-3-Clause
    Programming language: Python 2.7 or 3.5, with C extension
    External routines: NumPy [1], SciPy [1], Matplotlib [2]
    Nature of problem: Calculating atomic properties of alkali atoms including lifetimes, energies, Stark shifts and dipole-dipole interaction strengths using matrix elements evaluated from radial wavefunctions.
    Solution method: Numerical integration of the radial Schrödinger equation to obtain atomic wavefunctions, which are then used to evaluate dipole matrix elements. Properties are calculated using second-order perturbation theory or exact diagonalisation of the interaction Hamiltonian, yielding results valid even at large external fields or small interatomic separation.
    Restrictions: External electric field fixed to be parallel to the quantisation axis.
    Supplementary material: Detailed documentation (.html), and Jupyter notebook with examples and benchmarking runs (.html and .ipynb).
    [1] T.E. Oliphant

  1. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    Anon.

    2001-01-01

    The objective of the code of conduct is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost. (N.C.)

  2. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks.

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-07-09

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Because WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent work on dynamic packet-size control in WSNs makes it possible to enhance the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop, where the packet size can vary according to the link quality of that hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under an environment where the packet size changes in each hop, with smaller energy consumption.
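    For context, the classic fixed-packet-size baseline that such schemes improve on is a hash chain: each packet carries the hash of the next, so only the first hash must be authenticated (e.g. with one signature). A generic Python sketch — not any of the paper's three schemes — and it breaks precisely when packets are re-sized en route, which is the problem the paper addresses:

        import hashlib

        def h(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def build_chain(packets):
            # Walk backwards, appending to each packet the hash of the next
            # (already extended) packet; return packets plus the head hash.
            out, nxt = [], b""
            for p in reversed(packets):
                ext = p + nxt
                out.append(ext)
                nxt = h(ext)
            out.reverse()
            return out, nxt           # nxt = hash of the first packet (to be signed)

        def verify_stream(extended, head_hash):
            expected = head_hash
            for ext in extended:
                if h(ext) != expected:
                    return False
                expected = ext[-32:]  # trailer: hash of the next packet
            return True

        image = [b"chunk-%d" % i for i in range(5)]
        chain, head = build_chain(image)
        print(verify_stream(chain, head))               # True
        chain[2] = b"tampered" + chain[2][8:]
        print(verify_stream(chain, head))               # False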

  3. Uniformity of LED light illumination in application to direct imaging lithography

    Science.gov (United States)

    Huang, Ting-Ming; Chang, Shenq-Tsong; Tsay, Ho-Lin; Hsu, Ming-Ying; Chen, Fong-Zhi

    2016-09-01

    Direct imaging has been widely applied in lithography for a long time because of its simplicity and easy maintenance. Although this method limits the achievable lithography resolution, it is still adopted in industry. Uniformity of the UV irradiance over a designed area is an important requirement. While mercury lamps were used as the light source in the early stage, LEDs have drawn a lot of attention for several reasons. Although LED performance keeps improving, arrays of LEDs are required to obtain the desired irradiance because the brightness of a single LED is limited. Several effects that affect the uniformity of the UV irradiance are considered, such as the alignment of the optics, the temperature of each LED, LED-to-LED performance variation from production, and the pointing of each LED module. Numerical analysis is performed by assuming a series of control factors to gain a better understanding of each factor.
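    The superposition at the heart of such an analysis can be sketched directly: for a Lambertian-type LED a distance h above the plane, the irradiance is E = I0 · h^(m+1) / (d² + h²)^((m+3)/2), and the array pattern is the sum over LEDs. A Python sketch with illustrative geometry (not the paper's actual configuration):

        import numpy as np

        m = 1.0                     # Lambertian exponent of the LED
        h = 50.0                    # LED-to-plane distance (mm)
        pitch, n = 30.0, 5          # LED pitch (mm), grid size (n x n)

        xs = (np.arange(n) - (n - 1) / 2) * pitch
        grid = np.linspace(-40, 40, 161)          # evaluation region (mm)
        X, Y = np.meshgrid(grid, grid)

        E = np.zeros_like(X)
        for x0 in xs:
            for y0 in xs:
                d2 = (X - x0) ** 2 + (Y - y0) ** 2
                E += h ** (m + 1) / (d2 + h ** 2) ** ((m + 3) / 2)

        print(f"min/max uniformity over the region: {E.min() / E.max():.3f}")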

  4. Open-source tool for automatic import of coded surveying data to multiple vector layers in GIS environment

    Directory of Open Access Journals (Sweden)

    Eva Stopková

    2016-12-01

    Full Text Available This paper deals with a tool that enables the import of coded data in a single text file into more than one vector layer (including attribute tables), together with automatic drawing of line and polygon objects and with optional conversion to CAD. The Python script v.in.survey is available as an add-on for the open-source software GRASS GIS (GRASS Development Team). The paper describes a case study based on surveying at the archaeological mission at Tell el-Retaba (Egypt). Advantages of the tool (e.g. significant optimization of surveying work) and its limits (demands on keeping the conventions for coding point names) are discussed here as well. Possibilities for future development are suggested (e.g. generalization of point-name coding or more complex attribute table creation).

  5. Experimental benchmark of the NINJA code for application to the Linac4 H- ion source plasma

    Science.gov (United States)

    Briefi, S.; Mattei, S.; Rauner, D.; Lettry, J.; Tran, M. Q.; Fantz, U.

    2017-10-01

    For a dedicated performance optimization of negative hydrogen ion sources applied at particle accelerators, a detailed assessment of the plasma processes is required. Due to the compact design of these sources, diagnostic access is typically limited to optical emission spectroscopy yielding only line-of-sight integrated results. In order to allow for a spatially resolved investigation, the electromagnetic particle-in-cell Monte Carlo collision code NINJA has been developed for the Linac4 ion source at CERN. This code considers the RF field generated by the ICP coil as well as the external static magnetic fields and calculates self-consistently the resulting discharge properties. NINJA is benchmarked at the diagnostically well accessible lab experiment CHARLIE (Concept studies for Helicon Assisted RF Low pressure Ion sourcEs) at varying RF power and gas pressure. A good general agreement is observed between experiment and simulation although the simulated electron density trends for varying pressure and power as well as the absolute electron temperature values deviate slightly from the measured ones. This can be explained by the assumption of strong inductive coupling in NINJA, whereas the CHARLIE discharges show the characteristics of loosely coupled plasmas. For the Linac4 plasma, this assumption is valid. Accordingly, both the absolute values of the accessible plasma parameters and their trends for varying RF power agree well in measurement and simulation. At varying RF power, the H- current extracted from the Linac4 source peaks at 40 kW. For volume operation, this is perfectly reflected by assessing the processes in front of the extraction aperture based on the simulation results where the highest H- density is obtained for the same power level. In surface operation, the production of negative hydrogen ions at the converter surface can only be considered by specialized beam formation codes, which require plasma parameters as input. It has been demonstrated that

  6. Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...

  7. Particle-in-cell simulation of electron trajectories and irradiation uniformity in an annular cathode high current pulsed electron beam source

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Wei; Wang, Langping, E-mail: aplpwang@hit.edu.cn; Zhou, Guangxue; Wang, Xiaofeng

    2017-02-01

    Highlights: • The transmission process of electrons and irradiation uniformity was simulated. • The influence of the irradiation parameters on irradiation uniformity is discussed. • High irradiation uniformity can be obtained in a wide processing window. - Abstract: In order to study electron trajectories in an annular cathode high current pulsed electron beam (HCPEB) source based on carbon fiber bunches, the transmission process of electrons emitted from the annular cathode was simulated using a particle-in-cell model with Monte Carlo collisions (PIC-MCC). The simulation results show that the intense flow of electrons emitted from the annular cathode is expanded during the transmission process, and the uniformity of the electron distribution is improved in the transportation process. The irradiation current decreases with the irradiation distance and the pressure, and increases with the negative voltage. In addition, when the irradiation distance and the cathode voltage are larger than 40 mm and −15 kV, respectively, a uniform irradiation current distribution along the circumference of the anode can be obtained. The simulation results show that good irradiation uniformity of circular components can be achieved by this annular cathode HCPEB source.

  8. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes

    International Nuclear Information System (INIS)

    Etienne, Zachariah B; Paschalidis, Vasileios; Haas, Roland; Mösta, Philipp; Shapiro, Stuart L

    2015-01-01

    In the extreme violence of merger and mass accretion, compact objects like black holes and neutron stars are thought to launch some of the most luminous outbursts of electromagnetic and gravitational wave energy in the Universe. Modeling these systems realistically is a central problem in theoretical astrophysics, but has proven extremely challenging, requiring the development of numerical relativity codes that solve Einstein's equations for the spacetime, coupled to the equations of general relativistic (ideal) magnetohydrodynamics (GRMHD) for the magnetized fluids. Over the past decade, the Illinois numerical relativity (ILNR) group's dynamical spacetime GRMHD code has proven itself as a robust and reliable tool for theoretical modeling of such GRMHD phenomena. However, the code was written ‘by experts and for experts’ of the code, with a steep learning curve that would severely hinder community adoption if it were open-sourced. Here we present IllinoisGRMHD, which is an open-source, highly extensible rewrite of the original closed-source GRMHD code of the ILNR group. Reducing the learning curve was the primary focus of this rewrite, with the goal of facilitating community involvement in the code's use and development, as well as the minimization of human effort in generating new science. IllinoisGRMHD also saves computer time, generating roundoff-precision identical output to the original code on adaptive-mesh grids, but nearly twice as fast at scales of hundreds to thousands of cores. (paper)

  9. Domain-Specific Acceleration and Auto-Parallelization of Legacy Scientific Code in FORTRAN 77 using Source-to-Source Compilation

    OpenAIRE

    Vanderbauwhede, Wim; Davidson, Gavin

    2017-01-01

    Massively parallel accelerators such as GPGPUs, manycores and FPGAs represent a powerful and affordable tool for scientists who look to speed up simulations of complex systems. However, porting code to such devices requires a detailed understanding of heterogeneous programming tools and effective strategies for parallelization. In this paper we present a source to source compilation approach with whole-program analysis to automatically transform single-threaded FORTRAN 77 legacy code into Ope...

  10. Probabilities and Shannon's Entropy in the Everett Many-Worlds Theory

    Directory of Open Access Journals (Sweden)

    Andreas Wichert

    2016-12-01

    Full Text Available Following a controversial suggestion by David Deutsch that decision theory can solve the problem of probabilities in the Everett many-worlds theory, we suggest that the probabilities are induced by Shannon's entropy, which measures the uncertainty of events. We argue that a rational person prefers certainty to uncertainty due to the fundamental biological principle of homeostasis.

  11. Information theory and coding solved problems

    CERN Document Server

    Ivaniš, Predrag

    2017-01-01

    This book offers a comprehensive overview of information theory and error control coding, using a different approach than the existing literature. The chapters are organized according to the Shannon system model, where one block affects the others. A relatively brief theoretical introduction is provided at the beginning of every chapter, including a few additional examples and explanations, but without any proofs, and a short overview of some aspects of abstract algebra is given at the end of the corresponding chapters. Characteristic complex examples with many illustrations and tables are chosen to provide detailed insight into the nature of the problem. Some limiting cases are presented to illustrate the connections with the theoretical bounds. The numerical values are carefully selected to provide in-depth explanations of the described algorithms. Although the examples in the different chapters can be considered separately, they are mutually connected and the conclusions for one considered proble...

  12. Automating RPM Creation from a Source Code Repository

    Science.gov (United States)

    2012-02-01

    Only fragments of the build recipe (an RPM spec file) survive in this record:
      %pre
      %prep
      %setup
      %build
      ./autogen.sh ; ./configure --with-db=/apps/db --with-libpq=/apps/postgres
      make
      %install
      rm -rf $RPM_BUILD_ROOT
      umask 0077
      mkdir -p $RPM_BUILD_ROOT/usr/local/bin

  13. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources

    International Nuclear Information System (INIS)

    Gaufridy de Dortan, F. de

    2006-01-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling or even supra-configuration modelling for mid-Z plasmas. But in some cases, configuration interaction (either relativistic or non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow for a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on HULLAC precise energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data. It is therefore possible to use complex hydrodynamic files even on personal computers in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)

  14. New multispectral MRI data fusion technique for white matter lesion segmentation: method and comparison with thresholding in FLAIR images

    International Nuclear Information System (INIS)

    Del C Valdes Hernandez, Maria; Ferguson, Karen J.; Chappell, Francesca M.; Wardlaw, Joanna M.

    2010-01-01

    Brain tissue segmentation by conventional threshold-based techniques may have limited accuracy and repeatability in older subjects. We present a new multispectral magnetic resonance (MR) image analysis approach for segmenting normal and abnormal brain tissue, including white matter lesions (WMLs). We modulated two 1.5T MR sequences in the red/green colour space and calculated the tissue volumes using minimum variance quantisation. We tested it on 14 subjects, mean age 73.3 ± 10 years, representing the full range of WMLs and atrophy. We compared the results of WML segmentation with those using FLAIR-derived thresholds, examined the effect of sampling location, WML amount and field inhomogeneities, and tested observer reliability and accuracy. FLAIR-derived thresholds were significantly affected by the location used to derive the threshold (P = 0.0004) and by WML volume (P = 0.0003), and had higher intra-rater variability than the multispectral technique (mean difference ± SD: 759 ± 733 versus 69 ± 326 voxels respectively). The multispectral technique misclassified 16 times fewer WMLs. Initial testing suggests that the multispectral technique is highly reproducible and accurate with the potential to be applied to routinely collected clinical MRI data. (orig.)
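
    Minimum variance quantisation, used above to calculate the tissue volumes, is in essence Lloyd-style clustering of intensities. The following is a minimal sketch under that reading, not the authors' implementation:

      # A minimal sketch of minimum-variance (Lloyd / k-means style)
      # quantisation of 1-D intensities; illustrative only.
      import numpy as np

      def min_variance_quantise(values, n_levels, iters=50):
          """Alternately assign samples to the nearest level and move each
          level to the mean of its samples (minimises within-level variance)."""
          levels = np.quantile(values, np.linspace(0, 1, n_levels))
          for _ in range(iters):
              idx = np.argmin(np.abs(values[:, None] - levels[None, :]), axis=1)
              for k in range(n_levels):
                  if np.any(idx == k):
                      levels[k] = values[idx == k].mean()
          return levels, idx

      rng = np.random.default_rng(0)
      pixels = np.concatenate([rng.normal(50, 5, 1000), rng.normal(150, 10, 1000)])
      levels, labels = min_variance_quantise(pixels, 4)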

  15. Development of in-vessel source term analysis code, tracer

    International Nuclear Information System (INIS)

    Miyagi, K.; Miyahara, S.

    1996-01-01

    Analyses of radionuclide transport in fuel failure accidents (generally referred to as source terms) are considered to be important, especially in severe accident evaluation. The TRACER code has been developed to realistically predict the time-dependent behavior of FPs and aerosols within the primary cooling system for a wide range of fuel failure events. This paper presents the model description, results of a validation study, the recent model advancement status of the code, and results of check-out calculations under reactor conditions. (author)

  16. Codes of practice and related issues in biomedical waste management

    Energy Technology Data Exchange (ETDEWEB)

    Moy, D.; Watt, C. [Griffith Univ. (Australia)

    1996-12-31

    This paper outlines the development of a National Code of Practice for biomedical waste management in Australia. The 10 key areas addressed by the code are industry mission statement; uniform terms and definitions; community relations - public perceptions and right to know; generation, source separation, and handling; storage requirements; transportation; treatment and disposal; disposal of solid and liquid residues and air emissions; occupational health and safety; staff awareness and education. A comparison with other industry codes in Australia is made. A list of outstanding issues is also provided; these include the development of standard containers, treatment effectiveness, and reusable sharps containers.

  17. Source Coding in Networks with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2016-01-01

    results to a joint source coding and denoising problem. We consider a network with a centralized topology and a given weighted sum-rate constraint, where the received signals at the center are to be fused to maximize the output SNR while enforcing no linear distortion. We show that one can design...

  18. The discrete-dipole-approximation code ADDA: capabilities and known limitations

    NARCIS (Netherlands)

    Yurkin, M.A.; Hoekstra, A.G.

    2011-01-01

    The open-source code ADDA is described, which implements the discrete dipole approximation (DDA), a method to simulate light scattering by finite 3D objects of arbitrary shape and composition. Besides standard sequential execution, ADDA can run on a multiprocessor distributed-memory system,

  19. Use of source term code package in the ELEBRA MX-850 system

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-12-01

    The implementation of the source term code package in the ELEBRA MX-850 system is presented. The source term is formed when radioactive materials generated in the nuclear fuel leak toward the containment and the environment external to the reactor containment. The version implemented in the ELEBRA system is composed of five codes: MARCH 3, TRAPMELT 3, THCCA, VANESA and NAVA. The original example case was used. The example consists of a small LOCA accident in a PWR-type reactor. A sensitivity study for the TRAPMELT 3 code was carried out, modifying the 'TIME STEP' to estimate the CPU processing time for executing the original example case. (M.C.K.) [pt

  20. Scaling behaviour of Fisher and Shannon entropies for the exponential-cosine screened coulomb potential

    Science.gov (United States)

    Abdelmonem, M. S.; Abdel-Hady, Afaf; Nasser, I.

    2017-07-01

    The scaling laws are given for the entropies in information theory, including the Shannon entropy, its power, the Fisher information and the Fisher-Shannon product, using the exponential-cosine screened Coulomb potential. The scaling laws are specified, in r-space, as a function of |μ − μc,nℓ|, where μ is the screening parameter and μc,nℓ its critical value for the specific quantum numbers n and ℓ. Scaling laws for other physical quantities, such as energy eigenvalues, moments, static polarisability, transition probabilities, etc., are also given. Some of these are reported for the first time. The outcome is compared with the results available in the literature.

  1. Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.

    Science.gov (United States)

    Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2016-01-01

    This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code the evaluation starts by extracting the words that make up its text and continues with building full-text search queries from the combinations of these words. The queries are then run against all the ICD-10 codes until a match ranks the code in question first with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
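
    The evaluation loop described above can be sketched as follows; search() stands in for whichever full-text engine is being tested and is hypothetical:

      # A hedged sketch of the evaluation loop: for each code, try word
      # combinations of increasing size until a query ranks the code first.
      from itertools import combinations

      def min_words_to_match(code_text, code_id, search):
          """search(query) is a hypothetical engine call returning a
          ranked list of (code_id, score) pairs."""
          words = code_text.split()
          for r in range(1, len(words) + 1):
              for combo in combinations(words, r):
                  results = search(" ".join(combo))
                  if results and results[0][0] == code_id:
                      return r, combo  # minimum number of words found
          return None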

  2. Governing Refugee Space: The Quasi-Carceral Regime of Amsterdam’s Lloyd Hotel, a German-Jewish Refugee Camp in the Prelude to World War II

    NARCIS (Netherlands)

    Felder, M.; Minca, C.; Ong, C.E.

    2014-01-01

    Through analysing the correspondence between key refugee camp commanders based at Amsterdam's Lloyd Hotel and different authorities involved in Dutch refugee matters, this paper examines how "the Dutch state" responded to German-Jewish refugees fleeing Nazi Germany in the prelude to World War II.

  3. Analysis list: Max [Chip-atlas Archive]

    Lifescience Database Archive (English)

    Full Text Available Max Blood,Muscle,Pluripotent stem cell + mm9 http://dbarchive.biosciencedbc.jp/kyushu-u/mm9/target/Max.1.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/mm9/target/Max.5.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/mm9/target/Max.10.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/mm9/colo/Max.Blood.tsv,http://dbarchive.biosciencedbc.jp/kyushu-u/mm9/colo/Max.Muscle.tsv,http://dbarchive.biosciencedbc.jp/kyushu-u/mm9/colo/Max.Pluripotent_stem_cell.tsv http://dbarchive.biosciencedbc.jp/k

  4. Conformal Infinity

    Directory of Open Access Journals (Sweden)

    Frauendiener Jörg

    2000-08-01

    Full Text Available The notion of conformal infinity has a long history within the research in Einstein's theory of gravity. Today, "conformal infinity" is related with almost all other branches of research in general relativity, from quantisation procedures to abstract mathematical issues to numerical applications. This review article attempts to show how this concept gradually and inevitably evolved out of physical issues, namely the need to understand gravitational radiation and isolated systems within the theory of gravitation and how it lends itself very naturally to solve radiation problems in numerical relativity. The fundamental concept of null-infinity is introduced. Friedrich's regular conformal field equations are presented and various initial value problems for them are discussed. Finally, it is shown that the conformal field equations provide a very powerful method within numerical relativity to study global problems such as gravitational wave propagation and detection.

  5. Unitary quantum physics with time-space non-commutativity

    International Nuclear Information System (INIS)

    Balachandran, A P; Govindarajan, T R; Martins, A G; Molina, C; Teotonio-Sobrinho, P

    2005-01-01

    In these lectures, quantum physics in noncommutative spacetime is developed. It is based on the work of Doplicher et al., which allows for time-space noncommutativity. In the context of noncommutative quantum mechanics, some important points are explored, such as the formal construction of the theory, symmetries, causality, simultaneity and observables. The dynamics generated by a noncommutative Schroedinger equation is studied. The theory is further extended to certain noncommutative versions of the cylinder, R^3 and R × S^3. In all these models, only discrete time translations are possible. One striking consequence of quantised time translations is that even though a time-independent Hamiltonian is an observable, in scattering processes it is conserved only modulo 2π/θ, where θ is the noncommutativity parameter. Scattering theory is formulated and an approach to quantum field theory is outlined.

  6. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  7. Quantum Optical Heating in Sonoluminescence Experiments

    International Nuclear Information System (INIS)

    Kurcz, Andreas; Capolupo, Antonio; Beige, Almut

    2009-01-01

    Sonoluminescence occurs when tiny bubbles filled with noble gas atoms are driven by a sound wave. Each cycle of the driving field is accompanied by a collapse phase in which the bubble radius decreases rapidly until a short but very strong light flash is emitted. The spectrum of the light corresponds to very high temperatures and hints at the presence of a hot plasma core. While everyone accepts that the effect is real, the main energy focussing mechanism is highly controversial. Here we suggest that the heating of the bubble might be due to a weak but highly inhomogeneous electric field, as occurs during rapid bubble deformations [A. Kurcz et al. (submitted)]. It is shown that such a field couples the quantised motion of the atoms to their electronic states, thereby resulting in very high heating rates.

  8. Quantum gravity

    International Nuclear Information System (INIS)

    Isham, C.

    1989-01-01

    Gravitational effects are seen as arising from a curvature in spacetime. This must be reconciled with gravity's apparently passive role in quantum theory to achieve a satisfactory quantum theory of gravity. The development of grand unified theories has spurred the search, with forces being of equal strength at a unification energy of 10^15 - 10^18 GeV, with the 'Planck length', L_p ≅ 10^-35 m. Fundamental principles of general relativity and quantum mechanics are outlined. Gravitons are shown to have spin-0, as mediators of the gravitational force in the classical sense, or spin-2, which is related to the quantisation of general relativity. Applying the ideas of supersymmetry to gravitation implies partners for the graviton, especially the massless spin-3/2 fermion called a gravitino. The concept of supersymmetric strings is introduced and discussed. (U.K.)

  9. Instantons from geodesics in AdS moduli spaces

    Science.gov (United States)

    Ruggeri, Daniele; Trigiante, Mario; Van Riet, Thomas

    2018-03-01

    We investigate supergravity instantons in Euclidean AdS5 × S5/ℤk. These solutions are expected to be dual to instantons of N = 2 quiver gauge theories. On the supergravity side the (extremal) instanton solutions are neatly described by the (lightlike) geodesics on the AdS moduli space for which we find the explicit expression and compute the on-shell actions in terms of the quantised charges. The lightlike geodesics fall into two categories depending on the degree of nilpotency of the Noether charge matrix carried by the geodesic: for degree 2 the instantons preserve 8 supercharges and for degree 3 they are non-SUSY. We expect that these findings should apply to more general situations in the sense that there is a map between geodesics on moduli-spaces of Euclidean AdS vacua and instantons with holographic counterparts.

  10. Conformal Infinity.

    Science.gov (United States)

    Frauendiener, Jörg

    2004-01-01

    The notion of conformal infinity has a long history within the research in Einstein's theory of gravity. Today, "conformal infinity" is related to almost all other branches of research in general relativity, from quantisation procedures to abstract mathematical issues to numerical applications. This review article attempts to show how this concept gradually and inevitably evolved from physical issues, namely the need to understand gravitational radiation and isolated systems within the theory of gravitation, and how it lends itself very naturally to the solution of radiation problems in numerical relativity. The fundamental concept of null-infinity is introduced. Friedrich's regular conformal field equations are presented and various initial value problems for them are discussed. Finally, it is shown that the conformal field equations provide a very powerful method within numerical relativity to study global problems such as gravitational wave propagation and detection.

  11. Methodologies for the practical determination and use of method detection limits

    International Nuclear Information System (INIS)

    Rucker, T.L.

    1995-01-01

    Method detection limits have often been misunderstood and misused. The basic definitions developed by Lloyd Currie and others have been combined with assumptions that are inappropriate for many types of radiochemical analyses. A practical way of determining detection limits based on Currie's basic definition is presented that removes the reliance on assumptions and that accounts for the total measurement uncertainty. Examples of proper and improper use of detection limits are also presented, including detection limits reported by commercial software for gamma spectroscopy and neutron activation analyses. (author) 6 refs.; 2 figs

  12. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    Science.gov (United States)

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, statistical physics energy minimization methods are directly applicable to signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose a discrete-time implementation of the D-dimensional transceiver and the corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America

  13. Determination of non-uniformity correction factors for cylindrical ionization chambers close to 192Ir brachytherapy sources

    International Nuclear Information System (INIS)

    Toelli, H.; Bielajew, A. F.; Mattsson, O.; Sernbo, G.

    1995-01-01

    When ionization chambers are used in brachytherapy dosimetry, the measurements must be corrected for the non-uniformity of the incident photon fluence. The theory for the determination of non-uniformity correction factors, developed by Kondo and Randolph (Rad. Res. 1960), assumes that the electron fluence within the air cavity is isotropic and does not take into account material differences in the chamber wall. The theory was extended by Bielajew (PMB 1990) using an anisotropic electron angular fluence in the cavity. In contrast to the theory of Kondo and Randolph, the anisotropic theory predicts a wall-material dependence in the non-uniformity correction factors. This work presents the experimental determination of non-uniformity correction factors at distances between 10 and 140 mm from an Ir-192 source. The experimental work makes use of a PTW23331 chamber and Farmer-type chambers (NE2571 and NE2581) with different wall materials. The results of the experiments agree well with the anisotropic theory. Due to the geometrical shape of the NE-type chambers, it is shown that the full length of these chambers, 24.1 mm, is not an appropriate input parameter when theoretical non-uniformity correction factors are evaluated.

  14. Modern housing design: prefabricated and modular design in Frank Lloyd Wright's architecture

    Directory of Open Access Journals (Sweden)

    Ana Tagliari

    2011-12-01

    Full Text Available This paper investigates the residential architecture of Frank Lloyd Wright, especially the designs conceived from an idea of prefabricated, modular, low-cost, and high-scale construction. Wright's organic designs originated from a material-based grid, which at the same time organized and provided freedom to create spaces and forms. This study reviews Wright's work, from his first Midwest designs that relied on brick, through an intermediary phase in California when he made intense use of concrete blocks, to his last phase, the Usonian houses, which featured wood paneling. During his early career, the concept and methodology of Wright's ideas greatly contributed to a better understanding of his architecture among his apprentices and followers. The economy and rationalization found in the projects reviewed are of great importance, as the analysis of historical proposals helps us understand the topic in question.

  15. Improvement of uniformity of the negative ion beams by tent-shaped magnetic field in the JT-60 negative ion source

    International Nuclear Information System (INIS)

    Yoshida, Masafumi; Hanada, Masaya; Kojima, Atsushi; Kashiwagi, Mieko; Akino, Noboru; Endo, Yasuei; Komata, Masao; Mogaki, Kazuhiko; Nemoto, Shuji; Ohzeki, Masahiro; Seki, Norikazu; Sasaki, Shunichi; Shimizu, Tatsuo; Terunuma, Yuto; Grisham, Larry R.

    2014-01-01

    Non-uniformity of the negative ion beams in the JT-60 negative ion source, which has the world's largest ion extraction area, was improved by changing the magnetic filter in the source from the plasma grid (PG) filter to a tent-shaped filter. The magnetic design, via electron trajectory calculations, showed that the tent-shaped filter was expected to suppress the localization of the primary electrons emitted from the filaments and to create a uniform plasma of positive ions and atoms, the parent particles for the negative ions. By changing to the tent-shaped filter, the non-uniformity, defined as the deviation from the averaged beam intensity, was reduced from 14% with the PG filter to ∼10% without a reduction in negative ion production.

  16. BAR-MOM code and its application

    International Nuclear Information System (INIS)

    Wang Shunuan

    2002-01-01

    The BAR-MOM code, for calculating the height of the fission barrier Bf and the energy of the ground state, is presented. The limit of compound-nucleus stability with respect to fission, i.e. the angular momentum (spin value) Lmax at which the fission barrier disappears, the three principal-axis moments of inertia at the saddle point for a given nucleus with atomic number Z, atomic mass number A and angular momentum L in units of ℏ for 19 < Z < 102, and the model used are introduced briefly. The generalization of the BAR-MOM code to include results for Z ≥ 102, using a more recent parameterization of the Thomas-Fermi fission barrier, is also introduced briefly. We have studied the models used in the BAR-MOM code and run it successfully and correctly for a given nucleus with atomic mass number A, atomic number Z and angular momentum L on a PC with Fortran 90. Test calculations to check the implementation of the program show that the results of the present work are in good agreement with the original ones.

  17. Class of near-perfect coded apertures

    International Nuclear Information System (INIS)

    Cannon, T.M.; Fenimore, E.E.

    1978-01-01

    The encoding/decoding method produces artifacts which, even in the absence of quantum noise, restrict the quality of the reconstructed image. This is true of most correlation-type methods. If the decoding procedure is of the deconvolution variety, small terms in the transfer function of the aperture can lead to excessive noise in the reconstructed image. The authors propose to circumvent both of these problems by using a uniformly redundant array (URA) as the coded aperture in conjunction with a special correlation decoding method. The correlation of the decoding array with the aperture results in a delta function with deterministically zero sidelobes. It is shown that the reconstructed image in the URA system contains virtually uniform noise regardless of the structure in the original source. Therefore, the improvement over a single pinhole camera will be relatively larger for the brighter points in the source than for the low-intensity points. 12 refs
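
    The flat-sidelobe correlation property behind the URA can be demonstrated numerically in one dimension with a quadratic-residue (Legendre) aperture; the sketch below illustrates the principle only, not the authors' two-dimensional construction:

      # 1-D analogue of URA decoding: a quadratic-residue aperture
      # correlated with its balanced decoding array gives a peak plus
      # perfectly flat sidelobes (a constant offset removes them).
      import numpy as np

      p = 43  # any prime p = 4k + 3
      qr = {(i * i) % p for i in range(1, p)}
      A = np.array([1 if i in qr else 0 for i in range(p)])  # open/closed cells
      G = 2 * A - 1                                          # balanced decoder

      # Periodic cross-correlation of the aperture with the decoding array.
      corr = np.array([np.sum(A * np.roll(G, -j)) for j in range(p)])
      # corr[0] is the peak; every other lag equals the same constant (-1).
      assert len(set(corr[1:])) == 1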

  18. Memory-efficient decoding of LDPC codes

    Science.gov (United States)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, we have shown in simulation that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
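
    A hedged sketch of the design idea: choose the step of a symmetric 3-bit message quantiser to maximise a Monte Carlo estimate of the mutual information between the coded bit and the quantised LLR. This is illustrative only; the paper's optimization is not reproduced here.

      # Pick the 8-level (3-bit) quantiser step that preserves the most
      # information about the transmitted BPSK bit over an AWGN channel.
      import numpy as np

      def mutual_info(bits, msgs):
          """I(B; Q) in bits, estimated from samples of bit/message pairs."""
          info = 0.0
          for q in np.unique(msgs):
              for b in (0, 1):
                  pxy = np.mean((msgs == q) & (bits == b))
                  px, py = np.mean(msgs == q), np.mean(bits == b)
                  if pxy > 0:
                      info += pxy * np.log2(pxy / (px * py))
          return info

      rng = np.random.default_rng(1)
      bits = rng.integers(0, 2, 200_000)
      # Channel LLRs: x = 1 - 2b, y = x + n with sigma = 0.8, LLR = 2y/sigma^2.
      llr = 2 * (1 - 2 * bits + rng.normal(0, 0.8, bits.size)) / 0.8**2

      best = max((mutual_info(bits, np.clip(np.round(llr / d), -4, 3)), d)
                 for d in np.linspace(0.25, 4.0, 16))
      # best[1] is the step size of the 8-level quantiser that keeps the
      # most information about the transmitted bit.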

  19. Max Algebraic Complementary Basic Matrices

    Czech Academy of Sciences Publication Activity Database

    Fiedler, Miroslav; Hall, F.J.

    2014-01-01

    Roč. 457, 15 September (2014), s. 287-292 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords : CB-matrix * Max algebra * Max permanent * Max eigenvalues Subject RIV: BA - General Mathematics Impact factor: 0.939, year: 2014

  20. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Full Text Available Even though there are various source code plagiarism detection approaches, only a few works focus on low-level representation for deducing similarity. Most of them focus only on the lexical token sequence extracted from source code. From our point of view, low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself. It only considers semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach which relies on low-level representation. For a case study, we focus our work on .NET programming languages with the Common Intermediate Language as the low-level representation. In addition, we also incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e. Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.

  1. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks †

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-01-01

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption. PMID:27409616

  2. Performance evaluation based on data from code reviews

    OpenAIRE

    Andrej, Sekáč

    2016-01-01

    Context. Modern code review tools such as Gerrit have made available great amounts of code review data from different open-source projects as well as other commercial projects. Code reviews are used to keep the quality of produced source code under control, but the stored data could also be used to evaluate the software development process. Objectives. This thesis uses machine learning methods to approximate a review expert's performance evaluation function. Due to limitations in ...

  3. SUSTAINABLE ORIGINS IN ARCHITECTURE OF FRANK LLOYD WRIGHT

    Directory of Open Access Journals (Sweden)

    Martina Zbašnik-Senegačnik

    2012-10-01

    Full Text Available Frank Lloyd Wright is the greatest American architect and one of the greatest architects in the world. His career began at the end of the 19th century, during the great architectural boom in Chicago, under the mentorship of Louis Henry Sullivan, from whom he adopted and then perfected the concepts of organic architecture and the Prairie house. During the Depression years, Wright developed a cheaper and simpler variant of the Prairie house: the Usonian house. Wright's architecture is characterised by an entirely new approach to building design, particularly the design of houses. He reduced the number of rooms by combining their functions in a large living space with a central fireplace. He used large glazed areas to connect the external environment of the house with the interior. The natural environment of the prairie was the inspiration for the horizontal lines that characterised his architecture. His buildings are low in height, close to human scale and with a great feeling for the natural setting in which they are built. He selected materials from the surrounding area and the principal decoration of his architecture was the natural structure of the material. The paper presents the ideas of organic architecture, the Prairie house and the Usonian house, along with the best examples of Wright's architecture and the criteria he employed in the selection of materials and construction technologies. The environmental aspect of his philosophy of the use of materials is considered in the discussion section. Wright may be considered a pioneer of sustainable architecture.

  4. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    Science.gov (United States)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
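
    A compact sketch of Particle Swarm Optimization for such a mixed continuous/discrete allocation: powers stay continuous while rate indices are snapped to the nearest allowed value. The objective function here is a hypothetical stand-in for the received-quality criterion, not the paper's distortion model.

      # Minimal PSO for a mixed-integer allocation problem (a sketch).
      import numpy as np

      def snap(p, discrete):
          """Round the discrete dimensions (rate indices) to allowed values."""
          q = p.copy()
          q[discrete] = np.round(q[discrete])
          return q

      def pso(distortion, n_dim, discrete, bounds, n_part=30, iters=200):
          rng = np.random.default_rng(0)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n_part, n_dim))   # particle positions
          v = np.zeros_like(x)                       # particle velocities
          pbest = x.copy()
          pbest_val = np.array([distortion(snap(p, discrete)) for p in x])
          g = pbest[pbest_val.argmin()].copy()       # global best
          for _ in range(iters):
              r1, r2 = rng.random((2, n_part, n_dim))
              v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              vals = np.array([distortion(snap(p, discrete)) for p in x])
              better = vals < pbest_val
              pbest[better], pbest_val[better] = x[better], vals[better]
              g = pbest[pbest_val.argmin()].copy()
          return snap(g, discrete), pbest_val.min()

      # Toy usage: 3 nodes x (power, rate index); dimensions 3..5 are discrete.
      objective = lambda p: float(np.sum((p - 1.3) ** 2))  # hypothetical stand-in
      best, val = pso(objective, 6, discrete=[3, 4, 5], bounds=(0.0, 4.0))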

  5. maxAlike

    DEFF Research Database (Denmark)

    Menzel, Karl Peter; Stadler, Peter F.; Gorodkin, Jan

    2011-01-01

    MOTIVATION: The task of reconstructing a genomic sequence from a particular species is gaining more and more importance in the light of the rapid development of high-throughput sequencing technologies and their limitations. Applications include not only compensation for missing data in unsequenced genomic regions and the design of oligonucleotide primers for target genes in species with lacking sequence information, but also the preparation of customized queries for homology searches. RESULTS: We introduce the maxAlike algorithm, which reconstructs a genomic sequence for a specific taxon based on sequence homologs in other species. The input is a multiple sequence alignment and a phylogenetic tree that also contains the target species. For this target species, the algorithm computes nucleotide probabilities at each sequence position. Consensus sequences are then reconstructed based on a certain...

  6. Microdosimetry computation code of internal sources - MICRODOSE 1

    International Nuclear Information System (INIS)

    Li Weibo; Zheng Wenzhong; Ye Changqing

    1995-01-01

    This paper describes a microdosimetry computation code, MICRODOSE 1, based on the following methods: (1) calculating f1(z) for charged particles in unit-density tissues; (2) calculating f(z) for a point source; (3) applying Fourier transform theory to the calculation of the compound Poisson process; and (4) using the fast Fourier transform technique to determine f(z). Some computed examples based on the code MICRODOSE 1 are given, including alpha particles emitted from 239Pu in the alveolar lung tissues and from the radon progeny RaA and RaC in the human respiratory tract. (author). 13 refs., 6 figs
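
    Methods (3) and (4) above are usually formulated as follows in microdosimetry: for a compound Poisson process, the Fourier transform of the multi-event spectrum f(z) equals exp(λ(F1 − 1)), with F1 the transform of the single-event spectrum f1(z). A minimal sketch under this standard formulation (not the MICRODOSE 1 source):

      # Compound Poisson multi-event spectrum from a single-event spectrum
      # via the FFT; illustrative grid and single-event shape.
      import numpy as np

      def multi_event_spectrum(f1, lam):
          """f1: single-event probabilities on a uniform z-grid (sums to 1).
          Returns the compound Poisson multi-event distribution."""
          n = 1 << int(np.ceil(np.log2(len(f1) * 8)))  # zero-pad vs wrap-around
          F1 = np.fft.rfft(f1, n)
          F = np.exp(lam * (F1 - 1.0))  # includes the no-event peak at z = 0
          return np.fft.irfft(F, n)

      z = np.arange(256)
      f1 = np.exp(-((z - 40.0) ** 2) / 200.0); f1 /= f1.sum()
      f = multi_event_spectrum(f1, lam=2.5)
      # Sanity checks: f sums to ~1 and its mean is ~lam * mean(f1).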

  7. Source Code Vulnerabilities in IoT Software Systems

    Directory of Open Access Journals (Sweden)

    Saleh Mohamed Alnaeli

    2017-08-01

    Full Text Available An empirical study that examines the usage of known vulnerable statements in software systems developed in C/C++ and used for IoT is presented. The study is conducted on 18 open-source systems comprised of millions of lines of code and containing thousands of files. Static analysis methods are applied to each system to determine the number of unsafe commands (e.g., strcpy, strcmp, and strlen) that are well known among research communities to cause potential risks and security concerns, thereby decreasing a system's robustness and quality. These unsafe statements are banned by many companies (e.g., Microsoft). The use of these commands should be avoided from the start when writing code and should be removed from legacy code over time, as recommended by new C/C++ language standards. Each system is analyzed and the distribution of the known unsafe commands is presented. Historical trends in the usage of the unsafe commands in 7 of the systems are presented to show how the studied systems evolved over time with respect to the vulnerable code. The results show that the most prevalent unsafe command used for most systems is memcpy, followed by strlen. These results can be used to help train software developers on secure coding practices so that they can write higher quality software systems.
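
    The static scan described can be sketched in a few lines; this is an illustration, not the authors' tool, and the list of unsafe calls is abbreviated:

      # Count occurrences of known-unsafe C/C++ calls across a source tree.
      import re
      from collections import Counter
      from pathlib import Path

      UNSAFE = ("strcpy", "strcat", "sprintf", "gets", "strcmp", "strlen", "memcpy")
      pattern = re.compile(r"\b(" + "|".join(UNSAFE) + r")\s*\(")

      def scan(root):
          counts = Counter()
          for path in Path(root).rglob("*"):
              if path.suffix in {".c", ".cc", ".cpp", ".h", ".hpp"}:
                  counts.update(pattern.findall(path.read_text(errors="ignore")))
          return counts

      # e.g. scan("./some-iot-project") -> Counter({'memcpy': 412, ...})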

  8. Typical performance of regular low-density parity-check codes over general symmetric channels

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, Toshiyuki [Department of Electronics and Information Engineering, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397 (Japan); Saad, David [Neural Computing Research Group, Aston University, Aston Triangle, Birmingham B4 7ET (United Kingdom)

    2003-10-31

    Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. Relationship between the free energy in statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel models.

  9. Typical performance of regular low-density parity-check codes over general symmetric channels

    International Nuclear Information System (INIS)

    Tanaka, Toshiyuki; Saad, David

    2003-01-01

    Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. Relationship between the free energy in statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel models

  10. Automated searching for quantum subsystem codes

    International Nuclear Information System (INIS)

    Crosswhite, Gregory M.; Bacon, Dave

    2011-01-01

    Quantum error correction allows for faulty quantum systems to behave in an effectively error-free manner. One important class of techniques for quantum error correction is the class of quantum subsystem codes, which are relevant both to active quantum error-correcting schemes as well as to the design of self-correcting quantum memories. Previous approaches for investigating these codes have focused on applying theoretical analysis to look for interesting codes and to investigate their properties. In this paper we present an alternative approach that uses computational analysis to accomplish the same goals. Specifically, we present an algorithm that computes the optimal quantum subsystem code that can be implemented given an arbitrary set of measurement operators that are tensor products of Pauli operators. We then demonstrate the utility of this algorithm by performing a systematic investigation of the quantum subsystem codes that exist in the setting where the interactions are limited to two-body interactions between neighbors on lattices derived from the convex uniform tilings of the plane.

  11. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  12. A development of the Gibbs potential of a quantised system made up of a large number of particles. III. The contribution of binary collisions

    International Nuclear Information System (INIS)

    BLOCH, Claude; DE DOMINICIS, Cyrano

    1959-01-01

    Starting from an expansion derived in a previous work, we study the contribution to the Gibbs potential of the two-body dynamical correlations, taking into account the statistical correlations. Such a contribution is of interest for low-density systems at low temperature. In the zero density limit, it reduces to the Beth-Uhlenbeck expression for the second virial coefficient. For a system of fermions in the zero temperature limit, it yields the contribution of the Brueckner reaction matrix to the ground state energy, plus, under certain conditions, additional terms of the form exp(β|Δ|), where the Δ are the binding energies of 'bound states' of the type first discussed by L. Cooper. Finally, we study the wave function of two particles immersed in a medium (defined by its temperature and chemical potential). It satisfies an equation generalizing the Bethe-Goldstone equation for an arbitrary temperature. Reprint of a paper published in Nuclear Physics, 10, p. 509-526, 1959

  13. Parametric scaling from species relative abundances to absolute abundances in the computation of biological diversity: a first proposal using Shannon's entropy.

    Science.gov (United States)

    Ricotta, Carlo

    2003-01-01

    Traditional diversity measures such as the Shannon entropy are generally computed from the species' relative abundance vector of a given community to the exclusion of species' absolute abundances. In this paper, I first mention some examples where the total information content associated with a given community may be more adequate than Shannon's average information content for a better understanding of ecosystem functioning. Next, I propose a parametric measure of statistical information that contains both Shannon's entropy and total information content as special cases of this more general function.
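
    For concreteness, the two limiting cases being unified can be sketched as below, taking total information content as total abundance times the Shannon entropy (the parametric family interpolating between them is defined in the paper):

      # Shannon's average information content vs. a total information
      # content that scales with community size; a small illustration.
      import numpy as np

      def shannon_entropy(abundances):
          p = np.asarray(abundances, dtype=float)
          p = p[p > 0] / p.sum()
          return -np.sum(p * np.log(p))  # nats per individual

      def total_information(abundances):
          return np.sum(abundances) * shannon_entropy(abundances)

      community = [120, 30, 5, 5]        # absolute abundances
      H = shannon_entropy(community)     # ignores total abundance
      T = total_information(community)   # scales with community size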

  14. Logarithmic terms in entanglement entropies of 2D quantum critical points and Shannon entropies of spin chains.

    Science.gov (United States)

    Zaletel, Michael P; Bardarson, Jens H; Moore, Joel E

    2011-07-08

    Universal logarithmic terms in the entanglement entropy appear at quantum critical points (QCPs) in one dimension (1D) and have been predicted in 2D at QCPs described by 2D conformal field theories. The entanglement entropy in a strip geometry at such QCPs can be obtained via the "Shannon entropy" of a 1D spin chain with open boundary conditions. The Shannon entropy of the XXZ chain is found to have a logarithmic term that implies, for the QCP of the square-lattice quantum dimer model, a logarithm with universal coefficient ±0.25. However, the logarithm in the Shannon entropy of the transverse-field Ising model, which corresponds to entanglement in the 2D Ising conformal QCP, is found to have a singular dependence on the replica or Rényi index resulting from flows to different boundary conditions at the entanglement cut.

  15. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

    International Nuclear Information System (INIS)

    Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.; Xu Jin; Connors, Alanna; Freeman, Peter E.; Zezas, Andreas

    2010-01-01

    A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper
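
    The procedure described (fix a detection threshold from the Type I error on the background, then find the intensity whose detection probability reaches the required power) can be sketched for Poisson counts as follows; an illustration, not the authors' exact recipe:

      # Upper limit as a property of the detection setup: the smallest
      # source intensity detected with probability 1 - beta at a threshold
      # set by the Type I error alpha on the background.
      from scipy.stats import poisson

      def upper_limit(background, alpha=0.05, beta=0.5, step=0.01):
          # Smallest count threshold with false-positive rate <= alpha.
          threshold = poisson.ppf(1 - alpha, background) + 1
          s = 0.0
          while True:  # detection power = P(counts >= threshold | b + s)
              power = 1 - poisson.cdf(threshold - 1, background + s)
              if power >= 1 - beta:
                  return s, threshold
              s += step

      s_min, thr = upper_limit(background=3.0)
      # s_min calibrates the detection process; it is not a confidence
      # bound on any particular source's intensity.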

  16. 3ds Max 2012 Bible

    CERN Document Server

    Murdock, Kelly L

    2011-01-01

    Updated version of the bestselling 3ds Max book on the market 3ds Max 2012 Bible is one of the most popular 3ds Max how-tos on the market. If you're a beginner just itching to create something right away, the Quick Start project in Part 1 is for you. If you're an experienced user checking out 3ds Max 2012's latest and greatest features, you'll love the fact that the 3ds Max 2012 Bible continues to be the most comprehensive reference on this highly complex application.Find out what's new, what's tried and true, and how creative you can get using the tips, tricks, and techniques in this must-hav

  17. Low-emittance uniform density Cs+ sources for heavy ion fusion accelerators studies

    International Nuclear Information System (INIS)

    Eylon, S.; Henestroza, E.; Garvey, T.; Johnson, R.; Chupp, W.

    1991-04-01

    Low-emittance (high-brightness) Cs+ thermionic sources were developed for the heavy ion induction linac experiment MBE-4 at LBL. The MBE-4 linac accelerates four 10 mA beams from 200 keV to 900 keV while amplifying the current up to a factor of nine. Recent studies of the transverse beam dynamics suggested that characteristics of the injector geometry were contributing to the normalized transverse emittance growth. Phase-space and current density distribution measurements of the beam extracted from the injector revealed overfocusing of the outermost rays, causing a hollow density profile. We shall report on the performance of a 5 mA scraped-beam source (which eliminates the outermost beam rays in the diode) and on the design of an improved 10 mA source. The new source is based on EGUN calculations which indicated that a beam with good emittance and uniform current density could be obtained by modifying the cathode Pierce electrodes and using a spherical emitting surface. The measurements of the beam current density profile on a test stand were found to be in agreement with the numerical simulations. 3 refs., 6 figs

  18. Ideal flood field images for SPECT uniformity correction

    International Nuclear Information System (INIS)

    Oppenheim, B.E.; Appledorn, C.R.

    1984-01-01

    Since as little as 2.5% camera non-uniformity can cause disturbing artifacts in SPECT imaging, the ideal flood field images for uniformity correction would be made with the collimator in place using a perfectly uniform sheet source. While such a source is not realizable the equivalent images can be generated by mapping the activity distribution of a Co-57 sheet source and correcting subsequent images of the source with this mapping. Mapping is accomplished by analyzing equal-time images of the source made in multiple precisely determined positions. The ratio of counts detected in the same region of two images is a measure of the ratio of the activities of the two portions of the source imaged in that region. The activity distribution in the sheet source is determined from a set of such ratios. The more source positions imaged in a given time, the more accurate the source mapping, according to results of a computer simulation. A 1.9 mCi Co-57 sheet source was shifted by 12 mm increments along the horizontal and vertical axis of the camera face to 9 positions on each axis. The source was imaged for 20 min in each position and 214 million total counts were accumulated. The activity distribution of the source, relative to the center pixel, was determined for a 31 x 31 array. The integral uniformity was found to be 2.8%. The RMS error for such a mapping was determined by computer simulation to be 0.46%. The activity distribution was used to correct a high count flood field image for non-uniformities attributable to the Co-57 source. Such a corrected image represents camera plus collimator response to an almost perfectly uniform sheet source
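    The mapping idea, the camera response cancelling in the ratio of equal-time images taken at shifted source positions, can be condensed into one dimension. The sketch below is an idealised, noise-free illustration with invented arrays; it is not the authors' processing code.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 31
activity = 1.0 + 0.05 * rng.standard_normal(n)   # sheet-source non-uniformity (unknown)
response = 1.0 + 0.03 * rng.standard_normal(n)   # camera non-uniformity (unknown)

# Equal-time images of the source at two positions one pixel apart:
I0 = response * activity                          # nominal position
I1 = response * np.roll(activity, 1)              # shifted by one pixel

# The camera response cancels in the ratio, leaving the activity ratio of the
# two source portions imaged in the same detector region:
a = np.ones(n)
for x in range(1, n):
    a[x] = a[x - 1] * I0[x] / I1[x]               # a(x)/a(x-1) = I0(x)/I1(x)
a /= a[n // 2]                                    # normalise to the centre pixel
print(np.max(np.abs(a - activity / activity[n // 2])))   # ~1e-15: mapping recovered
```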

  19. Source-term model for the SYVAC3-NSURE performance assessment code

    International Nuclear Information System (INIS)

    Rowat, J.H.; Rattan, D.S.; Dolinar, G.M.

    1996-11-01

    Radionuclide contaminants in wastes emplaced in disposal facilities will not remain in those facilities indefinitely. Engineered barriers will eventually degrade, allowing radioactivity to escape from the vault. The radionuclide release rate from a low-level radioactive waste (LLRW) disposal facility, the source term, is a key component in the performance assessment of the disposal system. This report describes the source-term model that has been implemented in Ver. 1.03 of the SYVAC3-NSURE (Systems Variability Analysis Code generation 3-Near Surface Repository) code. NSURE is a performance assessment code that evaluates the impact of near-surface disposal of LLRW through the groundwater pathway. The source-term model described here was developed for the Intrusion Resistant Underground Structure (IRUS) disposal facility, which is a vault that is to be located in the unsaturated overburden at AECL's Chalk River Laboratories. The processes included in the vault model are roof and waste package performance, and diffusion, advection and sorption of radionuclides in the vault backfill. The model presented here was developed for the IRUS vault; however, it is applicable to other near-surface disposal facilities. (author). 40 refs., 6 figs
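    The vault-model processes named here, diffusion, advection and sorption of radionuclides in the backfill, can be pictured with a toy 1D transport solver. The sketch below folds linear equilibrium sorption into a retardation factor; every parameter value is a placeholder, not a value from the IRUS assessment.

```python
import numpy as np

nx, dx, dt = 100, 0.05, 1e-3            # cells, cell size [m], time step [a]
D, v, lam = 1e-3, 0.01, 0.02            # dispersion [m2/a], pore velocity [m/a], decay [1/a]
rho_b, Kd, theta = 1600.0, 0.005, 0.3   # bulk density [kg/m3], Kd [m3/kg], porosity
R = 1.0 + rho_b * Kd / theta            # retardation factor from linear sorption

C = np.zeros(nx)
C[0] = 1.0                              # fixed concentration at the degraded vault wall
for _ in range(20000):
    adv = -v * (C[1:-1] - C[:-2]) / dx                  # upwind advection
    dif = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2    # dispersion/diffusion
    # decay removes sorbed and dissolved radionuclide alike, hence lam * R:
    C[1:-1] += dt * (adv + dif - lam * R * C[1:-1]) / R
    C[0], C[-1] = 1.0, C[-2]            # inlet and free-outflow boundaries
print(C[::20].round(4))                 # concentration profile snapshot
```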

  20. Relationship between running kinematic changes and time limit at vVO2max. DOI: http://dx.doi.org/10.5007/1980-0037.2012v14n4p428

    Directory of Open Access Journals (Sweden)

    Sebastião Iberes Lopes Melo

    2012-07-01

    Full Text Available Exhaustive running at maximal oxygen uptake velocity (vVO2max) can alter running kinematic parameters and increase energy cost over time. The aims of the present study were to compare characteristics of ankle and knee kinematics during running at vVO2max and to verify the relationship between changes in kinematic variables and time limit (Tlim). Eleven male volunteers, recreational players of team sports, performed an incremental running test until volitional exhaustion to determine vVO2max, and a constant-velocity test at vVO2max. Subjects were filmed continuously from the left sagittal plane at 210 Hz for further kinematic analysis. Maximal plantar flexion during swing (p<0.01) was the only variable that increased significantly from the beginning to the end of the run. The increase in ankle angle at contact was the only variable related to Tlim (r=0.64; p=0.035) and explained 34% of the performance in the test. These findings suggest that the individuals under study maintained a stable running style at vVO2max and that the increase in plantar flexion explained performance in this test when it was applied to non-runners.

  1. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources; SCRIC: un code pour calculer l'absorption et l'emission detaillees de plasmas hors equilibre, inhomogenes et etendus; application aux sources EUV a base de xenon

    Energy Technology Data Exchange (ETDEWEB)

    Gaufridy de Dortan, F. de

    2006-07-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling, or even supra-configuration modelling, for mid-Z plasmas. But in some cases configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on precise HULLAC energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data. It is therefore possible to use complex hydrodynamic files, even on personal computers, in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)
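    The last step described, solving the radiative transfer equation through a wide, inhomogeneous plasma, reduces per line of sight to marching the formal solution across cells. A minimal sketch, with emissivity and opacity arrays standing in for the detailed atomic-physics output:

```python
import numpy as np

def formal_solution(eta, chi, ds):
    """March dI/ds = eta - chi*I through a row of homogeneous cells:
    I_out = I_in * exp(-tau) + S * (1 - exp(-tau)), with S = eta/chi.
    eta, chi: per-cell emissivity and opacity (one frequency); ds: cell size."""
    I = 0.0
    for e, c in zip(eta, chi):
        tau = c * ds
        S = e / c if c > 0.0 else 0.0   # cell source function
        I = I * np.exp(-tau) + S * (1.0 - np.exp(-tau))
    return I

# Toy inhomogeneous slab: an emissive core behind a cooler absorbing layer.
eta = np.array([1.0, 1.0, 0.1, 0.1])
chi = np.array([0.5, 0.5, 2.0, 2.0])
print(formal_solution(eta, chi, ds=1.0))
```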

  2. Observational goals for Max '91 to identify the causative agent for impulsive bursts

    International Nuclear Information System (INIS)

    Batchelor, D.A.

    1989-01-01

    Recent studies of impulsive hard x-ray and microwave bursts suggest that a propagating causative agent with a characteristic velocity of the order of 1000 km/s is responsible for these bursts. The results of these studies are summarized and observable distinguishing characteristics of the various possible agents are highlighted, with emphasis on key observational goals for the Max '91 campaigns. The most likely causative agents suggested by the evidence are shocks, thermal conduction fronts, and propagating modes of magnetic reconnection in flare plasmas. With new instrumentation planned for Max '91, high-spatial-resolution observations of hard x-ray sources have the potential to identify the agent by revealing detailed features of source spatial evolution. Observations with the Very Large Array and other radio imaging instruments are of great importance, as is detailed modeling of coronal loop structures to place limits on their density and temperature profiles. With the combined hard x-ray and microwave imaging observations, aided by loop-model results, the simplest causative agent to rule out would be the propagating modes of magnetic reconnection. To fit the observational evidence, reconnection modes would need to travel at approximately the same velocity (the Alfven velocity) in different coronal structures that vary in length by a factor of 10^3. Over such a vast range in loop lengths, it is difficult to believe that the Alfven velocity is constant. Thermal conduction fronts would be suggested by sources that expand along the direction of B and exhibit relatively little particle precipitation. Particle acceleration due to shocks could produce more diverse, radially expanding source geometries with precipitation at loop footpoints.

  3. A max version of Perron--Frobenius theorem for nonnegative tensor

    OpenAIRE

    Afshin, Hamid Reza; Shojaeifard, Ali Reza

    2015-01-01

    In this paper we generalize the max algebra system of nonnegative matrices to the class of nonnegative tensors and derive its fundamental properties. If $\mathbb{A} \in \mathbb{R}_+^{[m,n]}$ is a nonnegative, essentially positive tensor satisfying condition class NC, we prove that there exist $\mu(\mathbb{A})$ and a corresponding positive vector $x$ such that $\max_{1 \le i_2 \cdots i_m \le n} \{ a_{i i_2 \cdots i_m} x_{i_2} \cdots$ ...

  4. Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm

    Directory of Open Access Journals (Sweden)

    Sandeep Kakde

    2017-12-01

    Full Text Available For the binary field and long code lengths, Low-Density Parity-Check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error-correction performance and therefore enlarge the design space for communication systems. In this paper we compare different digital modulation techniques and find that BPSK outperforms the other techniques in terms of BER. We also give the error performance of an LDPC decoder over an AWGN channel using the min-sum algorithm. A VLSI architecture is proposed which uses the value-reuse property of the min-sum algorithm and gives high throughput. The proposed design has been implemented and tested on a Xilinx Virtex 5 FPGA. The MATLAB model of the LDPC decoder gives bit error rates in the range of 10^-1 to 10^-3.5 at SNR = 1 to 2 for 20 iterations, i.e., good BER performance. The latency of the parallel decoder design has also been reduced. The implementation achieves a maximum frequency of 141.22 MHz and a throughput of 2.02 Gbps while consuming less area.
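    For readers unfamiliar with the decoder itself, a minimal flooding-schedule min-sum sketch follows. It is deliberately unoptimized; the value-reuse, layered VLSI design of the paper shares minimum computations across check nodes, which this plain Python version does not attempt. The toy (7,4) Hamming matrix and LLR values are our own illustration.

```python
import numpy as np

def min_sum_decode(H, llr, max_iters=20):
    """Flooding-schedule min-sum LDPC decoding (illustrative sketch).
    H: (m, n) parity-check matrix of 0/1 ints, every row of weight >= 2.
    llr: length-n channel log-likelihood ratios (positive favours bit 0)."""
    m, n = H.shape
    c2v = np.zeros((m, n))                      # check-to-variable messages
    hard = (llr < 0).astype(int)
    for _ in range(max_iters):
        post = llr + c2v.sum(axis=0)            # posterior LLR for every bit
        hard = (post < 0).astype(int)
        if not np.any(H @ hard % 2):            # all parity checks satisfied
            return hard, True
        for i in range(m):                      # update each check node
            cols = np.flatnonzero(H[i])
            v = post[cols] - c2v[i, cols]       # extrinsic variable-to-check input
            sgn = np.where(v < 0, -1.0, 1.0)
            mag = np.abs(v)
            for k, j in enumerate(cols):        # min-sum: product of signs, min of
                rest = np.delete(np.arange(len(cols)), k)   # magnitudes over others
                c2v[i, j] = sgn[rest].prod() * mag[rest].min()
    return hard, False

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])          # (7,4) Hamming code as a toy example
llr = np.array([2.1, -1.8, 1.3, 2.4, 1.9, -0.7, 1.6])
print(min_sum_decode(H, llr))
```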

  5. GRHydro: a new open-source general-relativistic magnetohydrodynamics code for the Einstein toolkit

    International Nuclear Information System (INIS)

    Mösta, Philipp; Haas, Roland; Ott, Christian D; Reisswig, Christian; Mundim, Bruno C; Faber, Joshua A; Noble, Scott C; Bode, Tanja; Löffler, Frank; Schnetter, Erik

    2014-01-01

    We present the new general-relativistic magnetohydrodynamics (GRMHD) capabilities of the Einstein toolkit, an open-source community-driven numerical relativity and computational relativistic astrophysics code. The GRMHD extension of the toolkit builds upon previous releases and implements the evolution of relativistic magnetized fluids in the ideal MHD limit in fully dynamical spacetimes using the same shock-capturing techniques previously applied to hydrodynamical evolution. In order to maintain the divergence-free character of the magnetic field, the code implements both constrained transport and hyperbolic divergence cleaning schemes. We present test results for a number of MHD tests in Minkowski and curved spacetimes. Minkowski tests include aligned and oblique planar shocks, cylindrical explosions, magnetic rotors, Alfvén waves and advected loops, as well as a set of tests designed to study the response of the divergence cleaning scheme to numerically generated monopoles. We study the code’s performance in curved spacetimes with spherical accretion onto a black hole on a fixed background spacetime and in fully dynamical spacetimes by evolutions of a magnetized polytropic neutron star and of the collapse of a magnetized stellar core. Our results agree well with exact solutions where these are available and we demonstrate convergence. All code and input files used to generate the results are available on http://einsteintoolkit.org. This makes our work fully reproducible and provides new users with an introduction to applications of the code. (paper)

  6. A min-max variational principle

    International Nuclear Information System (INIS)

    Georgiev, P.G.

    1995-11-01

    In this paper a variational principle for min-max problems is proved that is in the same spirit as the Deville-Godefroy-Zizler variational principle for minimization problems. A localization theorem is presented in which the min-max points of the perturbed function are localized with respect to a given ε-min-max point. 3 refs.

  7. New data towards the development of a comprehensive taphonomic framework for the Late Jurassic Cleveland-Lloyd Dinosaur Quarry, Central Utah

    Directory of Open Access Journals (Sweden)

    Joseph E. Peterson

    2017-06-01

    Full Text Available The Cleveland-Lloyd Dinosaur Quarry (CLDQ) is the densest deposit of Jurassic theropod dinosaurs discovered to date. Unlike typical Jurassic bone deposits, it is dominated by the presence of Allosaurus fragilis. Since excavation began in the 1920s, numerous hypotheses have been put forward to explain the taphonomy of CLDQ, including a predator trap, a drought assemblage, and a poison spring. In an effort to reconcile the various interpretations of the quarry and reach a consensus on the depositional history of CLDQ, new data are required to develop a robust taphonomic framework congruent with all available evidence. Here we present two new data sets that aid in the development of such a framework. First, x-ray fluorescence of CLDQ sediments indicates elevated barite and sulfide minerals relative to other sediments from the Morrison Formation in the region, suggesting an ephemeral environment dominated by periods of hypereutrophic conditions during bone accumulation. Second, the degree of abrasion and hydraulic equivalency of small bone fragments dispersed throughout the matrix were analyzed. Results of these analyses suggest that the bone fragments are autochthonous or parautochthonous and are derived from bones deposited in the assemblage rather than transported. The variability in abrasion exhibited by the fragments is most parsimoniously explained by local periodic reworking and redeposition during seasonal fluctuations throughout the duration of the quarry assemblage. Collectively, these data support previous interpretations that CLDQ represents an attritional assemblage in a poorly drained overbank deposit where vertebrate remains were introduced post-mortem to an ephemeral pond during flood conditions. Furthermore, while the elevated heavy metals detected at the quarry are not likely the primary driver for the accumulation of carcasses, they are likely the result of multiple sources.

  8. Breakdown of the dissipationless quantum Hall state: Quantised steps and analogies with classical and quantum fluid dynamics

    International Nuclear Information System (INIS)

    Eaves, L.

    2001-01-01

    The breakdown of the integer quantum Hall effect at high currents sometimes occurs as a series of regular steps in the dissipative voltage drop. Such steps were first observed in the Hall bars used to maintain the US Resistance Standard, but have also been reported in other devices. It is proposed that the origin of the steps can be understood in terms of an instability in the dissipationless flow at high electron drift velocities. The instability is induced by impurity- or defect-related inter-Landau-level scattering processes in local macroscopic regions of the Hall bar. Electron-hole pairs (magneto-excitons) are generated in the quantum Hall fluid in these regions, and the electronic motion can be envisaged as a quantum analogue of the Kármán vortex street which forms when a classical fluid flows past an obstacle. (author)

  9. The use and misuse of V(c,max) in Earth System Models.

    Science.gov (United States)

    Rogers, Alistair

    2014-02-01

    Earth System Models (ESMs) aim to project global change. Central to this aim is the need to accurately model global carbon fluxes. Photosynthetic carbon dioxide assimilation by the terrestrial biosphere is the largest of these fluxes, and in many ESMs is represented by the Farquhar, von Caemmerer and Berry (FvCB) model of photosynthesis. The maximum rate of carboxylation by the enzyme Rubisco, commonly termed Vc,max, is a key parameter in the FvCB model. This study investigated the derivation of the values of Vc,max used to represent different plant functional types (PFTs) in ESMs. Four methods for estimating Vc,max were identified: (1) an empirical or (2) a mechanistic relationship was used to relate Vc,max to leaf N content; (3) Vc,max was estimated using an approach based on the optimization of photosynthesis and respiration; or (4) a user-defined Vc,max was calibrated to obtain a target model output. Despite representing the same PFTs, the land-model components of ESMs were parameterized with a wide range of values for Vc,max (-46% to +77% of the PFT mean). In many cases, parameterization was based on limited data sets and poorly defined coefficients that were used to adjust model parameters and set PFT-specific values for Vc,max. Examination of the models that linked leaf N mechanistically to Vc,max identified potential changes to fixed parameters that collectively would decrease Vc,max by 31% in C3 plants and 11% in C4 plants. Plant trait databases are now available that offer an excellent opportunity for models to update PFT-specific parameters used to estimate Vc,max. However, data for parameterizing some PFTs, particularly those in the Tropics and the Arctic, are either highly variable or largely absent.
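    To see where Vc,max enters, here is the Rubisco-limited rate of the FvCB model as a short sketch. The kinetic constants are typical 25 °C literature values and the example numbers are placeholders, not parameters from any particular ESM.

```python
def rubisco_limited_A(Vcmax, Ci, O=210.0, Kc=404.9, Ko=278.4,
                      Gamma_star=42.75, Rd=1.0):
    """Rubisco-limited net CO2 assimilation in the FvCB model.
    Vcmax [umol m-2 s-1], Ci [ubar], O [mbar], Kc [ubar], Ko [mbar],
    Gamma_star [ubar] (CO2 compensation point), Rd [umol m-2 s-1]."""
    Wc = Vcmax * (Ci - Gamma_star) / (Ci + Kc * (1.0 + O / Ko))
    return Wc - Rd

# The ~31% decrease in Vcmax suggested for C3 plants propagates almost
# linearly into the Rubisco-limited flux:
print(rubisco_limited_A(60.0, 250.0), rubisco_limited_A(60.0 * 0.69, 250.0))
```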

  10. "The Sky's Things", |xam Bushman 'Astrological Mythology' as recorded in the Bleek and Lloyd Manuscripts

    Science.gov (United States)

    Hollman, J. C.

    2007-07-01

    The Bleek and Lloyd Manuscripts are an extraordinary resource that comprises some 12 000 pages of |xam Bushman beliefs collected in the 1870s in Cape Town, South Africa. About 17% of the collection concerns beliefs and observations of celestial bodies. This paper summarises |xam knowledge about the origins of the celestial bodies as recorded in the manuscripts and situates this within the larger context of the |xam worldview. The stars and planets originate from a mythological past in which they lived as 'people' who hunted and gathered as the |xam did in the past, but who also had characteristics that were to make them the entities that we recognise today. Certain astronomical bodies have consciousness and supernatural potency. They exert an influence over people's everyday lives.

  11. The hypothesis of statistical jet evolution confronted with e+e- annihilation data

    International Nuclear Information System (INIS)

    Ochs, W.

    1983-07-01

    We describe the process e+e- → hadrons in a 'dynamical' phase space, where energy and momentum are quantised in a volume which expands with the velocity of light in a sequence of discrete time steps. Our hypothesis of statistical evolution, which is based on an appropriate application of the equipartition principle, determines uniquely the distribution over the resolvable states in this dynamical phase space and leads to a branching process. Neglecting all degrees of freedom except energy and momentum, and restricting to final-state pions, we arrive at a minimal model with no other parameters than h, c and m_π. We compare this model in detail with data on multiplicities, inclusive spectra and energy-energy correlations; new energy-flow measurements will be proposed. The low-energy region (1 < W < 5 GeV) may provide a clue on the role of color as a new degree of freedom. (orig.)

  12. Landau degeneracy and black hole entropy

    International Nuclear Information System (INIS)

    Costa, M.S.; Perry, M.J.

    1998-01-01

    We consider the supergravity solution describing a configuration of intersecting D4-branes with non-vanishing world-volume gauge fields. The entropy of such a black hole is calculated in terms of the D-branes' quantised charges. The non-extreme solution is also considered and the corresponding thermodynamical quantities are calculated in terms of a D-brane/anti-D-brane system. To perform the quantum mechanical D-brane analysis we study open strings with their ends on branes with a magnetic condensate. Applying the results to our D-brane system, we find perfect agreement between the D-brane entropy counting and the corresponding semi-classical result. The Landau degeneracy of the open-string states describing the excitations of the D-brane system enters in a crucial way. We also derive the near-extreme results, which agree with the semi-classical calculations. (orig.)

  13. A Csup(*)-algebra approach to the Schwinger model

    International Nuclear Information System (INIS)

    Carey, A.L.; Hurst, C.A.

    1981-01-01

    If cutoffs are introduced then existing results in the literature show that the Schwinger model is dynamically equivalent to a boson model with quadratic Hamiltonian. However, the process of quantising the Schwinger model destroys local gauge invariance. Gauge invariance is restored by the addition of a counterterm, which may be seen as a finite renormalisation, whereupon the Schwinger model becomes dynamically equivalent to a linear boson gauge theory. This linear model is exactly soluble. We find that different treatments of the supplementary (i.e. Lorentz) condition lead to boson models with rather different properties. We choose one model and construct, from the gauge invariant subalgebra, a class of inequivalent charge sectors. We construct sectors which coincide with those found by Lowenstein and Swieca for the Schwinger model. A reconstruction of the Hilbert space on which the Schwinger model exists is described and fermion operators on this space are defined. (orig.)

  14. Real Time Management of the AD Schottky/BTF Beam Measurement

    CERN Document Server

    Angoletta, Maria Elena

    2003-01-01

    The AD Schottky and BTF system relies on rapid acquisition and analysis of beam quantisation noise during the AD cycle which is based on an embedded receiver and digital signal processing board hosted in a VME system. The software running in the VME sets up the embedded system and amplifiers, interfaces to the RF and control system, manages the execution speed and sequence constraints with respect to the various operating modes, schedules measurements during the AD cycle and performs post processing taking into account the beam conditions in an autonomous way. The operating modes of the instrument dynamically depend on a detailed configuration, the beam parameters during the AD cycle and optional user interaction. Various subsets of the processed data are available on line and in quasi real time for beam intensity, momentum spread and several spectrum types, which form an important part of AD operation today.

  15. Real time management of the AD Schottky/BTF beam measurement system

    CERN Document Server

    Ludwig, M

    2003-01-01

    The AD Schottky and BTF system relies on rapid acquisition and analysis of beam quantisation noise during the AD cycle which is based on an embedded receiver and digital signal processing board hosted in a VME system. The software running in the VME sets up the embedded system and amplifiers, interfaces to the RF and control system, manages the execution speed and sequence constraints with respect to the various operating modes, schedules measurements during the AD cycle and performs post processing taking into account the beam conditions in an autonomous way. The operating modes of the instrument dynamically depend on a detailed configuration, the beam parameters during the AD cycle and optional user interaction. Various subsets of the processed data are available on line and in quasi real time for beam intensity, momentum spread and several spectrum types, which form an important part of AD operation today.

  16. Non-Abelian T-duality and the AdS/CFT correspondence: New N=1 backgrounds

    Energy Technology Data Exchange (ETDEWEB)

    Itsios, Georgios, E-mail: gitsios@upatras.gr [Department of Engineering Sciences, University of Patras, 26110 Patras (Greece); Department of Mathematics, University of Surrey, Guildford GU2 7XH (United Kingdom); Núñez, Carlos, E-mail: c.nunez@swansea.ac.uk [Swansea University, School of Physical Sciences, Singleton Park, Swansea SA2 8PP (United Kingdom); Sfetsos, Konstadinos, E-mail: k.sfetsos@surrey.ac.uk [Department of Mathematics, University of Surrey, Guildford GU2 7XH (United Kingdom); Department of Engineering Sciences, University of Patras, 26110 Patras (Greece); Thompson, Daniel C., E-mail: dthompson@tena4.vub.ac.be [Theoretische Natuurkunde, Vrije Universiteit Brussel (Belgium); International Solvay Institutes, Pleinlaan 2, B-1050 Brussels (Belgium)

    2013-08-01

    We consider non-Abelian T-duality on N=1 supergravity backgrounds possessing well understood field theory duals. For the case of D3-branes at the tip of the conifold, we dualise along an SU(2) isometry. The result is a type-IIA geometry whose lift to M-theory is of the type recently proposed by Bah et al. as the dual to certain N=1 SCFT quivers produced by M5-branes wrapping a Riemann surface. In the non-conformal cases we find smooth duals in massive IIA supergravity with a Romans mass naturally quantised. We initiate the interpretation of these geometries in the context of AdS/CFT correspondence. We show that the central charge and the entanglement entropy are left invariant by this dualisation. The backgrounds suggest a form of Seiberg duality in the dual field theories which also exhibit domain walls and confinement in the infrared.

  17. Magnetic monopole searches with the MoEDAL experiment at the LHC

    CERN Document Server

    Pinfold, J; Lacarrère, D; Mermod, P; Katre, A

    2014-01-01

    The magnetic monopole appears in theories of spontaneous gauge symmetry breaking and its existence would explain the quantisation of electric charge. MoEDAL is the latest approved LHC experiment, designed to search directly for monopoles. It has now taken data for the first time. The MoEDAL detectors are based on two complementary techniques: nuclear-track detectors are sensitive to the high-ionisation signature expected from a monopole, and the new magnetic monopole trapper (MMT) relies on the stopping and trapping of monopoles inside an aluminium array which is then analysed with a superconducting magnetometer. Preliminary results obtained with a subset of the MoEDAL MMT test array deployed in 2012 are presented, where monopoles with charge above the fundamental unit magnetic charge or mass above 1.5 TeV are probed for the first time at the LHC.

  18. Conformal Infinity

    Directory of Open Access Journals (Sweden)

    Frauendiener Jörg

    2004-01-01

    Full Text Available The notion of conformal infinity has a long history within the research in Einstein's theory of gravity. Today, 'conformal infinity' is related to almost all other branches of research in general relativity, from quantisation procedures to abstract mathematical issues to numerical applications. This review article attempts to show how this concept gradually and inevitably evolved from physical issues, namely the need to understand gravitational radiation and isolated systems within the theory of gravitation, and how it lends itself very naturally to the solution of radiation problems in numerical relativity. The fundamental concept of null-infinity is introduced. Friedrich's regular conformal field equations are presented and various initial value problems for them are discussed. Finally, it is shown that the conformal field equations provide a very powerful method within numerical relativity to study global problems such as gravitational wave propagation and detection.

  19. A new way of visualising quantum fields

    Science.gov (United States)

    Linde, Helmut

    2018-05-01

    Quantum field theory (QFT) is the basis of some of the most fundamental theories in modern physics, but it is not an easy subject to learn. In the present article we intend to pave the way from quantum mechanics to QFT for students at early graduate or advanced undergraduate level. More specifically, we propose a new way of visualising the wave function Ψ of a linear chain of interacting quantum harmonic oscillators, which can be seen as a model for a simple one-dimensional bosonic quantum field. The main idea is to draw randomly chosen classical states of the chain superimposed upon each other and use a grey scale to represent the value of Ψ at the corresponding coordinates of the quantised system. Our goal is to establish a better intuitive understanding of the mathematical objects underlying quantum field theories and solid state physics.
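    The proposed visualisation is easy to reproduce for the ground state, which for a chain of coupled harmonic oscillators is the Gaussian psi0(x) ~ exp(-x^T Omega x / 2) with Omega the matrix square root of the coupling matrix. The sketch below, with an invented 16-site chain, superimposes random classical states with opacity proportional to psi0, in the spirit of the article's grey-scale idea.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import sqrtm

n = 16                                        # oscillators in the chain
# Nearest-neighbour springs plus an on-site term keep K positive definite:
K = 2.2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Omega = np.real(sqrtm(K))                     # psi0(x) ~ exp(-x @ Omega @ x / 2)

rng = np.random.default_rng(0)
for _ in range(300):                          # randomly chosen classical states
    x = rng.uniform(-2.0, 2.0, n)
    psi = np.exp(-0.5 * x @ Omega @ x)        # ground-state amplitude, in (0, 1]
    plt.plot(range(n), x, color="k", alpha=float(psi), lw=0.8)
plt.xlabel("oscillator index")
plt.ylabel("displacement")
plt.title("Chain states shaded by ground-state amplitude")
plt.show()
```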

  20. Imaging x-ray sources at a finite distance in coded-mask instruments

    International Nuclear Information System (INIS)

    Donnarumma, Immacolata; Pacciani, Luigi; Lapshov, Igor; Evangelista, Yuri

    2008-01-01

    We present a method for the correction of beam divergence in finite distance sources imaging through coded-mask instruments. We discuss the defocusing artifacts induced by the finite distance showing two different approaches to remove such spurious effects. We applied our method to one-dimensional (1D) coded-mask systems, although it is also applicable in two-dimensional systems. We provide a detailed mathematical description of the adopted method and of the systematics introduced in the reconstructed image (e.g., the fraction of source flux collected in the reconstructed peak counts). The accuracy of this method was tested by simulating pointlike and extended sources at a finite distance with the instrumental setup of the SuperAGILE experiment, the 1D coded-mask x-ray imager onboard the AGILE (Astro-rivelatore Gamma a Immagini Leggero) mission. We obtained reconstructed images of good quality and high source location accuracy. Finally we show the results obtained by applying this method to real data collected during the calibration campaign of SuperAGILE. Our method was demonstrated to be a powerful tool to investigate the imaging response of the experiment, particularly the absorption due to the materials intercepting the line of sight of the instrument and the conversion between detector pixel and sky direction
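    The geometry behind the correction can be condensed into a 1D toy model: a point source at distance D projects the mask (at height L above the detector) magnified by (D+L)/D, so the reconstruction must cross-correlate with a mask pattern resampled at that magnification. The sketch below is our own illustration, not the SuperAGILE pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
mask = rng.integers(0, 2, n)                  # 1D random coded mask (1 = open)

def project(pos, D, L):
    """Mask element seen from each detector pixel for a source at lateral
    position pos and distance D; shadow magnified by m = (D + L) / D."""
    m = (D + L) / D
    det = np.arange(n, dtype=float)
    return np.round(pos + (det - pos) / m).astype(int) % n

def reconstruct(counts, D, L):
    """Balanced cross-correlation with the mask resampled at the finite-
    distance magnification, removing the defocusing artifact."""
    sky = np.empty(n)
    for s in range(n):
        patt = 2.0 * mask[project(s, D, L)] - 1.0
        sky[s] = counts @ patt
    return sky

counts = mask[project(100, D=3000.0, L=100.0)].astype(float)  # point source at 100
sky = reconstruct(counts, D=3000.0, L=100.0)
print(int(np.argmax(sky)))                    # peak recovered at the source position
```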