WorldWideScience

Sample records for Lloyd-Max Quantiser Shannon Limit Source Coding Uniform Quantiser

  1. Hybrid 3D Fractal Coding with Neighbourhood Vector Quantisation

    Directory of Open Access Journals (Sweden)

    Zhen Yao

    2004-12-01

    Full Text Available A hybrid 3D compression scheme which combines fractal coding with neighbourhood vector quantisation for video and volume data is reported. While fractal coding exploits the redundancy present across different scales, neighbourhood vector quantisation, as a generalisation of translational motion compensation, is a useful method for removing both intra- and inter-frame correlation. The hybrid coder outperforms most of the fractal coders published to date while keeping the algorithm complexity relatively low.

  2. Uniform and Non-Uniform Optimum Scalar Quantizers Performances: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Fendy Santoso

    2008-05-01

    Full Text Available The aim of this research is to investigate source coding, the representation of an information source output by a finite number R of bits/symbol. The performance of optimum quantisers subject to an entropy constraint has been studied. The definitive work in this area is best summarised by Shannon's source coding theorem: a source with entropy H can be encoded with arbitrarily small error probability at any rate R (bits/source output) as long as R > H. Conversely, if R < H, the error probability will be bounded away from zero, independent of the complexity of the encoder and the decoder employed. In this context, the main objective of engineers is to design the optimum code. Unfortunately, the rate-distortion theorem does not provide the recipe for such a design. The theorem does, however, provide the theoretical limit so that we know how close we are to the optimum. A full understanding of the theorem also helps in setting the direction to achieve such an optimum. In this research, we have investigated the performance of two practical scalar quantisers, i.e., a Lloyd-Max quantiser and a uniformly defined one, and also a well-known entropy coding scheme, i.e., Huffman coding, against the theoretically attainable optimum performance given by Shannon's limit. It has been shown that the uniformly defined quantiser can demonstrate superior performance. The performance improvements, in fact, are more noticeable at higher bit rates.
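
    A minimal sketch of the comparison described above, assuming a unit-variance Gaussian source and NumPy (illustrative, not code from the paper): it trains a Lloyd-Max (minimum-MSE) scalar quantiser by alternating the nearest-neighbour and centroid conditions, then contrasts it with a uniform quantiser at the same rate of 3 bits/sample.

        import numpy as np

        def lloyd_max(samples, levels, iters=50):
            # Start the codebook at evenly spaced sample quantiles.
            codebook = np.quantile(samples, np.linspace(0, 1, levels + 2)[1:-1])
            for _ in range(iters):
                # Optimal decision boundaries lie midway between adjacent levels.
                edges = (codebook[:-1] + codebook[1:]) / 2
                cells = np.digitize(samples, edges)
                # Each output level moves to the centroid of its decision cell.
                codebook = np.array([samples[cells == k].mean() for k in range(levels)])
            return codebook, edges

        rng = np.random.default_rng(0)
        x = rng.normal(size=200_000)    # assumed unit-variance Gaussian source
        levels = 8                      # R = 3 bits/sample

        cb, edges = lloyd_max(x, levels)
        mse_lloyd = np.mean((x - cb[np.digitize(x, edges)]) ** 2)

        # Uniform (equal step) quantiser over the same support, midpoint levels.
        lo, hi = x.min(), x.max()
        step = (hi - lo) / levels
        cells_u = np.clip(np.floor((x - lo) / step), 0, levels - 1)
        mse_uniform = np.mean((x - (lo + (cells_u + 0.5) * step)) ** 2)
        print(f"Lloyd-Max MSE: {mse_lloyd:.4f}  uniform MSE: {mse_uniform:.4f}")

    With fixed-length codes the Lloyd-Max design yields the lower distortion; the paper's comparison additionally entropy-codes the quantiser output (e.g. with Huffman coding), which is where the uniform quantiser becomes competitive, increasingly so at higher bit rates.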

  3. Stochastic quantisation: theme and variation

    International Nuclear Information System (INIS)

    Klauder, J.R.; Kyoto Univ.

    1987-01-01

    The paper on stochastic quantisation is a contribution to the book commemorating the sixtieth birthday of E.S. Fradkin. Stochastic quantisation reformulates Euclidean quantum field theory in the language of Langevin equations. The generalised free field is discussed from the viewpoint of stochastic quantisation. An artificial family of highly singular model theories wherein the space-time derivatives are dropped altogether is also examined. Finally a modified form of stochastic quantisation is considered. (U.K.)
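
    To make the Langevin reformulation concrete, here is a hedged zero-dimensional toy (not from the paper): for the Euclidean action S(x) = x²/2, the Langevin equation dx = -S'(x) dτ + √2 dW relaxes in fictitious time τ to the distribution proportional to exp(-S), so fictitious-time averages reproduce Euclidean expectations such as ⟨x²⟩ = 1.

        import numpy as np

        # Euler-Maruyama integration of dx = -x dt + sqrt(2) dW for S(x) = x**2 / 2.
        rng = np.random.default_rng(0)
        dt, n_steps = 0.01, 200_000
        x, acc, count = 0.0, 0.0, 0
        for step in range(n_steps):
            x += -x * dt + np.sqrt(2 * dt) * rng.standard_normal()
            if step > n_steps // 10:     # discard the equilibration transient
                acc += x * x
                count += 1
        print("<x^2> ~", acc / count)    # close to the exact Euclidean value 1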

  4. Are the gravitational waves quantised?

    International Nuclear Information System (INIS)

    Lovas, Istvan

    1997-01-01

    If the gravitational waves are classical objects then the value of their correlation function is 1. If they are quantised, then there exist two possibilities: the gravitational waves are either completely coherent, in which case their correlation function is again 1, or they are only partially coherent, in which case their correlation function is expected to deviate from 1. Unfortunately such a deviation is not a sufficient proof of the quantised character of the gravitational waves. If the gravitational waves are quantised and generated by the change of the background metric then they can be in a squeezed state. In a squeezed state there is a chance of correlation between the phase of the wave and the quantum fluctuations. The observation of such a correlation would be a genuine proof of the quantised character of gravitational waves.

  5. BRST Quantisation of Histories Electrodynamics

    OpenAIRE

    Noltingk, D.

    2001-01-01

    This paper is a continuation of earlier work where a classical history theory of pure electrodynamics was developed in which the history fields have \emph{five} components. The extra component is associated with an extra constraint, thus enlarging the gauge group of histories electrodynamics. In this paper we quantise the classical theory developed previously by two methods. Firstly we quantise the reduced classical history space, to obtain a reduced quantum history theory. Secondly we qu...

  6. Are the gravitational waves quantised?

    International Nuclear Information System (INIS)

    Lovas, I.

    1998-01-01

    The question whether gravitational waves are quantised or not can be investigated with the help of correlation measurements. If the gravitational waves are classical objects then the value of their correlation function is 1. However, if they are quantised, then there exist two possibilities: the gravitational waves are either completely coherent, in which case the correlation function is again 1, or they are partially coherent, in which case the correlation function is expected to deviate from 1. If the gravitational waves are generated by the change of the background metric then they can be in a squeezed state. In a squeezed state there is a chance of correlation between the phase of the wave and the quantum fluctuations. (author)

  7. Quantisation of super Teichmueller theory

    International Nuclear Information System (INIS)

    Aghaei, Nezhla; Hamburg Univ.; Pawelkiewicz, Michal; Techner, Joerg

    2015-12-01

    We construct a quantisation of the Teichmueller spaces of super Riemann surfaces using coordinates associated to ideal triangulations of super Riemann surfaces. A new feature is the non-trivial dependence on the choice of a spin structure which can be encoded combinatorially in a certain refinement of the ideal triangulation. By constructing a projective unitary representation of the groupoid of changes of refined ideal triangulations we demonstrate that the dependence of the resulting quantum theory on the choice of a triangulation is inessential.

  8. Gauge symmetries, topology, and quantisation

    International Nuclear Information System (INIS)

    Balachandran, A.P.

    1994-01-01

    The following two loosely connected sets of topics are reviewed in these lecture notes: (1) gauge invariance, its treatment in field theories and its implications for internal symmetries and edge states such as those in the quantum Hall effect; (2) quantisation on multiply connected spaces and a topological proof of the spin-statistics theorem which avoids quantum field theory and relativity. Under (1), after explaining the meaning of gauge invariance and the theory of constraints, we discuss boundary conditions on gauge transformations and the definition of internal symmetries in gauge field theories. We then show how the edge states in the quantum Hall effect can be derived from the Chern-Simons action using the preceding ideas. Under (2), after explaining the significance of fibre bundles for quantum physics, we review quantisation on multiply connected spaces in detail, explaining also mathematical ideas such as those of the universal covering space and the fundamental group. These ideas are then used to prove the aforementioned topological spin-statistics theorem.

  9. The quantisation and measurement of momentum observables

    International Nuclear Information System (INIS)

    Wan, K.K.; McFarlane, K.

    1980-01-01

    Mackey's scheme for the quantisation of classical momenta generating complete vector fields (complete momenta) is introduced; the differential operators corresponding to these momenta are introduced and discussed, and an isomorphism is shown to exist between the subclass of first-order self-adjoint differential operators whose symmetric restrictions are essentially self-adjoint and the complete classical momenta. Difficulties in the quantisation of incomplete momenta are discussed, and a critique given. Finally, in an attempt to relate the concept of completeness to measurability, concepts of classical and quantum global measurability are introduced and shown to require completeness. These results afford strong physical insight into the nature of complete momenta, and lead us to suggest a quantisability condition based upon global measurability. (author)

  10. Alternative to dead reckoning for model state quantisation when migrating to a quantised discrete event architecture

    CSIR Research Space (South Africa)

    Duvenhage, A

    2008-06-01

    Full Text Available Some progress has recently been made on migrating an existing distributed parallel discrete time simulator to a quantised discrete event architecture. The migration is done to increase the scale of the real-time simulations supported...

  11. Quantisation deforms w∞ to W∞ gravity

    International Nuclear Information System (INIS)

    Bergshoeff, E.; Howe, P.S.; State Univ. of New York, Stony Brook, NY; Pope, C.N.; Sezgin, E.; Shen, X.; Stelle, K.S.

    1991-01-01

    Quantising a classical theory of w∞ gravity requires the introduction of an infinite number of counterterms in order to remove matter-dependent anomalies. We show that these counterterms correspond precisely to a renormalisation of the classical w∞ currents to quantum W∞ currents. (orig.)

  12. Factors Influencing Energy Quantisation | Adelabu | Global Journal ...

    African Journals Online (AJOL)

    Department of Physics, College of Science & Agriculture, University of Abuja, P. M. B. 117, Abuja FCT, Nigeria. Investigations of energy quantisation in a range of multiple quantum well (MQW) systems using effective mass band structure calculations including non-parabolicity in both the well and barrier layers are reported.

  13. Testing quantised inertia on emdrives with dielectrics

    Science.gov (United States)

    McCulloch, M. E.

    2017-05-01

    Truncated-cone-shaped cavities with microwaves resonating within them (emdrives) move slightly towards their narrow ends, in contradiction to standard physics. This effect has been predicted by a model called quantised inertia (MiHsC) which assumes that the inertia of the microwaves is caused by Unruh radiation, more of which is allowed at the wide end. Therefore, photons going towards the wide end gain inertia, and to conserve momentum the cavity must move towards its narrow end, as observed. A previous analysis with quantised inertia predicted a controversial photon acceleration, which is shown here to be unnecessary. The previous analysis also mispredicted the thrust in those emdrives with dielectrics. It is shown here that having a dielectric at one end of the cavity is equivalent to widening the cavity at that end, and when dielectrics are considered, then quantised inertia predicts these results as well as the others, except for Shawyer's first test where the thrust is predicted to be the right size but in the wrong direction. As a further test, quantised inertia predicts that an emdrive's thrust can be enhanced by using a dielectric at the wide end.

  14. Quantisation deforms w∞ to W∞ gravity

    NARCIS (Netherlands)

    Bergshoeff, E.; Howe, P.S.; Pope, C.N.; Sezgin, E.; Shen, X.; Stelle, K.S.

    1991-01-01

    Quantising a classical theory of w∞ gravity requires the introduction of an infinite number of counterterms in order to remove matter-dependent anomalies. We show that these counterterms correspond precisely to a renormalisation of the classical w∞ currents to quantum W∞ currents.

  15. Exact quantisation of the relativistic Hopfield model

    Energy Technology Data Exchange (ETDEWEB)

    Belgiorno, F., E-mail: francesco.belgiorno@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo 32, IT-20133 Milano (Italy); INdAM-GNFM (Italy); Cacciatori, S.L., E-mail: sergio.cacciatori@uninsubria.it [Department of Science and High Technology, Università dell’Insubria, Via Valleggio 11, IT-22100 Como (Italy); INFN sezione di Milano, via Celoria 16, IT-20133 Milano (Italy); Dalla Piazza, F., E-mail: f.dallapiazza@gmail.com [Università “La Sapienza”, Dipartimento di Matematica, Piazzale A. Moro 2, I-00185, Roma (Italy); Doronzo, M., E-mail: m.doronzo@uninsubria.it [Department of Science and High Technology, Università dell’Insubria, Via Valleggio 11, IT-22100 Como (Italy)

    2016-11-15

    We investigate the quantisation in the Heisenberg representation of a relativistically covariant version of the Hopfield model for dielectric media, which entails the interaction of the quantum electromagnetic field with the matter dipole fields, represented by a mesoscopic polarisation field. A full quantisation of the model is provided in a covariant gauge, with the aim of maintaining explicit relativistic covariance. Breaking of the Lorentz invariance due to the intrinsic presence in the model of a preferred reference frame is also taken into account. Relativistic covariance forces us to deal with the unphysical (scalar and longitudinal) components of the fields; furthermore, it introduces, in a subtler form, the well-known dipole ghost of standard QED in a covariant gauge. In order to correctly dispose of this contribution, we implement a generalised Lautrup trick. Furthermore, causality and the relation of the model with the Wightman axioms are also discussed.

  16. Time-space noncommutativity: quantised evolutions

    International Nuclear Information System (INIS)

    Balachandran, Aiyalam P.; Govindarajan, Thupil R.; Teotonio-Sobrinho, Paulo; Martins, Andrey Gomes

    2004-01-01

    In previous work, we developed quantum physics on the Moyal plane with time-space noncommutativity, basing ourselves on the work of Doplicher et al. Here we extend it to certain noncommutative versions of the cylinder, R³ and R × S³. In all these models, only discrete time translations are possible, a result known before in the first two cases. One striking consequence of quantised time translations is that even though a time-independent hamiltonian is an observable, in scattering processes it is conserved only modulo 2π/θ, where θ is the noncommutativity parameter. (In contrast, on a one-dimensional periodic lattice of lattice spacing a and length L = Na, momentum is observable, and can be conserved, only mod 2π/a.) Suggestions for further study of this effect are made. Scattering theory is formulated and an approach to quantum field theory is outlined. (author)

  17. Projective flatness in the quantisation of bosons and fermions

    Science.gov (United States)

    Wu, Siye

    2015-07-01

    We compare the quantisation of linear systems of bosons and fermions. We recall the appearance of a projectively flat connection and results on parallel transport in the quantisation of bosons. We then discuss pre-quantisation and quantisation of fermions using the calculus of fermionic variables. We define a natural connection on the bundle of Hilbert spaces and show that it is projectively flat. This identifies, up to a phase, equivalent spinor representations constructed by various polarisations. We introduce the concept of metaplectic correction for fermions and show that the bundle of corrected Hilbert spaces is naturally flat. We then show that the parallel transport in the bundle of Hilbert spaces along a geodesic is a rescaled projection provided that the geodesic lies within the complement of a cut locus. Finally, we study the bundle of Hilbert spaces when there is a symmetry.

  18. Stochastic Automata for Outdoor Semantic Mapping using Optimised Signal Quantisation

    DEFF Research Database (Denmark)

    Caponetti, Fabio; Blas, Morten Rufus; Blanke, Mogens

    2011-01-01

    Autonomous robots require many types of information to obtain intelligent and safe behaviours. For outdoor operations, semantic mapping is essential and this paper proposes a stochastic automaton to localise the robot within the semantic map. For correct modelling and classification under... uncertainty, this paper suggests quantising robotic perceptual features, according to a probabilistic description, and then optimising the quantisation. The proposed method is compared with other state-of-the-art techniques that can assess the confidence of their classification. Data recorded on an autonomous...

  19. 3D Model Retrieval Based on Vector Quantisation Index Histograms

    International Nuclear Information System (INIS)

    Lu, Z M; Luo, H; Pan, J S

    2006-01-01

    This paper proposes a novel technique for retrieving 3D mesh models using vector quantisation index histograms. Firstly, points are sampled uniformly on the mesh surface. Secondly, for each point, five features representing global and local properties are extracted, giving a feature vector per point. Thirdly, we select several models from each class and employ their feature vectors as a training set. After training using the LBG algorithm, a public codebook is constructed. Next, the codeword index histograms of the query model and of those in the database are computed. The last step is to compute the distance between the histogram of the query and those of the models in the database. Experimental results show the effectiveness of our method.
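
    A sketch of the pipeline just described, with random arrays standing in for the five per-point features (hypothetical placeholders) and SciPy's kmeans/vq as a stand-in for LBG codebook training:

        import numpy as np
        from scipy.cluster.vq import kmeans, vq

        rng = np.random.default_rng(0)
        train_feats = rng.normal(size=(5000, 5))   # pooled features of training models
        query_feats = rng.normal(size=(800, 5))    # features sampled from the query model
        model_feats = rng.normal(size=(800, 5))    # features of one database model

        # Public codebook; SciPy's kmeans (Lloyd iterations) stands in for LBG.
        codebook, _ = kmeans(train_feats, 64)

        def index_histogram(feats, codebook):
            indices, _ = vq(feats, codebook)       # nearest-codeword index per point
            hist, _ = np.histogram(indices, bins=len(codebook), range=(0, len(codebook)))
            return hist / hist.sum()

        # Rank database models by the distance between index histograms.
        h_query = index_histogram(query_feats, codebook)
        h_model = index_histogram(model_feats, codebook)
        print("L1 histogram distance:", np.abs(h_query - h_model).sum())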

  20. Self-organised fractional quantisation in a hole quantum wire

    Science.gov (United States)

    Gul, Y.; Holmes, S. N.; Myronov, M.; Kumar, S.; Pepper, M.

    2018-03-01

    We have investigated hole transport in quantum wires formed by electrostatic confinement in strained germanium two-dimensional layers. The ballistic conductance characteristics show the regular staircase of quantum levels with plateaux at n·2e²/h, where n is an integer, e is the fundamental unit of charge and h is Planck’s constant. However, as the carrier concentration is reduced, the quantised levels show a behaviour that is indicative of the formation of a zig-zag structure, and new quantised plateaux appear at low temperatures. In units of 2e²/h the new quantised levels correspond to values of n = 1/4, reducing to 1/8 in the presence of a strong parallel magnetic field which lifts the spin degeneracy but does not quantise the wavefunction. A further plateau is observed corresponding to n = 1/32 which does not change in the presence of a parallel magnetic field. These values indicate that the system is behaving as if charge were fractionalised with values e/2 and e/4; possible mechanisms are discussed.

  1. On the quantisation of one-dimensional bags

    International Nuclear Information System (INIS)

    Fairley, G.T.; Squires, E.J.

    1976-01-01

    The quantisation of one-dimensional MIT bags by expanding the fields as a sum of classical modes and truncating the series after the first term is discussed. The lowest states of a bag in a world containing two scalar quark fields are obtained. Problems associated with the zero-point oscillations of the field are discussed. (Auth.)

  2. Twistor quantisation of open strings in three dimensions

    International Nuclear Information System (INIS)

    Shaw, W.T.

    1987-01-01

    The paper treats the first quantisation of loops in real four-dimensional twistor space. Such loops correspond to open strings in three-dimensional spacetime. The geometry and reality structures pertaining to twistors in three dimensions are reviewed and the twistor description of null geodesics is presented as a prototype for the discussion of null curves. The classical twistor structure of null curves is then described. The symplectic structure is exhibited and used to investigate the constraint algebra. Expressions for the momentum operators are found. The symplectic structure defines natural canonical variables for covariant twistor quantisation and the consequences of carrying this out are described. A twistor representation of the Virasoro algebra with central charge 2 is found and some solutions of the quantum constraints are exhibited. (author)

  3. The Dirac quantisation condition for fluxes on four-manifolds

    International Nuclear Information System (INIS)

    Alvarez, M.; Olive, D.I.

    2000-01-01

    A systematic treatment is given of the Dirac quantisation condition for electromagnetic fluxes through two-cycles on a four-manifold space-time which can be very complicated topologically, provided only that it is connected, compact, oriented and smooth. This is sufficient for the quantised Maxwell theory on it to satisfy electromagnetic duality properties. The results depend upon whether the complex wave function needed for the argument is scalar or spinorial in nature. An essential step is the derivation of a "quantum Stokes' theorem" for the integral of the gauge potential around a closed loop on the manifold. This can only be done for an exponentiated version of the line integral (the "Wilson loop") and the result again depends on the nature of the complex wave functions, through the appearance of what is known as a Stiefel-Whitney cohomology class in the spinor case. A nice picture emerges providing a physical interpretation, in terms of quantised fluxes and wave-functions, of mathematical concepts such as spin structures, spin^c structures, the Stiefel-Whitney class and Wu's formula. Relations appear between these, electromagnetic duality and the Atiyah-Singer index theorem. Possible generalisations to higher dimensions of space-time in the presence of branes are mentioned. (orig.)

  4. Global stabilisation of large-scale hydraulic networks with quantised and positive proportional controls

    DEFF Research Database (Denmark)

    Jensen, Tom Nørgaard; Wisniewski, Rafal

    2013-01-01

    a set of decentralised, logarithmically quantised and constrained control actions with properly designed quantisation parameters. That is, an attractor set with a compact basin of attraction exists. Subsequently, the basin can be increased by increasing the control gains. In our work, this result... is extended by showing that an attractor set with a global basin of attraction exists for arbitrary values of positive control gains, given that the upper level of the quantiser is properly designed. Furthermore, the proof is given for general monotone quantisation maps. Since the basin of attraction...

  5. Sp(2) covariant quantisation of general gauge theories

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez-Bello, J L

    1994-11-01

    The Sp(2) covariant quantisation of gauge theories is studied. The geometrical interpretation of gauge theories in terms of quasi-principal fibre bundles Q(M_s, G_s) is reviewed. The Sp(2) algebra of ordinary Yang-Mills theory is then described. A consistent formulation of covariant Lagrangian quantisation for general gauge theories based on Sp(2) BRST symmetry is established. The original N = 1, ten-dimensional superparticle is considered as an example of infinitely reducible gauge algebras, and its Sp(2) BRST invariant action is given explicitly. (author). 18 refs.

  6. Sp(2) covariant quantisation of general gauge theories

    International Nuclear Information System (INIS)

    Vazquez-Bello, J.L.

    1994-11-01

    The Sp(2) covariant quantisation of gauge theories is studied. The geometrical interpretation of gauge theories in terms of quasi-principal fibre bundles Q(M_s, G_s) is reviewed. The Sp(2) algebra of ordinary Yang-Mills theory is then described. A consistent formulation of covariant Lagrangian quantisation for general gauge theories based on Sp(2) BRST symmetry is established. The original N = 1, ten-dimensional superparticle is considered as an example of infinitely reducible gauge algebras, and its Sp(2) BRST invariant action is given explicitly. (author). 18 refs

  7. Quantisation of the holographic Ricci dark energy model

    Energy Technology Data Exchange (ETDEWEB)

    Albarran, Imanol; Bouhmadi-López, Mariam, E-mail: imanol@ubi.pt, E-mail: mbl@ubi.pt [Departamento de Física, Universidade da Beira Interior, 6200 Covilhã (Portugal)

    2015-08-01

    While general relativity is an extremely robust theory to describe the gravitational interaction in our Universe, it is expected to fail close to singularities like the cosmological ones. On the other hand, it is well known that some dark energy models might induce future singularities; this can be the case, for example, within the setup of the Holographic Ricci Dark Energy (HRDE) model. In this work, we perform a cosmological quantisation of the HRDE model and obtain under which conditions a cosmic doomsday can be avoided within the quantum realm. We show as well that this quantum model avoids not only future singularities but also the past Big Bang.

  8. Do the SuperKamiokande atmospheric neutrino results explain electric charge quantisation?

    International Nuclear Information System (INIS)

    Foot, R.; Volkas, R.R.

    1998-08-01

    It is shown that the SuperKamiokande atmospheric neutrino results explain electric charge quantisation, provided that the oscillation mode is ν_μ → ν_τ and that the neutrino mass is of the Majorana type. It is emphasised that neutrino oscillation and neutrinoless double beta decay experiments provide important information regarding the seemingly unrelated issue of electric charge quantisation.

  9. Understanding the Quantum Computational Speed-up via De-quantisation

    Directory of Open Access Journals (Sweden)

    Cristian S. Calude

    2010-06-01

    Full Text Available While it seems possible that quantum computers may allow for algorithms offering a computational speed-up over classical algorithms for some problems, the issue is poorly understood. We explore this computational speed-up by investigating the ability to de-quantise quantum algorithms into classical simulations of the algorithms which are as efficient in both time and space as the original quantum algorithms. The process of de-quantisation helps formulate conditions to determine if a quantum algorithm provides a real speed-up over classical algorithms. These conditions can be used to develop new quantum algorithms more effectively (by avoiding features that could allow the algorithm to be efficiently classically simulated), as well as providing the potential to create new classical algorithms (by using features which have proved valuable for quantum algorithms). Results on many different methods of de-quantisation are presented, as well as a general formal definition of de-quantisation. De-quantisations employing higher-dimensional classical bits, as well as those using matrix simulations, put emphasis on entanglement in quantum algorithms; a key result is that any algorithm in which the entanglement is bounded is de-quantisable. These methods are contrasted with the stabiliser formalism de-quantisations due to the Gottesman-Knill theorem, as well as those which take advantage of the topology of the circuit for a quantum algorithm. The benefits of the different methods are contrasted, and the importance of a range of techniques is emphasised. We further discuss some features of quantum algorithms which current de-quantisation methods do not cover.

  10. On the relation between reduced quantisation and quantum reduction for spherical symmetry in loop quantum gravity

    International Nuclear Information System (INIS)

    Bodendorfer, N; Zipfel, A

    2016-01-01

    Building on a recent proposal for a quantum reduction to spherical symmetry from full loop quantum gravity, we investigate the relation between a quantisation of spherically symmetric general relativity and a reduction at the quantum level. To this end, we generalise the previously proposed quantum reduction by dropping the gauge fixing condition on the radial diffeomorphisms, thus allowing us to make direct contact with previous work on reduced quantisation. A dictionary between spherically symmetric variables and observables with respect to the reduction constraints in the full theory is discussed, as well as an embedding of reduced quantum states to a subsector of the quantum symmetry reduced full theory states. On this full theory subsector, the quantum algebra of the mentioned observables is computed and shown to qualitatively reproduce the quantum algebra of the reduced variables in the large quantum number limit for a specific choice of regularisation. Insufficiencies in recovering the reduced algebra quantitatively from the full theory are attributed to the oversimplified full theory quantum states we use. (paper)

  11. Formal Series of Generalised Functions and Their Application to Deformation Quantisation

    OpenAIRE

    Tosiek, Jaromir

    2016-01-01

    Foundations of the formal series *-calculus in deformation quantisation are discussed. Several classes of continuous linear functionals over algebras applied in classical and quantum physics are introduced. The notion of positivity in formal series calculus is proposed. Problems with defining quantum states over the set of formal series are analysed.

  12. Refined algebraic quantisation in a system with nonconstant gauge invariant structure functions

    International Nuclear Information System (INIS)

    Martínez-Pascual, Eric

    2013-01-01

    In a previous work [J. Louko and E. Martínez-Pascual, “Constraint rescaling in refined algebraic quantisation: Momentum constraint,” J. Math. Phys. 52, 123504 (2011)], refined algebraic quantisation (RAQ) within a family of classically equivalent constrained Hamiltonian systems that are related to each other by rescaling one momentum-type constraint was investigated. In the present work, the first steps to generalise this analysis to cases where more constraints occur are developed. The system under consideration contains two momentum-type constraints, originally abelian, where rescalings of these constraints by a non-vanishing function of the coordinates are allowed. These rescalings induce structure functions at the level of the gauge algebra. Providing a specific parametrised family of real-valued scaling functions, the implementation of the corresponding rescaled quantum momentum-type constraints is performed using RAQ when the gauge algebra (i) remains abelian and (ii) turns into an algebra of a nonunimodular group with nonconstant gauge invariant structure functions. Case (ii) becomes the first example known to the author where an open algebra is handled in refined algebraic quantisation. Challenging issues that arise in the presence of non-gauge invariant structure functions are also addressed.

  13. Dynamic Shannon Coding

    OpenAIRE

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
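
    For reference, a small implementation of the static Shannon code that the dynamic algorithm builds on (a sketch, not the paper's dynamic variant): each symbol receives the first ⌈-log₂ p⌉ bits of the binary expansion of the cumulative probability of the more probable symbols.

        import math

        def shannon_code(probs):
            # Sort symbols by decreasing probability; symbol s receives the first
            # ceil(-log2 p) bits of the binary expansion of the cumulative
            # probability of all symbols placed before it.
            symbols = sorted(probs, key=probs.get, reverse=True)
            code, cumulative = {}, 0.0
            for s in symbols:
                length = math.ceil(-math.log2(probs[s]))
                frac, bits = cumulative, ""
                for _ in range(length):
                    frac *= 2
                    bits += "1" if frac >= 1 else "0"
                    frac -= int(frac)
                code[s] = bits
                cumulative += probs[s]
            return code

        print(shannon_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
        # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}: prefix-free, and each
        # codeword is within one bit of the symbol's self-information.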

  14. Φ-Ψ model for electrodynamics in dielectric media: exact quantisation in the Heisenberg representation

    Energy Technology Data Exchange (ETDEWEB)

    Belgiorno, Francesco [Politecnico di Milano, Dipartimento di Matematica, Milano (Italy); INdAM-GNFM, Milano (Italy); Cacciatori, Sergio L. [Università dell'Insubria, Department of Science and High Technology, Como (Italy); INFN sezione di Milano, Milano (Italy); Dalla Piazza, Francesco [Università “La Sapienza”, Dipartimento di Matematica, Roma (Italy); Doronzo, Michele [Università dell'Insubria, Department of Science and High Technology, Como (Italy)

    2016-06-15

    We investigate the quantisation in the Heisenberg representation of a model which represents a simplification of the Hopfield model for dielectric media, where the electromagnetic field is replaced by a scalar field φ and the role of the polarisation field is played by a further scalar field ψ. The model, which is quadratic in the fields, is still characterised by a non-trivial physical content, as the physical particles correspond to the polaritons of the standard Hopfield model of condensed matter physics. Causality is also taken into account and the standard interaction representation is also discussed. (orig.)

  15. Construction of quantised Higgs-like fields in two dimensions

    International Nuclear Information System (INIS)

    Albeverio, S.; Hoegh-Krohn, R.; Holden, H.; Kolsrud, T.

    1989-01-01

    A mathematical construction of Higgs-like fields in two dimensions is presented, including passage to the continuum and infinite-volume limits. In the limit, a quantum field theory obeying the Osterwalder-Schrader axioms is obtained. The method is based on representing the Schwinger functions in terms of stochastic multiplicative curve integrals and Brownian bridges. (orig.)

  16. Quantum gravity in three dimensions, Witten spinors and the quantisation of length

    Science.gov (United States)

    Wieland, Wolfgang

    2018-05-01

    In this paper, I investigate the quantisation of length in euclidean quantum gravity in three dimensions. The starting point is the classical hamiltonian formalism in a cylinder of finite radius. At this finite boundary, a counter term is introduced that couples the gravitational field in the interior to a two-dimensional conformal field theory for an SU(2) boundary spinor, whose norm determines the conformal factor between the fiducial boundary metric and the physical metric in the bulk. The equations of motion for this boundary spinor are derived from the boundary action and turn out to be the two-dimensional analogue of the Witten equations appearing in Witten's proof of the positive mass theorem. The paper concludes with some comments on the resulting quantum theory. It is shown, in particular, that the length of a one-dimensional cross section of the boundary turns into a number operator on the Fock space of the theory. The spectrum of this operator is discrete and matches the results from loop quantum gravity in the spin network representation.

  17. Canonical quantisation via conditional symmetries of the closed FLRW model coupled to a scalar field

    International Nuclear Information System (INIS)

    Zampeli, Adamantia

    2015-01-01

    We study the classical, quantum and semiclassical solutions of a Robertson-Walker spacetime coupled to a massless scalar field. The Lagrangian of these minisuperspace models is singular and the application of the theory of Noether symmetries is modified to include the conditional symmetries of the corresponding (weakly vanishing) Hamiltonian. These are found to be the simultaneous symmetries of the supermetric and the superpotential. The quantisation is performed adopting the Dirac proposal for constrained systems. The innovation in the approach we use is that the integrals of motion related to the conditional symmetries are promoted to operators together with the Hamiltonian and momentum constraints. These additional conditions imposed on the wave function render the system integrable and it is possible to obtain solutions of the Wheeler-DeWitt equation. Finally, we use the wave function to perform a semiclassical analysis following Bohm and make contact with the classical solution. The analysis starts with a modified Hamilton-Jacobi equation from which the semiclassical momenta are defined. The solutions of the semiclassical equations are then studied and compared to the classical ones in order to understand the nature and behaviour of the classical singularities. (paper)

  18. Electric charge quantisation from gauge invariance of a Lagrangian: a catalogue of baryon number violating scalar interactions

    International Nuclear Information System (INIS)

    Bowes, J.P.; Foot, R.; Volkas, R.R.

    1997-01-01

    In gauge theories like the standard model, the electric charges of the fermions can be heavily constrained from the classical structure of the theory and from the cancellation of anomalies. There is however mounting evidence suggesting that these anomaly constraints are not as well motivated as the classical constraints. In light of this, possible modifications of the minimal standard model are discussed which will give a complete electric charge quantisation from classical constraints alone. Because these modifications to the Standard Model involve the consideration of baryon number violating scalar interactions, a complete catalogue of the simplest ways to modify the Standard Model is presented so as to introduce explicit baryon number violation. 9 refs., 7 figs

  19. The principle of the indistinguishability of identical particles and the Lie algebraic approach to the field quantisation

    International Nuclear Information System (INIS)

    Govorkov, A.B.

    1980-01-01

    The density matrix, rather than the wavefunction, describing a system of a fixed number of non-relativistic identical particles is subject to second quantisation. Here the bilinear operators which move a particle from a given state to another appear and satisfy the Lie algebraic relations of the unitary group SU(ρ) when the dimension ρ → ∞. Bringing systems with a variable number of particles into consideration implies the extension of this algebra into one of the simple Lie algebras of the classical (orthogonal, symplectic or unitary) groups in even-dimensional spaces. These Lie algebras correspond to the para-Fermi-, para-Bose- and para-uni-quantisation of fields, respectively. (author)

  20. Electric charge quantisation from gauge invariance of a Lagrangian: a catalogue of baryon number violating scalar interactions

    Energy Technology Data Exchange (ETDEWEB)

    Bowes, J.P.; Foot, R.; Volkas, R.R.

    1997-06-01

    In gauge theories like the standard model, the electric charges of the fermions can be heavily constrained from the classical structure of the theory and from the cancellation of anomalies. There is however mounting evidence suggesting that these anomaly constraints are not as well motivated as the classical constraints. In light of this, possible modifications of the minimal standard model are discussed which will give a complete electric charge quantisation from classical constraints alone. Because these modifications to the Standard Model involve the consideration of baryon number violating scalar interactions, a complete catalogue of the simplest ways to modify the Standard Model is presented so as to introduce explicit baryon number violation. 9 refs., 7 figs.

  1. Box-counting dimension revisited: presenting an efficient method of minimising quantisation error and an assessment of the self-similarity of structural root systems

    Directory of Open Access Journals (Sweden)

    Martin Bouda

    2016-02-01

    Full Text Available Fractal dimension (FD), estimated by box-counting, is a metric used to characterise plant anatomical complexity or space-filling characteristics for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantisation error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterise the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitised in 3D and subjected to box-counts. A pattern search algorithm was used to minimise QE by optimising grid placement, and its efficiency was compared to the brute-force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE due to both grid position and orientation was a significant source of error in FD estimates, but pattern search provided an efficient means of minimising it. Pattern search had a higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute-force method. Our representations of coarse root system digitisations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did...
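
    A minimal sketch of box-counting with grid-placement optimisation; random grid translations stand in for the paper's pattern-search algorithm, and a hypothetical plane-filling point set (expected FD ≈ 2) replaces the digitised root systems:

        import numpy as np

        def box_count(points, size, offset):
            # Number of occupied boxes for one placement of a grid of this size.
            cells = np.floor((points - offset) / size).astype(int)
            return len({tuple(c) for c in cells})

        def fd_estimate(points, sizes, n_offsets=32, seed=0):
            # Slope of log N(s) against log(1/s). Taking, at each scale, the
            # minimum count over translated grids reduces quantisation error
            # (a simple stand-in for pattern-search placement optimisation).
            rng = np.random.default_rng(seed)
            counts = []
            for s in sizes:
                offsets = rng.uniform(0, s, size=(n_offsets, points.shape[1]))
                counts.append(min(box_count(points, s, o) for o in offsets))
            slope, _ = np.polyfit(np.log(1 / np.asarray(sizes)), np.log(counts), 1)
            return slope

        pts = np.random.default_rng(1).uniform(size=(50_000, 2))
        print("FD estimate:", fd_estimate(pts, sizes=[0.2, 0.1, 0.05, 0.025]))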

  2. Regeneration limit of classical Shannon capacity

    Science.gov (United States)

    Sorokina, M. A.; Turitsyn, S. K.

    2014-05-01

    Since Shannon derived the seminal formula for the capacity of the additive linear white Gaussian noise channel, it has commonly been interpreted as the ultimate limit on the error-free information transmission rate. However, capacity above the corresponding linear channel limit can be achieved when noise is suppressed using nonlinear elements, that is, a regenerative function not available in linear systems. Regeneration is a fundamental concept that extends from biology to optical communications. All-optical regeneration of coherent signals has attracted particular attention. Surprisingly, the quantitative impact of regeneration on the Shannon capacity has remained unstudied. Here we propose a new method of designing regenerative transmission systems with capacity that is higher than that of the corresponding linear channel, and illustrate it by proposing the application of the Fourier transform for efficient regeneration of multilevel multidimensional signals. The regenerative Shannon limit (the upper bound of regeneration efficiency) is derived.
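
    For orientation, the linear-channel baseline the paper starts from is Shannon's AWGN formula C = B log₂(1 + SNR); a small illustrative sketch (the regenerative limit itself is derived in the paper, not here):

        import math

        def awgn_capacity(snr_db, bandwidth_hz=1.0):
            # Shannon's formula for the linear AWGN channel: C = B * log2(1 + SNR).
            # The regenerative systems discussed above aim to exceed this
            # linear-channel limit by suppressing noise nonlinearly.
            snr = 10.0 ** (snr_db / 10.0)
            return bandwidth_hz * math.log2(1.0 + snr)

        for snr_db in (0, 10, 20, 30):
            print(f"{snr_db:>2} dB -> {awgn_capacity(snr_db):6.3f} bit/s/Hz")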

  3. Quantisation of monotonic twist maps

    International Nuclear Information System (INIS)

    Boasman, P.A.; Smilansky, U.

    1993-08-01

    Using an approach suggested by Moser, classical Hamiltonians are generated that provide an interpolating flow to the stroboscopic motion of maps with a monotonic twist condition. The quantum properties of these Hamiltonians are then studied in analogy with recent work on the semiclassical quantisation of systems based on Poincaré surfaces of section. For the generalised standard map, the correspondence with the usual classical and quantum results is shown, and the advantages of the quantum Moser Hamiltonian demonstrated. The same approach is then applied to the free motion of a particle on a 2-torus, and to the circle billiard. A natural quantisation condition based on the eigenphases of the unitary time-development operator is applied, yielding the exact eigenvalues of the torus, but only the semiclassical eigenvalues for the billiard; an explanation for this failure is proposed. It is also seen how iterating the classical map commutes with the quantisation. (authors)

  4. Examination of the 0.7(2e2/h) feature in the quantised conduction of a quantum point contact: varying the effective g-factor with hydrostatic pressure

    International Nuclear Information System (INIS)

    Wirtz, R; Taylor, R.P.; Newbury, R.; Nicholls, J.T.; Tribe, W.R.; Simmons, M.Y.

    1999-01-01

    Full text: The conductance of a quasi-one-dimensional channel defined by a split-gate quantum point contact (QPC) on the surface of an AlGaAs/GaAs heterostructure shows quantised steps at n(2e²/h) where n is an integer. This experimental result is due to the reduction of the number of current-carrying one-dimensional subbands caused by narrowing the QPC. The theoretical explanation, however, does not take electron-electron interactions into account. Recently Thomas et al. discovered a new feature at a non-integral value of n ∼ 0.7 in very low-disorder samples (μ ∼ 450 m² V⁻¹ s⁻¹) which may originate from electron-electron interactions (e.g. spin polarisation at zero magnetic field). We are currently investigating the 0.7 feature as a function of applied hydrostatic pressure. Hydrostatic pressure affects the band structure and therefore the effective mass and the effective g-factor. In the case of bulk GaAs, hydrostatic pressure reduces the magnitude of the effective g-factor, reaching a value of zero at approximately 1.7×10⁹ Pa. Using a non-magnetic BeCu clamp cell we achieve pressures up to 1×10⁹ Pa, reducing the effective g-factor by more than 60%, in a temperature range 30 mK to 300 K and at magnetic fields up to 17 T. We are therefore able to map the 0.7 feature as a function of p, T and B to assess the evidence for an electron-electron interaction driven origin of the 0.7 feature. We will present the preliminary results of our measurements.

  5. Software Code Smell Prediction Model Using Shannon, Rényi and Tsallis Entropies

    Directory of Open Access Journals (Sweden)

    Aakanshi Gupta

    2018-05-01

    Full Text Available The current era demands high-quality software in a limited time period to achieve new goals and heights. To meet user requirements, source code undergoes frequent modifications which can generate bad smells in software that deteriorate its quality and reliability. The source code of open-source software is easily accessible by any developer, and thus frequently modified. In this paper, we have proposed a mathematical model to predict bad smells using the concept of entropy as defined by information theory. The open-source software Apache Abdera is taken into consideration for calculating the bad smells. Bad smells are collected using a detection tool from sub-components of the Apache Abdera project, and different measures of entropy (Shannon, Rényi and Tsallis) are computed. By applying non-linear regression techniques, the bad smells that can arise in future versions of the software are predicted based on the observed bad smells and entropy measures. The proposed model has been validated using goodness-of-fit parameters (prediction error, bias, variation, and Root Mean Squared Prediction Error (RMSPE)). The values of model performance statistics (R², adjusted R², Mean Square Error (MSE) and standard error) also justify the proposed model. We have compared the results of the prediction model with the observed results on real data. The results of the model might be helpful for software development industries and future researchers.
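
    The three entropy measures named above, sketched for a hypothetical distribution of bad-smell counts across sub-components (illustrative values, not the paper's data):

        import numpy as np

        def shannon_entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))          # bits

        def renyi_entropy(p, alpha):
            # Defined for alpha != 1; tends to the Shannon entropy as alpha -> 1.
            return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

        def tsallis_entropy(p, q):
            # Defined for q != 1; a non-logarithmic generalisation of entropy.
            return (1.0 - np.sum(p ** q)) / (q - 1.0)

        counts = np.array([8, 4, 2, 2], dtype=float)   # hypothetical smell counts
        p = counts / counts.sum()
        print(shannon_entropy(p), renyi_entropy(p, 2.0), tsallis_entropy(p, 2.0))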

  6. A development of the Gibbs potential of a quantised system made up of a large number of particles. III. The contribution of binary collisions

    International Nuclear Information System (INIS)

    BLOCH, Claude; DE DOMINICIS, Cyrano

    1959-01-01

    Starting from an expansion derived in a previous work, we study the contribution to the Gibbs potential of the two-body dynamical correlations, taking into account the statistical correlations. Such a contribution is of interest for low-density systems at low temperature. In the zero-density limit, it reduces to the Beth-Uhlenbeck expression for the second virial coefficient. For a system of fermions in the zero-temperature limit, it yields the contribution of the Brueckner reaction matrix to the ground state energy, plus, under certain conditions, additional terms of the form exp(β|Δ|), where the Δ are the binding energies of 'bound states' of the type first discussed by L. Cooper. Finally, we study the wave function of two particles immersed in a medium (defined by its temperature and chemical potential). It satisfies an equation generalizing the Bethe-Goldstone equation to an arbitrary temperature. Reprint of a paper published in Nuclear Physics, 10, p. 509-526, 1959.

  7. Dynamics of quantised vortices in superfluids

    CERN Document Server

    Sonin, Edouard B

    2016-01-01

    A comprehensive overview of the basic principles of vortex dynamics in superfluids, this book addresses the problems of vortex dynamics in all three superfluids available in laboratories (⁴He, ³He, and BECs of cold atoms) alongside discussions of the elasticity of vortices, forces on vortices, and vortex mass. Beginning with a summary of classical hydrodynamics, the book guides the reader through examinations of vortex dynamics from large scales to the microscopic scale. Topics such as vortex arrays in rotating superfluids, bound states in vortex cores and the interaction of vortices with quasiparticles are discussed. The final chapter of the book considers implications of vortex dynamics for superfluid turbulence using simple scaling and symmetry arguments. Written from a unified point of view that avoids complicated mathematical approaches, this text is ideal for students and researchers working with vortex dynamics in superfluids, superconductors, magnetically ordered materials, neutron stars and cosmological mo...

  8. Evidence for Quantisation in Planetary Ring Systems

    OpenAIRE

    WAYTE, RICHARD

    2017-01-01

    Absolute radial positions of the main features in Saturn's ring system have been calculated by adapting the quantum theory of atomic spectra. Fine rings superimposed upon broad rings are found to be covered by a harmonic series of the form N ∝ A(r)^(1/2), where N and A are integers. Fourier analysis of the ring system shows that the spectral amplitude fits a response profile which is characteristic of a resonant system. The rings of Jupiter, Uranus and Neptune also obey the same rules. Involvement o...

  9. Random amino acid mutations and protein misfolding lead to Shannon limit in sequence-structure communication.

    Directory of Open Access Journals (Sweden)

    Andreas Martin Lisewski

    2008-09-01

    Full Text Available The transmission of genomic information from coding sequence to protein structure during protein synthesis is subject to stochastic errors. To analyze transmission limits in the presence of spurious errors, Shannon's noisy channel theorem is applied to a communication channel between amino acid sequences and their structures established from a large-scale statistical analysis of protein atomic coordinates. While Shannon's theorem confirms that in close-to-native conformations information is transmitted with limited error probability, additional random errors in sequence (amino acid substitutions) and in structure (structural defects) trigger a decrease in communication capacity toward a Shannon limit at 0.010 bits per amino acid symbol at which communication breaks down. In several controls, simulated error rates above a critical threshold and models of unfolded structures always produce capacities below this limiting value. Thus an essential biological system can be realistically modeled as a digital communication channel that is (a) sensitive to random errors and (b) restricted by a Shannon error limit. This forms a novel basis for predictions consistent with observed rates of defective ribosomal products during protein synthesis, and with the estimated excess of mutual information in protein contact potentials.
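
    As a toy analogue of a capacity decaying toward a breakdown threshold, consider the binary symmetric channel, whose capacity C = 1 - h₂(p) falls as the error rate p grows; once C drops below the rate the source demands, reliable transmission becomes impossible (an illustration of the principle, not the paper's sequence-structure channel):

        import math

        def h2(p):
            # Binary entropy function, in bits.
            if p in (0.0, 1.0):
                return 0.0
            return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

        for p in (0.001, 0.01, 0.05, 0.1, 0.3, 0.5):
            print(f"error rate {p:>5}: capacity {1 - h2(p):.4f} bit/symbol")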

  10. Active Fault Near-Source Zones Within and Bordering the State of California for the 1997 Uniform Building Code

    Science.gov (United States)

    Petersen, M.D.; Toppozada, Tousson R.; Cao, T.; Cramer, C.H.; Reichle, M.S.; Bryant, W.A.

    2000-01-01

    The fault sources in the Project 97 probabilistic seismic hazard maps for the state of California were used to construct maps for defining near-source seismic coefficients, Na and Nv, incorporated in the 1997 Uniform Building Code (ICBO 1997). The near-source factors are based on the distance from a known active fault that is classified as either Type A or Type B. To determine the near-source factor, four pieces of geologic information are required: (1) recognizing a fault and determining whether or not the fault has been active during the Holocene, (2) identifying the location of the fault at or beneath the ground surface, (3) estimating the slip rate of the fault, and (4) estimating the maximum earthquake magnitude for each fault segment. This paper describes the information used to produce the fault classifications and distances.

  11. School Dress Codes and Uniform Policies.

    Science.gov (United States)

    Anderson, Wendell

    2002-01-01

    Opinions abound on what students should wear to class. Some see student dress as a safety issue; others see it as a student-rights issue. The issue of dress codes and uniform policies has been tackled in the classroom, the boardroom, and the courtroom. This Policy Report examines the whole fabric of the debate on dress codes and uniform policies…

  12. Uniform emergency codes: will they improve safety?

    Science.gov (United States)

    2005-01-01

    There are pros and cons to uniform code systems, according to emergency medicine experts. Uniformity can be a benefit when ED nurses and other staff work at several facilities. It's critical that your staff understand not only what the codes stand for, but what they must do when codes are called. If your state institutes a new system, be sure to hold regular drills to familiarize your ED staff.

  13. On the Fock quantisation of the hydrogen atom

    International Nuclear Information System (INIS)

    Cordani, B.

    1989-01-01

    In a celebrated work, Fock explained the degeneracy of the energy levels of the Kepler problem (or hydrogen atom) (Z. Phys. 98, 145-54, 1935) in terms of the dynamical symmetry group SO(4). Making a stereographic projection in momentum space and rescaling the momenta with the eigenvalues of the energy, he showed that the problem is equivalent to the geodesic flow on the sphere S³. In this way, the 'hidden' symmetry SO(4) is made manifest. The present author has shown that the classical n-dimensional Kepler problem can be better understood by enlarging the phase space of the geodesic motion on Sⁿ and including time and energy as canonical variables: a subsequent symplectomorphism transforms the motion on Sⁿ into the Kepler problem. We want to prove in this paper that the Fock procedure is the implementation at the 'quantum' level of the above-mentioned symplectomorphism. The interest is not restricted to the old Kepler problem: more recently two other systems exhibiting the same symmetries have been found. They are the McIntosh-Cisneros-Zwanziger system and the geodesic motion in Euclidean Taub-NUT space. Both have a physical interest: they indeed describe a spinless test particle moving outside the core of a self-dual monopole and the asymptotic scattering of two self-dual monopoles, respectively. (author)

  14. Student Dress Codes and Uniforms. Research Brief

    Science.gov (United States)

    Johnston, Howard

    2009-01-01

    According to an Education Commission of the States "Policy Report", research on the effects of dress code and school uniform policies is inconclusive and mixed. Some researchers find positive effects; others claim no effects or only perceived effects. While no state has legislatively mandated the wearing of school uniforms, 28 states and…

  15. Asymmetric Joint Source-Channel Coding for Correlated Sources with Blind HMM Estimation at the Receiver

    Directory of Open Access Journals (Sweden)

    Ser Javier Del

    2005-01-01

    Full Text Available We consider the case of two correlated sources. The correlation between them has memory, and it is modelled by a hidden Markov chain. The paper studies the problem of reliable communication of the information sent by one source over an additive white Gaussian noise (AWGN) channel when the output of the other source is available as side information at the receiver. We assume that the receiver has no a priori knowledge of the correlation statistics between the sources. In particular, we propose the use of a turbo code for joint source-channel coding of the transmitted source. The joint decoder uses an iterative scheme where the unknown parameters of the correlation model are estimated jointly within the decoding process. It is shown that reliable communication is possible at signal-to-noise ratios close to the theoretical limits set by the combination of the Shannon and Slepian-Wolf theorems.

  16. Devaney's chaos on uniform limit maps

    International Nuclear Information System (INIS)

    Yan Kesong; Zeng Fanping; Zhang Gengrong

    2011-01-01

    Highlights: → Transitivity may not be inherited even if all the functions in the sequence are mixing. → Sensitivity may not be inherited even if the iterates of the sequence have some uniform convergence. → Some equivalence conditions for the transitivity and sensitivity of the uniform limit function are given. → A non-transitive sequence may converge uniformly to a transitive map. - Abstract: Let (X, d) be a compact metric space and f_n: X → X a sequence of continuous maps such that (f_n) converges uniformly to a map f. The purpose of this paper is to study Devaney's chaos on the uniform limit f. On the one hand, we show that f is not necessarily transitive even if all f_n are mixing, and sensitive dependence on initial conditions may not be inherited by f even if the iterates of the sequence have some uniform convergence, which corrects two wrong claims in the literature. On the other hand, we give some equivalence conditions for the uniform limit f to be transitive and to have sensitive dependence on initial conditions. Moreover, we present an example to show that a non-transitive sequence may converge uniformly to a transitive map.

  17. Limiting precision in differential equation solvers. II Sources of trouble and starting a code

    International Nuclear Information System (INIS)

    Shampine, L.F.

    1978-01-01

    The reasons a class of codes for solving ordinary differential equations might want to use an extremely small step size are investigated. For this class the likelihood of precision difficulties is evaluated and remedies examined. The investigation suggests a way of automatically selecting an initial step size which should be reliably on scale.
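
    One widely used heuristic for picking an initial step that is "reliably on scale" estimates the size of the first derivative, probes with a trial Euler step, and bounds the step by an estimate of the second derivative. The sketch below follows the textbook recipe of Hairer, Nørsett and Wanner; it is an assumption that Shampine's own procedure differs in detail.

      import numpy as np

      def initial_step(f, t0, y0, order, rtol, atol):
          # Scale vector mixes absolute and relative tolerances.
          scale = atol + rtol * np.abs(y0)
          f0 = f(t0, y0)
          d0 = np.linalg.norm(y0 / scale)
          d1 = np.linalg.norm(f0 / scale)
          h0 = 1e-6 if (d0 < 1e-5 or d1 < 1e-5) else 0.01 * d0 / d1
          # Trial Euler step to estimate the second derivative.
          d2 = np.linalg.norm((f(t0 + h0, y0 + h0 * f0) - f0) / scale) / h0
          if max(d1, d2) <= 1e-15:
              h1 = max(1e-6, h0 * 1e-3)
          else:
              h1 = (0.01 / max(d1, d2)) ** (1.0 / (order + 1))
          return min(100 * h0, h1)

      # Example: y' = -50*y starting at y(0) = 1 with a 4th-order method.
      print(initial_step(lambda t, y: -50.0 * y, 0.0, np.array([1.0]), 4, 1e-6, 1e-9))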

  18. Shannon Meets Fick on the Microfluidic Channel: Diffusion Limit to Sum Broadcast Capacity for Molecular Communication.

    Science.gov (United States)

    Bicen, A Ozan; Lehtomaki, Janne J; Akyildiz, Ian F

    2018-03-01

    Molecular communication (MC) over a microfluidic channel with flow is investigated based on Shannon's channel capacity theorem and Fick's laws of diffusion. Specifically, the sum capacity for MC between a single transmitter and multiple receivers (broadcast MC) is studied. The transmitter communicates by using a different type of signaling molecule with each receiver over the microfluidic channel. The transmitted molecules propagate through the microfluidic channel until reaching the corresponding receiver. Although the use of different types of molecules provides orthogonal signaling, the sum broadcast capacity may not scale with the number of receivers due to the physics of the propagation (the interplay between convection and diffusion based on distance). In this paper, the performance of broadcast MC on a microfluidic chip is characterized by studying the physical geometry of the microfluidic channel and leveraging information theory. The convergence of the sum capacity for the microfluidic broadcast channel is analytically investigated based on the physical system parameters with respect to an increasing number of molecular receivers. The analysis presented here can be useful to predict the achievable information rate in microfluidic interconnects for biochemical computation and microfluidic multi-sample assays.
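
    The convergence behaviour can be illustrated numerically. The snippet below is purely illustrative and not the paper's channel model: it simply assumes the per-receiver SNR decays geometrically with the receiver's distance index, so each AWGN-style capacity term shrinks and the running sum saturates.

      import math

      snr0, decay = 10.0, 0.5        # illustrative parameters, not from the paper
      sum_capacity = 0.0
      for n in range(1, 51):
          snr_n = snr0 * decay ** n             # farther receiver -> weaker signal
          sum_capacity += 0.5 * math.log2(1.0 + snr_n)
          if n in (5, 10, 25, 50):
              print(n, round(sum_capacity, 4))  # the sum visibly saturates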

  19. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  20. Coded aperture imaging with uniformly redundant arrays

    International Nuclear Information System (INIS)

    Fenimore, E.E.; Cannon, T.M.

    1980-01-01

    A system is described which uses uniformly redundant arrays to image non-focusable radiation. The array is used in conjunction with a balanced correlation technique to provide a system with no artifacts, so that a virtually limitless signal-to-noise ratio is obtained with high transmission characteristics. The array is mosaicked to reduce the required detector size over conventional array detectors. 15 claims
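
    The balanced correlation technique can be shown in a few lines. In the sketch below (a toy 1-D analogue; a real system uses a 2-D URA), the open holes follow the quadratic residues mod 7, the detector records the source circularly convolved with the aperture, and decoding correlates with a +1/-1 weighted copy of the aperture; the URA property makes every off-peak term identical, so the sidelobes subtract out as a constant offset.

      import numpy as np

      aperture = np.array([0, 1, 1, 0, 1, 0, 0], dtype=float)  # residues mod 7 open
      source = np.array([0, 0, 9, 0, 0, 0, 0], dtype=float)    # one point source
      n = len(aperture)

      # Detector image: source circularly convolved with the aperture pattern.
      recorded = np.array([sum(source[i] * aperture[(k - i) % n] for i in range(n))
                           for k in range(n)])

      # Balanced correlation: open holes weigh +1, closed holes -1.
      g = 2.0 * aperture - 1.0
      decoded = np.array([sum(recorded[k] * g[(k - m) % n] for k in range(n))
                          for m in range(n)])
      print(decoded - decoded.min())   # artifact-free peak at the source position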

  1. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    Full Text Available We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM) codes for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.

  2. Politicas de uniformes y codigos de vestuario (Uniforms and Dress-Code Policies). ERIC Digest.

    Science.gov (United States)

    Lumsden, Linda

    This digest in Spanish examines schools' dress-code policies and discusses the legal considerations and research findings about the effects of such changes. Most revisions to dress codes involve the use of uniforms, typically as a way to curb school violence and create a positive learning environment. A recent survey of secondary school principals…

  3. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Among the earliest discovered codes that approach the Shannon limit of the channel were the low density parity check (LDPC) codes. The term low density arises from the property of the parity check matrix defining the code. We will now define this matrix and the role that it plays in decoding.
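
    A minimal illustration of the membership test that the parity check matrix defines (the toy H below is for illustration only; a genuine LDPC matrix is far larger, with a vanishing fraction of ones):

      import numpy as np

      # Rows are parity checks; c is a codeword iff H c = 0 (mod 2).
      H = np.array([[1, 1, 0, 1, 0, 0],
                    [0, 1, 1, 0, 1, 0],
                    [1, 0, 0, 0, 1, 1]])

      def is_codeword(c):
          return not np.any(H.dot(c) % 2)

      print(is_codeword(np.array([1, 1, 1, 0, 0, 1])))   # True: all checks pass
      print(is_codeword(np.array([1, 1, 1, 0, 0, 0])))   # False: third check fails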

  4. Long GRBs sources population non-uniformity

    Science.gov (United States)

    Arkhangelskaja, Irene

    Long GRBs are observed over a very wide energy band. It is possible to separate two subsets of GRBs according to the presence of a high-energy component (E > 500 MeV). For events of the first type, the energy spectra in the low- and high-energy intervals are similar (as for GRB 021008) and are described by Band, power-law or broken power-law models; they look like usual bursts without emission in the tens-of-MeV region. For example, the Band spectrum of GRB 080916C covers 6 orders of magnitude. Events of the second type contain a new additional high-energy spectral component (for example, GRB 050525B and GRB 090902B). Both types of GRBs have been observed since the beginning of the CGRO mission. The existence of low-energy precursors is typical for bursts of all types. The temporal profiles of both types of bursts can be similar in the various energy regions during some events or different in other cases. The absence of hard-to-soft evolution in the low-energy band and/or the presence of high-energy precursors for some events are special features of the second class of GRBs by the results of preliminary data analysis, and these facts give grounds to suppose differences between the sources of these two GRB subsets. Also, the results of an analysis of the long-GRB redshift distribution have shown that its shape contradicts that of a uniform population of objects in our Metagalaxy, for both the total sample and samples using various redshift-determination methods. These pieces of evidence allow a preliminary conclusion about the non-uniformity of the long-GRB source population.

  5. A development of the Gibbs potential of a quantised system made up of a large number of particles. III. The contribution of binary collisions; Un developpement du potentiel de Gibbs d'un systeme quantique compose d'un grand nombre de particules. III- La contribution des collisions binaires

    Energy Technology Data Exchange (ETDEWEB)

    BLOCH, Claude; DE DOMINICIS, Cyrano [Commissariat a l' energie atomique et aux energies alternatives - CEA, Centre d' etudes Nucleaires de Saclay, Gif-sur-Yvette (France)

    1959-07-01

    Starting from an expansion derived in a previous work, we study the contribution to the Gibbs potential of the two-body dynamical correlations, taking into account the statistical correlations. Such a contribution is of interest for low-density systems at low temperature. In the zero density limit, it reduces to the Beth-Uhlenbeck expression for the second virial coefficient. For a system of fermions in the zero temperature limit, it yields the contribution of the Brueckner reaction matrix to the ground state energy, plus, under certain conditions, additional terms of the form exp(β|Δ|), where the Δ are the binding energies of 'bound states' of the type first discussed by L. Cooper. Finally, we study the wave function of two particles immersed in a medium (defined by its temperature and chemical potential). It satisfies an equation generalizing the Bethe-Goldstone equation to an arbitrary temperature. Reprint of a paper published in Nuclear Physics, 10, p. 509-526, 1959.

  6. Evasive levels in quantisation through wavepacket coupling: a semi-classical investigation

    International Nuclear Information System (INIS)

    Amiot, P.; Giraud, B.

    1984-01-01

    A new method is presented to introduce classical mechanics elements into the problem of obtaining the spectrum of an operator H-circumflex(p-circumflex, q-circumflex). A finite-rank functional space is created by centering complex wavepackets on a discrete number of points on an equi-energy surface of the classical H(p, q) and by placing real wavepackets in the classically forbidden region. The latter span the active subspace, P, and the former the inactive subspace, Q, for an application of the method of Bloch-Horowitz. A semi-classical study of the Green function in the inactive subspace Q, classically allowed, gives a clear explanation of this phenomenon and sheds new light on the significance of this semi-classical approximation for the propagator. An extension to the problem of barrier penetration is proposed. (author)

  7. Coupling n-level Atoms with l-modes of Quantised Light in a Resonator

    International Nuclear Information System (INIS)

    Castaños, O; Cordero, S; Nahmad-Achar, E; López-Peña, R

    2016-01-01

    We study the quantum phase transitions associated to the Hamiltonian of a system of n-level atoms interacting with l modes of electromagnetic radiation in a resonator. The quantum phase diagrams are determined in analytic form by means of a variational procedure where the test function is constructed in terms of a tensorial product of coherent states describing the matter and the radiation field. We demonstrate that the system can be reduced to a set of Dicke models. (paper)

  8. Two-dimensional quantisation of the quasi-Landau hydrogenic spectrum

    International Nuclear Information System (INIS)

    Gallas, J.A.C.; O'Connell, R.F.

    1982-01-01

    Based on the two-dimensional WKB model, an equation is derived from which the non-relativistic quasi-Landau energy spectrum of hydrogen-like atoms may be easily obtained. In addition, the solution of radial equations in the WKB approximation and its relation with models recently used to fit experimental data are discussed. (author)

  9. A Quantised State Systems Approach for Jacobian Free Extended Kalman Filtering

    DEFF Research Database (Denmark)

    Alminde, Lars; Bendtsen, Jan Dimon; Stoustrup, Jakob

    2007-01-01

    Model based methods for control of intelligent autonomous systems rely on a state estimate being available. One of the most common methods to obtain a state estimate for non-linear systems is the Extended Kalman Filter (EKF) algorithm. In order to apply the EKF an expression must be available...

  10. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, offering shifting of processing steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide…

  11. Impact of School Uniforms on Student Discipline and the Learning Climate: A Comparative Case Study of Two Middle Schools with Uniform Dress Codes and Two Middle Schools without Uniform Dress Codes

    Science.gov (United States)

    Dulin, Charles Dewitt

    2016-01-01

    The purpose of this research is to evaluate the impact of uniform dress codes on a school's climate for student behavior and learning in four middle schools in North Carolina. The research will compare the perceptions of parents, teachers, and administrators in schools with uniform dress codes against schools without uniform dress codes. This…

  12. Computer code determination of tolerable accel current and voltage limits during startup of an 80 kV MFTF sustaining neutral beam source

    International Nuclear Information System (INIS)

    Mayhall, D.J.; Eckard, R.D.

    1979-01-01

    We have used a Lawrence Livermore Laboratory (LLL) version of the WOLF ion source extractor design computer code to determine tolerable accel current and voltage limits during startup of a prototype 80 kV Mirror Fusion Test Facility (MFTF) sustaining neutral beam source. Arc current limits are also estimated. The source extractor has gaps of 0.236, 0.721, and 0.155 cm. The effective ion mass is 2.77 AMU. The measured optimum accel current density is 0.266 A/cm². The gradient grid electrode runs at 5/6 V_a (accel voltage). The suppressor electrode voltage is zero for V_a < 3 kV and -3 kV for V_a ≥ 3 kV. The accel current density for optimum beam divergence is obtained for 1 kV ≤ V_a ≤ 80 kV, as are the beam divergence and emittance.

  13. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, X_i in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low-density parity-check accumulate (LDPCA) codes…

  14. Isodose distributions and dose uniformity in the Portuguese gamma irradiation facility calculated using the MCNP code

    CERN Document Server

    Oliveira, C

    2001-01-01

    A systematic study of isodose distributions and dose uniformity in sample carriers of the Portuguese Gamma Irradiation Facility was carried out using the MCNP code. The absorbed dose rate, gamma flux per energy interval and average gamma energy were calculated. For comparison purposes, boxes filled with air and 'dummy' boxes loaded with layers of folded and crumpled newspapers to achieve a given value of density were used. The magnitude of various contributions to the total photon spectra, including source-dependent factors, irradiator structures, sample material and other origins were also calculated.

  15. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental…

  16. Joint source-channel coding using variable length codes

    NARCIS (Netherlands)

    Balakirsky, V.B.

    2001-01-01

    We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m_k, t_k), where m_k is a message generated by the source and t_k is a time instant…

  17. State estimation for networked control systems using fixed data rates

    Science.gov (United States)

    Liu, Qing-Quan; Jin, Fang

    2017-07-01

    This paper investigates state estimation for linear time-invariant systems where sensors and controllers are geographically separated and connected via a bandwidth-limited and errorless communication channel with the fixed data rate. All plant states are quantised, coded and converted together into a codeword in our quantisation and coding scheme. We present necessary and sufficient conditions on the fixed data rate for observability of such systems, and further develop the data-rate theorem. It is shown in our results that there exists a quantisation and coding scheme to ensure observability of the system if the fixed data rate is larger than the lower bound given, which is less conservative than the one in the literature. Furthermore, we also examine the role that the disturbances have on the state estimation problem in the case with data-rate limitations. Illustrative examples are given to demonstrate the effectiveness of the proposed method.
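
    For orientation, the classical form of the data-rate theorem that this result refines reads (standard statement for a linear system with dynamics matrix A; the paper's bound for estimation under disturbances is sharper in detail):

      R \;>\; \sum_{i:\,|\lambda_i(A)| \ge 1} \log_2 |\lambda_i(A)| ,

    where the sum runs over the unstable eigenvalues of A, so the channel must carry at least as many bits per step as the plant's unstable dynamics generate.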

  18. Infinite Shannon entropy

    International Nuclear Information System (INIS)

    Baccetti, Valentina; Visser, Matt

    2013-01-01

    Even if a probability distribution is properly normalizable, its associated Shannon (or von Neumann) entropy can easily be infinite. We carefully analyze conditions under which this phenomenon can occur. Roughly speaking, this happens when arbitrarily small amounts of probability are dispersed into an infinite number of states; we shall quantify this observation and make it precise. We develop several particularly simple, elementary, and useful bounds, and also provide some asymptotic estimates, leading to necessary and sufficient conditions for the occurrence of infinite Shannon entropy. We go to some effort to keep technical computations as simple and conceptually clear as possible. In particular, we shall see that large entropies cannot be localized in state space; large entropies can only be supported on an exponentially large number of states. We are for the time being interested in single-channel Shannon entropy in the information theoretic sense, not entropy in a stochastic field theory or quantum field theory defined over some configuration space, on the grounds that this simple problem is a necessary precursor to understanding infinite entropy in a field theoretic context. (paper)
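
    A textbook instance of the phenomenon (an illustration, not necessarily an example used in the paper): the distribution

      p_n = \frac{C}{n \ln^2 n}, \qquad n \ge 2, \qquad C^{-1} = \sum_{n=2}^{\infty} \frac{1}{n \ln^2 n} < \infty ,

    is properly normalizable by the integral test, yet its Shannon entropy diverges, because -p_n \ln p_n behaves like C/(n \ln n) for large n and \sum_n 1/(n \ln n) is a divergent sum.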

  19. Networked control of discrete-time linear systems over lossy digital communication channels

    Science.gov (United States)

    Jin, Fang; Zhao, Guang-Rong; Liu, Qing-Quan

    2013-12-01

    This article addresses networked control problems for linear time-invariant systems. The insertion of the digital communication network inevitably leads to packet dropout, time delay and quantisation error. Due to data rate limitations, quantisation error is not neglected. In particular, the case where the sensors and controllers are geographically separated and connected via noisy, bandwidth-limited digital communication channels is considered. A fundamental limitation on the data rate of the channel for mean-square stabilisation of the closed-loop system is established. Sufficient conditions for mean-square stabilisation are derived. It is shown that there exists a quantisation, coding and control scheme to stabilise the unstable system over packet dropout communication channels if the data rate is larger than the lower bound proposed in our result. An illustrative example is given to demonstrate the effectiveness of the proposed conditions.

  20. Comparison between two methodologies for uniformity correction of extensive reference sources

    International Nuclear Information System (INIS)

    Junior, Iremar Alves S.; Siqueira, Paulo de T.D.; Vivolo, Vitor; Potiens, Maria da Penha A.; Nascimento, Eduardo

    2016-01-01

    This article presents the procedures to obtain the uniformity correction factors for extensive reference sources proposed by two different methodologies. The first methodology is presented in Good Practice Guide No. 14 of the NPL, which provides a numerical correction. The second one uses the radiation transport code MCNP5 to obtain the correction factor. Both methods retrieve very similar correction factor values, with a maximum deviation of 0.24%. (author)

  1. Regaining Weaver and Shannon

    Directory of Open Access Journals (Sweden)

    Gary Genosko

    2008-01-01

    Full Text Available My claim is that communication considered from the standpoint of how it is modeled must not only reckon with Claude E. Shannon and Warren Weaver but regain their pioneering efforts in new ways. I want to regain two neglected features. I signal these ends by simply reversing the order in which their names commonly appear. First, the recontextualization of Shannon and Weaver requires an investigation of the technocultural scene of information ‘handling’ embedded in their groundbreaking postwar labours; not incidentally, it was Harold D. Lasswell, whose work in the 1940s is often linked with Shannon and Weaver’s, who made a point of distinguishing between those who affect the content of messages (controllers) as opposed to those who handle without modifying (other than accidentally) such messages. Although it will not be possible to maintain such a hard and fast distinction that ignores scenes of encoding and decoding, Lasswell’s (1964: 42-3) examples of handlers include key figures such as ‘dispatchers, linemen, and messengers connected with telegraphic communication’ whose activities will prove to be important for my reading of the Shannon and Weaver essays. Telegraphy and its occupational cultures are the technosocial scenes informing the Shannon and Weaver model. Second, I will pay special attention to Weaver’s contribution, despite a tendency to erase him altogether by means of a general scientific habit of listing the main author first and then attributing authorship only to the first name on the list (although this differs within scientific disciplines, particularly in the health field where the name of the last author is in the lead, so to speak). I begin with a displacement of hierarchy and authority. I am inclined to simply state, for those who, in the manner of Sherlock Holmes, ‘know my method’, that I focus my attention on the less well-known half of thinking pairs – on Roger Caillois instead of Georges Bataille, on F…

  2. Transmission imaging with a coded source

    International Nuclear Information System (INIS)

    Stoner, W.W.; Sage, J.P.; Braun, M.; Wilson, D.T.; Barrett, H.H.

    1976-01-01

    The conventional approach to transmission imaging is to use a rotating anode x-ray tube, which provides the small, brilliant x-ray source needed to cast sharp images of acceptable intensity. Stationary anode sources, although inherently less brilliant, are more compatible with the use of large area anodes, and so they can be made more powerful than rotating anode sources. Spatial modulation of the source distribution provides a way to introduce detailed structure in the transmission images cast by large area sources, and this permits the recovery of high resolution images, in spite of the source diameter. The spatial modulation is deliberately chosen to optimize recovery of image structure; the modulation pattern is therefore called a ''code.'' A variety of codes may be used; the essential mathematical property is that the code possess a sharply peaked autocorrelation function, because this property permits the decoding of the raw image cast by the coded source. Random point arrays, non-redundant point arrays, and the Fresnel zone pattern are examples of suitable codes. This paper is restricted to the case of the Fresnel zone pattern code, which has the unique additional property of generating raw images analogous to Fresnel holograms. Because the spatial frequencies of these raw images are extremely coarse compared with actual holograms, a photoreduction step onto a holographic plate is necessary before the decoded image may be displayed with the aid of coherent illumination.

  3. Breakdown of the dissipationless quantum Hall state: Quantised steps and analogies with classical and quantum fluid dynamics

    International Nuclear Information System (INIS)

    Eaves, L.

    2001-01-01

    The breakdown of the integer quantum Hall effect at high currents sometimes occurs as a series of regular steps in the dissipative voltage drop. The steps were first observed in the Hall bars used to maintain the US Resistance Standard, but have also been reported in other devices. It is proposed that the origin of the steps can be understood in terms of an instability in the dissipationless flow at high electron drift velocities. The instability is induced by impurity- or defect-related inter-Landau-level scattering processes in local macroscopic regions of the Hall bar. Electron-hole pairs (magneto-excitons) are generated in the quantum Hall fluid in these regions, and the electronic motion can be envisaged as a quantum analogue of the Kármán vortex street which forms when a classical fluid flows past an obstacle. (author)

  4. Coded aperture imaging: the modulation transfer function for uniformly redundant arrays

    International Nuclear Information System (INIS)

    Fenimore, E.E.

    1980-01-01

    Coded aperture imaging uses many pinholes to increase the SNR for intrinsically weak sources when the radiation can be neither reflected nor refracted. Effectively, the signal is multiplexed onto an image and then decoded, often by a computer, to form a reconstructed image. We derive the modulation transfer function (MTF) of such a system employing uniformly redundant arrays (URA). We show that the MTF of a URA system is virtually the same as the MTF of an individual pinhole regardless of the shape or size of the pinhole. Thus, only the location of the pinholes is important for optimum multiplexing and decoding. The shape and size of the pinholes can then be selected based on other criteria. For example, one can generate self-supporting patterns, useful for energies typically encountered in the imaging of laser-driven compressions or in soft x-ray astronomy. Such patterns contain holes that are all the same size, easing the etching or plating fabrication efforts for the apertures. A new reconstruction method is introduced called delta decoding. It improves the resolution capabilities of a coded aperture system by mitigating a blur often introduced during the reconstruction step

  5. Present state of the SOURCES computer code

    International Nuclear Information System (INIS)

    Shores, Erik F.

    2002-01-01

    In various stages of development for over two decades, the SOURCES computer code continues to calculate neutron production rates and spectra from four types of problems: homogeneous media, two-region interfaces, three-region interfaces and that of a monoenergetic alpha particle beam incident on a slab of target material. Graduate work at the University of Missouri - Rolla, in addition to user feedback from a tutorial course, provided the impetus for a variety of code improvements. Recently upgraded to version 4B, initial modifications to SOURCES focused on updates to the 'tape5' decay data library. Shortly thereafter, efforts focused on development of a graphical user interface for the code. This paper documents the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) and describes additional library modifications in more detail. Minor improvements and planned enhancements are discussed.

  6. Image authentication using distributed source coding.

    Science.gov (United States)

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.

  7. Current limitation and formation of plasma double layers in a non-uniform magnetic field

    International Nuclear Information System (INIS)

    Plamondon, R.; Teichmann, J.; Torven, S.

    1986-07-01

    Formation of strong double layers has been observed experimentally in a magnetised plasma column maintained by a plasma source. The magnetic field is approximately axially homogeneous except in a region at the anode where the electric current flows into a magnetic mirror. The double layer has a stationary position only in the region of non-uniform magnetic field or at the aperture separating the source and the plasma column. It is characterized by a negative differential resistance in the current-voltage characteristic of the device. The parameter space where the double layer exists has been studied, as well as the corresponding potential profiles and fluctuation spectra. The electric current and the axial electric field are oppositely directed between the plasma source and a potential minimum which is formed in the region of inhomogeneous magnetic field. Electron reflection by the resulting potential barrier is found to be an important current limitation mechanism. (authors)

  8. Advancing Shannon Entropy for Measuring Diversity in Systems

    Directory of Open Access Journals (Sweden)

    R. Rajaram

    2017-01-01

    Full Text Available From economic inequality and species diversity to power laws and the analysis of multiple trends and trajectories, diversity within systems is a major issue for science. Part of the challenge is measuring it. Shannon entropy H has been used to rethink diversity within probability distributions, based on the notion of information. However, there are two major limitations to Shannon’s approach. First, it cannot be used to compare diversity distributions that have different levels of scale. Second, it cannot be used to compare parts of diversity distributions to the whole. To address these limitations, we introduce a renormalization of probability distributions based on the notion of case-based entropy C_c as a function of the cumulative probability c. Given a probability density p(x), C_c measures the diversity of the distribution up to a cumulative probability of c, by computing the length or support of an equivalent uniform distribution that has the same Shannon information as the conditional distribution of p^c(x) up to cumulative probability c. We illustrate the utility of our approach by renormalizing and comparing three well-known energy distributions in physics, namely, the Maxwell-Boltzmann, Bose-Einstein, and Fermi-Dirac distributions for the energy of subatomic particles. The comparison shows that C_c is a vast improvement over H, as it provides a scale-free comparison of these diversity distributions and also allows for a comparison between parts of these diversity distributions.
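
    One concrete reading of the definition, as a hedged Python sketch (the rank ordering, the choice of natural logarithms, and the toy distribution are all assumptions of the sketch, not of the paper): condition the distribution on its highest-probability cases up to cumulative probability c, compute the Shannon entropy H_c of that conditional distribution, and report exp(H_c), the support length of the uniform distribution carrying the same information.

      import numpy as np

      def case_based_entropy(p, c):
          p = np.sort(np.asarray(p, dtype=float))[::-1]   # rank-order the cases
          k = np.searchsorted(np.cumsum(p), c) + 1        # cases covering prob. c
          cond = p[:k] / p[:k].sum()                      # conditional distribution
          h = -np.sum(cond * np.log(cond))                # Shannon entropy (nats)
          return np.exp(h)                                # equivalent uniform support

      # With c = 1 this reduces to the usual "effective number of cases".
      print(case_based_entropy([0.5, 0.25, 0.125, 0.125], c=1.0))   # ~3.364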

  9. Uniform sources of ionizing radiation of extended area from radiotoned photographic film

    International Nuclear Information System (INIS)

    Thackray, M.

    1978-01-01

    The technique of toning photographic films, that have been uniformly exposed and developed, with radionuclides to provide uniform sources of ionizing radiation of extended area and their uses in radiography are discussed. The suitability of various radionuclides for uniform-plane sources is considered. (U.K.)

  10. Measuring Modularity in Open Source Code Bases

    Directory of Open Access Journals (Sweden)

    Roberto Milev

    2009-03-01

    Full Text Available Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.

  11. Einstein, Podolsky, Rosen, and Shannon

    OpenAIRE

    Peres, Asher

    2003-01-01

    The EPR paradox (1935) is reexamined in the light of Shannon's information theory (1948). The EPR argument did not take into account that the observers' information was localized, like any other physical object.

  12. Fractional Calculus and Shannon Wavelet

    Directory of Open Access Journals (Sweden)

    Carlo Cattani

    2012-01-01

    Full Text Available An explicit analytical formula for the fractional derivative of any order of the Shannon wavelet is given as a wavelet series based on connection coefficients, so that for any L_2(ℝ) function, reconstructed by Shannon wavelets, we can easily define its fractional derivative. The approximation error is explicitly computed, and the wavelet series is compared with the Grünwald fractional derivative by focusing on the many advantages of the wavelet method, in terms of rate of convergence.
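
    For reference, the real Shannon scaling function and wavelet usually meant in this context, with sinc(t) = sin(πt)/(πt) (a standard form; the paper's normalisation and shifts may differ):

      \varphi(t) = \mathrm{sinc}(t), \qquad \psi(t) = 2\,\mathrm{sinc}(2t) - \mathrm{sinc}(t) .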

  13. Uniform physical theory of diffraction equivalent edge currents for implementation in general computer codes

    DEFF Research Database (Denmark)

    Johansen, Peter Meincke

    1996-01-01

    New uniform closed-form expressions for physical theory of diffraction equivalent edge currents are derived for truncated incremental wedge strips. In contrast to previously reported expressions, the new expressions are well-behaved for all directions of incidence and observation and take a finite...... value for zero strip length. Consequently, the new equivalent edge currents are, to the knowledge of the author, the first that are well-suited for implementation in general computer codes...

  14. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    Science.gov (United States)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
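
    The architecture described can be sketched compactly. The following is a minimal, untrained forward pass (layer widths, activations and the random initialisation are illustrative assumptions): an input layer, three hidden layers whose narrow middle layer carries the candidate natural coordinates, and an output trained to replicate the input by minimising mean squared error.

      import numpy as np

      rng = np.random.default_rng(0)

      dims = [10, 8, 3, 8, 10]           # bottleneck width 3 = assumed manifold dim
      weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(dims[1:], dims[:-1])]

      def forward(x):
          h = x
          for i, w in enumerate(weights):
              h = w @ h
              if i < len(weights) - 1:
                  h = np.tanh(h)         # hidden layers nonlinear, output linear
          return h

      x = rng.normal(size=10)
      print(np.mean((forward(x) - x) ** 2))   # replication error to be minimised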

  15. Code Forking, Governance, and Sustainability in Open Source Software

    OpenAIRE

    Juho Lindman; Linus Nyman

    2013-01-01

    The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility…

  16. Content Progressive Coding of Limited Bits/pixel Images

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Forchhammer, Søren

    1999-01-01

    A new lossless context based method for content progressive coding of limited bits/pixel images is proposed. Progressive coding is achieved by separating the image into content layers. Digital maps are compressed up to 3 times better than GIF.

  17. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays. This allows for flexibility of placement and high spatial diversity. However, it comes with the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before the transmission in order to reduce the power consumption and/or transmission bandwidth by reduction in the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging in comparison with regular audio coding. The signals which are acquired by the microphones…

  18. Shannon's information is not entropy

    International Nuclear Information System (INIS)

    Schiffer, M.

    1990-01-01

    In this letter we clear up the long-standing misidentification of Shannon's Information with Entropy. We show that Information, in contrast to Entropy, is not invariant under unitary transformations and that these quantities are only equivalent for representations consisting of Hamiltonian eigenstates. We illustrate this fact through a toy system consisting of a harmonic oscillator in a coherent state. It is further proved that the representations which maximize the information are those which are energy-eigenstates. This fact sets the entropy as an upper bound for Shannon's Information. (author)

  19. Analysis of Age and Gender Structures for ICD-10 Diagnoses in Outpatient Treatment Using Shannon's Entropy.

    Science.gov (United States)

    Schuster, Fabian; Ostermann, Thomas; Emcke, Timo; Schuster, Reinhard

    2017-01-01

    Diagnostic diversity has been in the focus of several studies of health services research. As the fraction of people with statutory health insurance changes with age and gender, it is assumed that diagnostic diversity may be influenced by these parameters. We analyze fractions of patients in Schleswig-Holstein with respect to the chapters of the ICD-10 code in outpatient treatment for quarter 2/2016 with respect to age and gender/sex of the patient. In a first approach, we analyzed which diagnosis chapters are most relevant depending on age and gender. To detect diagnostic diversity, we finally applied Shannon's entropy measure. Due to multimorbidity, we used different standardizations. The Shannon entropy strongly increases for women after the age of 15, reaching a limit level at the age of 50 years. Between 15 and 70 years we get higher values for women, after 75 years for men. This article describes a straightforward, pragmatic approach to diagnostic diversity using Shannon's entropy. From a methodological point of view, the use of Shannon's entropy as a measure of diversity should gain more attraction among researchers of health services research.

  20. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
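
    The core mechanism fits in a few lines. A toy sketch (the tiny parity-check matrix H and the brute-force decoder are illustrative; a practical scheme uses a long, strong code): the source block is treated as an error pattern, its syndrome is the compressed data, and the decompressor returns the minimum-weight pattern consistent with that syndrome, which is correct with high probability for sufficiently skewed sources.

      import itertools
      import numpy as np

      H = np.array([[1, 0, 1, 1, 0],        # toy parity-check matrix
                    [0, 1, 1, 0, 1]])

      def compress(x):
          return H.dot(x) % 2               # 5 source bits -> 2 syndrome bits

      def decompress(s):
          # Minimum-weight source pattern with the given syndrome (brute force).
          for w in range(H.shape[1] + 1):
              for idx in itertools.combinations(range(H.shape[1]), w):
                  e = np.zeros(H.shape[1], dtype=int)
                  e[list(idx)] = 1
                  if np.array_equal(H.dot(e) % 2, s):
                      return e

      x = np.array([0, 0, 1, 0, 0])         # sparse (low-entropy) source block
      print(compress(x), decompress(compress(x)))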

  1. Coding of Information in Limit Cycle Oscillators

    Science.gov (United States)

    Schleimer, Jan-Hendrik; Stemmler, Martin

    2009-12-01

    Starting from a general description of noisy limit cycle oscillators, we derive from the Fokker-Planck equations the linear response of the instantaneous oscillator frequency to a time-varying external force. We consider the time series of zero crossings of the oscillator’s phase and compute the mutual information between it and the driving force. A direct link is established between the phase response curve summarizing the oscillator dynamics and the ability of a limit cycle oscillator, such as a heart cell or neuron, to encode information in the timing of peaks in the oscillation.

  2. Evaluation of the uniformity of wide circular reference source and application of correction factors

    International Nuclear Information System (INIS)

    Silva Junior, I.A.; Xavier, M.; Siqueira, P.T.D.; Sordi, G.A.A.; Potiens, M.P.A.

    2017-01-01

    In this work the uniformity of wide circular reference sources is evaluated. This kind of reference source is still widely used in Brazil. In previous works, wide rectangular reference sources were analyzed and the importance of applying correction factors in calibration procedures of radiation monitors was shown. Now a transposition of the formerly used methods is performed, evaluating the uniformities of circular reference sources and calculating the associated correction factors. (author)

  3. On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk

    2013-01-01

    … quality is developed. A linear coding structure designed to gracefully encapsulate layered source coding provides both low complexity of the utilised linear coding while enabling robust erasure correction in the form of fountain coding capabilities. The proposed linear coding structure advocates efficient…

  4. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    Science.gov (United States)

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.
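
    The Huffman-code mechanism behind the non-uniform signaling can be illustrated directly: parsing an i.i.d. uniform bit stream with a 9-word prefix code emits symbol i with probability 2^-l_i, where l_i is its codeword length. The lengths below are an illustrative Kraft-tight choice, not the paper's optimised design; note the entropy lands exactly at 3 bits per symbol, matching the spectral efficiency of conventional 8-QAM.

      import math

      lengths = [2, 3, 3, 3, 3, 4, 4, 4, 4]          # 9 prefix codeword lengths
      probs = [2.0 ** -l for l in lengths]           # induced symbol probabilities
      assert abs(sum(probs) - 1.0) < 1e-12           # Kraft equality: valid parsing
      entropy = -sum(p * math.log2(p) for p in probs)
      print(probs, entropy)                          # entropy = 3.0 bit/symbol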

  5. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly; Pettersson, Gustav M.; Kostina, Victoria; Hassibi, Babak

    2017-01-01

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.
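
    The flavour of such an analog 1:2 mapping can be sketched with an Archimedean double spiral: a source sample rides along a spiral whose radius grows with its magnitude (one arm per sign), and the decoder projects the noisy received pair back onto the curve. The parametrisation and parameter values below are illustrative assumptions, not the mapping tuned in the paper.

      import numpy as np

      def spiral_encode(s, delta=0.5):
          # One channel pair per source sample; delta sets the winding density.
          theta = abs(s)
          arm = 1.0 if s >= 0 else -1.0              # sign selects the spiral arm
          return delta * theta * np.array([np.cos(theta), arm * np.sin(theta)])

      def spiral_decode(y, delta=0.5):
          # Minimum-distance projection back onto the curve (ML under AWGN).
          grid = np.linspace(-20.0, 20.0, 20001)
          pts = np.stack([spiral_encode(s, delta) for s in grid])
          return grid[np.argmin(np.sum((pts - y) ** 2, axis=1))]

      y = spiral_encode(1.234) + np.random.default_rng(2).normal(0.0, 0.05, 2)
      print(spiral_decode(y))                        # close to 1.234 for small noise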

  6. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly

    2017-01-05

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.

  7. Research on Primary Shielding Calculation Source Generation Codes

    Science.gov (United States)

    Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun

    2017-09-01

    Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDF) for the source particle sample code of the J Monte Carlo Transport (JMCT) code, and a source particle sample code is developed to sample source particle directions, types, coordinates, energy and weights from the CDFs. A source generation code is developed to transform three-dimensional (3D) power distributions in xyz geometry to source distributions in r-θ-z geometry for the J Discrete Ordinate Transport (JSNT) code. Validation on the PSC models of the Qinshan No. 1 nuclear power plant (NPP) and the CAP1400 and CAP1700 reactors is performed. Numerical results show that the theoretical model and the codes are both correct.
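
    The sampling step described above is plain inverse-transform sampling from a tabulated CDF. A minimal sketch with a hypothetical four-cell power map (cell list and values are illustrative only):

      import numpy as np

      rng = np.random.default_rng(1)

      power = np.array([0.8, 1.2, 1.0, 0.6])     # per-cell power (toy values)
      cdf = np.cumsum(power) / power.sum()       # cumulative distribution function

      def sample_cells(n):
          # A uniform variate picks the first cell whose CDF value exceeds it.
          return np.searchsorted(cdf, rng.random(n))

      counts = np.bincount(sample_cells(100_000), minlength=len(power))
      print(counts / 100_000)                    # approaches the normalised power map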

  8. Sharp lower bounds on the extractable randomness from non-uniform sources

    NARCIS (Netherlands)

    Skoric, B.; Obi, C.; Verbitskiy, E.A.; Schoenmakers, B.

    2011-01-01

    Extraction of uniform randomness from (noisy) non-uniform sources is an important primitive in many security applications, e.g. (pseudo-)random number generators, privacy-preserving biometrics, and key storage based on Physical Unclonable Functions. Generic extraction methods exist, using universal…

  9. Verification test calculations for the Source Term Code Package

    International Nuclear Information System (INIS)

    Denning, R.S.; Wooton, R.O.; Alexander, C.A.; Curtis, L.A.; Cybulskis, P.; Gieseke, J.A.; Jordan, H.; Lee, K.W.; Nicolosi, S.L.

    1986-07-01

    The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure, in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled. Hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it doesn't satisfy basic conservation laws, rather than in showing the analysis accurately represents reality. Hand calculations are an important element of verification but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs

  10. The Visual Code Navigator : An Interactive Toolset for Source Code Investigation

    NARCIS (Netherlands)

    Lommerse, Gerard; Nossin, Freek; Voinea, Lucian; Telea, Alexandru

    2005-01-01

    We present the Visual Code Navigator, a set of three interrelated visual tools that we developed for exploring large source code software projects from three different perspectives, or views: The syntactic view shows the syntactic constructs in the source code. The symbol view shows the objects a…

  11. The local limit of the uniform spanning tree on dense graphs

    Czech Academy of Sciences Publication Activity Database

    Hladký, Jan; Nachmias, A.; Tran, Tuan

    First Online: 10 January (2018) ISSN 0022-4715 R&D Projects: GA ČR GJ16-07822Y Keywords : uniform spanning tree * graph limits * Benjamini-Schramm convergence * graphon * branching process Subject RIV: BA - General Mathematics Impact factor: 1.349, year: 2016

  12. Source Code Stylometry Improvements in Python

    Science.gov (United States)

    2017-12-14

    Just as a person can be identified via their handwriting or an author identified by their style of prose, programmers can be identified by their code. Provided a labelled training set of code samples (example in Fig. 1), the techniques used in stylometry can identify the author of a piece of code or even…

  13. Uniform lateral etching of tungsten in deep trenches utilizing reaction-limited NF3 plasma process

    Science.gov (United States)

    Kofuji, Naoyuki; Mori, Masahito; Nishida, Toshiaki

    2017-06-01

    The reaction-limited etching of tungsten (W) with NF3 plasma was performed in an attempt to achieve the uniform lateral etching of W in a deep trench, a capability required by manufacturing processes for three-dimensional NAND flash memory. Reaction-limited etching was found to be possible at high pressures without ion irradiation. An almost constant etching rate that showed no dependence on NF3 pressure was obtained. The effect of varying the wafer temperature was also examined. A higher wafer temperature reduced the threshold pressure for reaction-limited etching and also increased the etching rate in the reaction-limited region. Therefore, the control of the wafer temperature is crucial to controlling the etching amount by this method. We found that the uniform lateral etching of W was possible even in a deep trench where the F radical concentration was low.

  14. Shannon entropy and particle decays

    Science.gov (United States)

    Carrasco Millán, Pedro; García-Ferrero, M. Ángeles; Llanes-Estrada, Felipe J.; Porras Riojano, Ana; Sánchez García, Esteban M.

    2018-05-01

    We deploy Shannon's information entropy to the distribution of branching fractions in a particle decay. This serves to quantify how important a given new reported decay channel is, from the point of view of the information that it adds to the already known ones. Because the entropy is additive, one can subdivide the set of channels and discuss, for example, how much information the discovery of a new decay branching would add; or subdivide the decay distribution down to the level of individual quantum states (which can be quickly counted by the phase space). We illustrate the concept with some examples of experimentally known particle decay distributions.
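
    The quantity in play takes one line to compute. A sketch with hypothetical branching fractions (the numbers are not from any real particle):

      import math

      branching = [0.65, 0.25, 0.08, 0.02]       # hypothetical decay channels
      H = -sum(b * math.log(b) for b in branching)
      print(round(H, 3))                         # Shannon entropy of the decay (nats)

    Resolving a previously unseen channel changes the set of fractions, and the resulting increase in H quantifies how much information the new measurement adds.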

  15. Universality and Shannon entropy of codon usage

    CERN Document Server

    Frappat, L; Sciarrino, A; Sorba, Paul

    2003-01-01

    The distribution functions of the codon usage probabilities, computed over all the available GenBank data, for 40 eukaryotic biological species and 5 chloroplasts, do not follow a Zipf law, but are best fitted by the sum of a constant, an exponential and a linear function in the rank of usage. For mitochondriae the analysis is not conclusive. A quantum-mechanics-inspired model is proposed to describe the observed behaviour. These functions are characterized by parameters that strongly depend on the total GC content of the coding regions of biological species. It is predicted that the codon usage is the same in all exonic genes with the same GC content. The Shannon entropy for codons, also strongly depending on the exonic GC content, is computed.

  16. Bit rates in audio source coding

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.

    1992-01-01

    The goal is to introduce and solve the audio coding optimization problem. Psychoacoustic results such as masking and excitation pattern models are combined with results from rate distortion theory to formulate the audio coding optimization problem. The solution of the audio optimization problem is a

  17. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information.
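    The binning idea behind this scheme can be illustrated with the smallest binary BCH code, the (7,4) Hamming code. The sketch below is not the paper's rate-adaptive construction; it is a fixed-rate toy showing how a decoder recovers a source block X from its syndrome plus correlated side information Y:

```python
import numpy as np
from itertools import product

# Parity-check matrix of the (7,4) Hamming code, a binary BCH code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(x):
    return H @ x % 2

def decode(s, y):
    """Pick the word with syndrome s that is closest to the side info y."""
    best = None
    for cand in product([0, 1], repeat=7):
        cand = np.array(cand)
        if np.array_equal(syndrome(cand), s):
            if best is None or np.sum(cand != y) < np.sum(best != y):
                best = cand
    return best

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 7)       # source block X
y = x.copy(); y[2] ^= 1         # side info Y: X with one bit flipped
print(np.array_equal(decode(syndrome(x), y), x))   # True: X recovered
```

    Here the encoder sends 3 syndrome bits instead of 7 source bits; rate adaptation would amount to choosing how many syndrome bits to send based on the feedback.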

  18. Effluent release limits, sources and control

    International Nuclear Information System (INIS)

    Swindell, G.E.

    1977-01-01

    Objectives of radiation protection in relation to releases. Environmental transfer models for radionuclides. Relationship between releases, environmental levels and doses to persons. Establishment of release limits: limits based on the critical population group concept (critical pathway analysis and identification of the critical group); limits based on optimization of radiation protection (individual dose limits, collective doses and dose commitments): (1) differential cost-benefit analysis; (2) authorized and operational limits taking account of future exposures. Monitoring of releases to the environment: objectives of effluent monitoring; typical sources and composition of effluents; design and operation of monitoring programmes; recording and reporting of monitoring results; complementary environmental monitoring. (orig.) [de]

  19. Data processing with microcode designed with source coding

    Science.gov (United States)

    McCoy, James A; Morrison, Steven E

    2013-05-07

    Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.

  20. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  1. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

    International Nuclear Information System (INIS)

    Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.; Xu Jin; Connors, Alanna; Freeman, Peter E.; Zezas, Andreas

    2010-01-01

    A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits.
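    For a concrete, deliberately idealised reading of this recipe, consider a Poisson counting experiment with known mean background. The sketch below is an illustration rather than the paper's implementation; the parameter values and the simple linear scan are invented. It fixes a detection threshold from the Type I error alpha and then scans for the source intensity whose detection probability reaches 1 - beta:

```python
from scipy.stats import poisson

def detection_threshold(bkg, alpha=0.05):
    """Smallest count n* such that P(N >= n* | background) <= alpha."""
    n = 0
    while poisson.sf(n - 1, bkg) > alpha:   # sf(n-1) = P(N >= n)
        n += 1
    return n

def upper_limit(bkg, alpha=0.05, beta=0.5, step=0.01):
    """Minimum source intensity s detected with probability >= 1 - beta."""
    n_star = detection_threshold(bkg, alpha)
    s = 0.0
    while poisson.sf(n_star - 1, bkg + s) < 1 - beta:
        s += step
    return s

print(upper_limit(bkg=3.0))   # upper limit for a mean background of 3 counts
```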

  2. Intracavitary dosimetry of a high-activity remote loading device with oscillating source

    International Nuclear Information System (INIS)

    Arcovito, G.; Piermattei, A.; D'Abramo, G.; Bassi, F.A.

    1984-01-01

    Dosimetric experiments have been carried out in water around a Fletcher applicator loaded by a Buchler system containing two 148 GBq (4 Ci) 137 Cs sources and one 740 GBq (20 Ci) 192 Ir source. The mechanical system which controls the movement of the 192 Ir source and the resulting motion of the source are described. The dose distribution around the sources was measured photographically and by a PTW Normal 0.22 cm³ ionisation chamber. The absolute dose rate was measured along the lateral axes of the sources. The measurements of exposure in water near the sources were corrected for the effect due to the finite volume of the chamber. The "quantisation method" described by Cassell (1983) was utilised to calculate the variation of the dose rate along the lateral axes of the sources. The dose distribution around both 192 Ir and 137 Cs sources was found to be spherical for angles greater than 40° from the longitudinal axes of the sources. A simple algorithm fitting the data for the moving 192 Ir source is proposed. A program written in FORTRAN IV and run on a Univac 1100/80 computer has been used to plot dose distributions on anatomical data obtained from CT images. (author)

  3. Rate-distortion analysis of dead-zone plus uniform threshold scalar quantization and its application--part II: two-pass VBR coding for H.264/AVC.

    Science.gov (United States)

    Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming

    2013-01-01

    In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of the dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method is of high stability, low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB of JM 16.0's method, with an average rate control error of 1.95% and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
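    To make the quantizer family concrete, here is a generic dead-zone plus uniform threshold scalar quantizer with a midpoint-style reconstruction. The dead-zone width and reconstruction offset below are illustrative choices, not the exact H.264/AVC parameters:

```python
import numpy as np

def quantize(x, step, dz=1.0):
    """Dead zone of half-width dz*step around zero, then uniform cells."""
    mag = np.abs(x)
    idx = np.where(mag < dz * step, 0.0,
                   np.floor((mag - dz * step) / step) + 1)
    return np.sign(x) * idx

def reconstruct(idx, step, dz=1.0, offset=0.5):
    """Nearly uniform reconstruction at a fixed offset inside each cell."""
    mag = np.where(idx == 0, 0.0,
                   dz * step + (np.abs(idx) - 1 + offset) * step)
    return np.sign(idx) * mag

x = np.random.default_rng(0).laplace(size=8)   # heavy-tailed test source
q = quantize(x, step=0.5)
print(np.round(x, 3))
print(reconstruct(q, step=0.5))
```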

  4. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

    Full Text Available Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable length codes for joint source-channel decoding. In this paper, we improve the performance of residual redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLCs is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.

  5. The Astrophysics Source Code Library by the numbers

    Science.gov (United States)

    Allen, Alice; Teuben, Peter; Berriman, G. Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Ryan, PW; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Wallin, John; Warmels, Rein

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) was founded in 1999 by Robert Nemiroff and John Wallin. ASCL editors seek both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and add entries for the found codes to the library. Software authors can submit their codes to the ASCL as well. This ensures a comprehensive listing covering a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL is indexed by both NASA’s Astrophysics Data System (ADS) and Web of Science, making software used in research more discoverable. This presentation covers the growth in the ASCL’s number of entries, the number of citations to its entries, and in which journals those citations appear. It also discusses what changes have been made to the ASCL recently, and what its plans are for the future.

  6. Uniform Circular Antenna Array Applications in Coded DS-CDMA Mobile Communication Systems

    National Research Council Canada - National Science Library

    Seow, Tian

    2003-01-01

    ...) has greatly increased. This thesis examines the use of an equally spaced circular adaptive antenna array at the mobile station for a typical coded direct sequence code division multiple access (DS-CDMA...

  7. Code Forking, Governance, and Sustainability in Open Source Software

    Directory of Open Access Journals (Sweden)

    Juho Lindman

    2013-01-01

    Full Text Available The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is, to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem. On the software level, the right to fork makes planned obsolescence, versioning, vendor lock-in, end-of-support issues, and similar initiatives all but impossible to implement. On the community level, forking impacts both sustainability and governance through the power it grants the community to safeguard against unfavourable actions by corporations or project leaders. On the business-ecosystem level, forking can serve as a catalyst for innovation while simultaneously promoting better quality software through natural selection. Thus, forking helps keep open source initiatives relevant and presents opportunities for the development and commercialization of current and abandoned programs.

  8. New reversing freeform lens design method for LED uniform illumination with extended source and near field

    Science.gov (United States)

    Zhao, Zhili; Zhang, Honghai; Zheng, Huai; Liu, Sheng

    2018-03-01

    In light-emitting diode (LED) array illumination (e.g. LED backlighting), achieving high uniformity under the harsh conditions of a large distance-height ratio (DHR), an extended source and the near field is a key and challenging issue. In this study, we present a new reversing freeform lens design algorithm based on the illuminance distribution function (IDF) instead of the traditional light intensity distribution, which allows uniform LED illumination under the above-mentioned harsh conditions. The IDF of the freeform lens can be obtained by the proposed mathematical method, considering the effects of large DHR, an extended source and a near-field target at the same time. To prove the claims, a slim direct-lit LED backlight with DHR equal to 4 is designed. In comparison with traditional lenses, the illuminance uniformity of the LED backlight with the new lens increases significantly from 0.45 to 0.84, and the CV(RMSE) decreases dramatically from 0.24 to 0.03 under the harsh conditions. Meanwhile, the luminance uniformity of the LED backlight with the new lens is as high as 0.92 under the conditions of an extended source and near field. This new method provides a practical and effective way to solve the problem of large DHR, extended source and near field for LED array illumination.

  9. A scanning point source for quality control of FOV uniformity in GC-PET imaging

    International Nuclear Information System (INIS)

    Bergmann, H.; Minear, G.; Dobrozemsky, G.; Nowotny, R.; Koenig, B.

    2002-01-01

    Aim: PET imaging with coincidence cameras (GC-PET) requires additional quality control procedures to check the function of coincidence circuitry and detector zoning. In particular, the uniformity response over the field of view needs special attention since it is known that coincidence counting mode may suffer from non-uniformity effects not present in single photon mode. Materials and methods: An inexpensive linear scanner with a stepper motor and a digital interface to a PC with software allowing versatile scanning modes was developed. The scanner is used with a source holder containing a Sodium-22 point source. While moving the source along the axis of rotation of the GC-PET system, a tomographic acquisition takes place. The scan covers the full axial field of view of the 2-D or 3-D scatter frame. Depending on the acquisition software, point source scanning takes place continuously while only one projection is acquired or is done in step-and-shoot mode with the number of positions equal to the number of gantry steps. Special software was developed to analyse the resulting list mode acquisition files and to produce an image of the recorded coincidence events of each head. Results: Uniformity images of coincidence events were obtained after further correction for systematic sensitivity variations caused by acquisition geometry. The resulting images are analysed visually and by calculating NEMA uniformity indices as for a planar flood field. The method has been applied successfully to two different brands of GC-PET capable gamma cameras. Conclusion: Uniformity of GC-PET can be tested quickly and accurately with a routine QC procedure, using a Sodium-22 scanning point source and an inexpensive mechanical scanning device. The method can be used for both 2-D and 3-D acquisition modes and fills an important gap in the quality control system for GC-PET

  10. Introduction to coding and information theory

    CERN Document Server

    Roman, Steven

    1997-01-01

    This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.

  11. Monte Carlo simulation of scatter in non-uniform symmetrical attenuating media for point and distributed sources

    International Nuclear Information System (INIS)

    Henry, L.J.; Rosenthal, M.S.

    1992-01-01

    We report results of scatter simulations for both point and distributed sources of 99m Tc in symmetrical non-uniform attenuating media. The simulations utilized Monte Carlo techniques and were tested against experimental phantoms. Both point and ring sources were used inside a 10.5 cm radius acrylic phantom. Attenuating media consisted of combinations of water, ground beef (to simulate muscle mass), air and bone meal (to simulate bone mass). We estimated/measured energy spectra, detector efficiencies and peak height ratios for all cases. In all cases, the simulated spectra agree with the experimentally measured spectra within 2 SD. Detector efficiencies and peak height ratios also are in agreement. The Monte Carlo code is able to properly model the non-uniform attenuating media used in this project. With verification of the simulations, it is possible to perform initial evaluation studies of scatter correction algorithms by evaluating the mechanisms of action of the correction algorithm on the simulated spectra where the magnitude and sources of scatter are known. (author)

  12. Non-uniform dwell times in line source high dose rate brachytherapy: physical and radiobiological considerations

    International Nuclear Information System (INIS)

    Jones, B.; Tan, L.T.; Freestone, G.; Bleasdale, C.; Myint, S.; Littler, J.

    1994-01-01

    The ability to vary source dwell times in high dose rate (HDR) brachytherapy allows for the use of non-uniform dwell times along a line source. This may have advantages in the radical treatment of tumours depending on individual tumour geometry. This study investigates the potential improvements in local tumour control relative to adjacent normal tissue isoeffects when intratumour source dwell times are increased along the central portion of a line source (technique A) in radiotherapy schedules which include a relatively small component of HDR brachytherapy. Such a technique is predicted to increase the local control for tumours of diameters ranging between 2 cm and 4 cm by up to 11% compared with a technique in which there are uniform dwell times along the line source (technique B). There is no difference in the local control rates for the two techniques when used to treat smaller tumours. Normal tissue doses are also modified by the technique used. Technique A produces higher normal tissue doses at points perpendicular to the centre of the line source and lower doses at points nearer the ends of the line source if the prescription point is not in the central plane of the line source. Alternatively, if the dose is prescribed at a point in the central plane of the line source, the doses at all the normal tissue points are lower when technique A is used. (author)
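    The effect being exploited can be reproduced with a toy dose model. The sketch below is an illustration only: pure inverse-square point-source superposition, ignoring attenuation, scatter and source anisotropy, with invented dwell patterns. It compares a uniform dwell pattern with a centre-weighted one at two normal-tissue points:

```python
import numpy as np

def dose(point, dwell_z, dwell_t, strength=1.0):
    """Sum of t_i / r_i^2 over dwell positions on the z-axis."""
    p = np.asarray(point, dtype=float)
    r2 = (p[0]**2 + p[1]**2) + (p[2] - np.asarray(dwell_z))**2
    return strength * np.sum(np.asarray(dwell_t) / r2)

z = np.linspace(-2.0, 2.0, 9)                    # 9 dwell positions (cm)
uniform = np.ones(9)                             # technique B
central = np.where(np.abs(z) <= 1.0, 2.0, 1.0)   # technique A: boosted centre

for t in (uniform, central):
    print(dose([2.0, 0.0, 0.0], z, t),           # lateral to the centre
          dose([2.0, 0.0, 2.5], z, t))           # towards one end
```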

  13. Overcoming limits to near-field radiative heat transfer in uniform planar media through multilayer optimization.

    Science.gov (United States)

    Jin, Weiliang; Messina, Riccardo; Rodriguez, Alejandro W

    2017-06-26

    Radiative heat transfer between uniform plates is bounded by the narrow range and limited contribution of surface waves. Using a combination of analytical calculations and numerical gradient-based optimization, we show that such a limitation can be overcome in complicated multilayer geometries, allowing the scattering and coupling rates of slab resonances to be altered over a broad range of evanescent wavevectors. We conclude that while the radiative flux between two inhomogeneous slabs can only be weakly enhanced, the flux between a dipolar particle and an inhomogeneous slab, which is proportional to the local density of states, can be orders of magnitude larger, albeit at the expense of increased frequency selectivity. A brief discussion of hyperbolic metamaterials shows that they provide far less enhancement than optimized inhomogeneous slabs.

  14. The development of criteria for limiting the non-stochastic effects of non-uniform skin exposure

    International Nuclear Information System (INIS)

    Charles, M.W.; Wells, J.

    1980-01-01

    The recent recommendations of the International Commission on Radiological Protection (ICRP, 1977) have underlined the lack of knowledge relating to small area skin exposures and have highlighted the difficulties of integrating stochastic and nonstochastic effects into a unified radiation protection philosophy. A system of limitation is suggested which should be appropriate to the wide range of skin irradiation modes which are met in practice. It is proposed, for example, that for large area exposures, the probability of skin cancer induction should be considered as the limiting factor. For partial-body skin exposures, the probability of the stochastic response will be reduced and late nonstochastic effects will become limiting as the area exposed is reduced. Highly non-uniform exposures such as from small sources or radioactive particulates should be limited on the basis of early rather than late effects. A survey of epidemiological and experimental work is used to show how detailed guidance for limitation in these cases can be provided. Due to the detailed morphology of the skin, the biological response depends critically upon the depth dose. In the case of alpha and beta radiation, this should be reflected in a less restrictive limitation system, particularly for non-stochastic effects. Up-to-date and ongoing experimental studies are described which can provide guidance in this field. (author)

  15. A new mini-extrapolation chamber for beta source uniformity measurements

    International Nuclear Information System (INIS)

    Oliveira, M.L.; Caldas, L.V.E.

    2006-01-01

    According to recent international recommendations, beta particle sources should be specified in terms of absorbed dose rates to water at the reference point. However, because of the clinical use of these sources, additional information should be supplied in the calibration reports. This additional information includes the source uniformity. A new small-volume extrapolation chamber was designed and constructed at the Calibration Laboratory at Instituto de Pesquisas Energeticas e Nucleares, IPEN, Brazil, for the calibration of 90 Sr+ 90 Y ophthalmic plaques. This chamber can be used as a primary standard for the calibration of this type of source. Recent additional studies showed the feasibility of using this chamber to perform source uniformity measurements. Because of the small effective electrode area, it is possible to perform independent measurements by varying the chamber position in small steps. The aim of the present work was to study the uniformity of a 90 Sr+ 90 Y plane ophthalmic plaque utilizing the mini extrapolation chamber developed at IPEN. The uniformity measurements were performed by varying the chamber position in steps of 2 mm along the source central axes (x- and y-directions) and by varying the chamber position off-axis in 3 mm steps. The results obtained showed that this small-volume chamber can be used for this purpose with a great advantage: it is a direct method, making a prior calibration of the measuring device against a reference instrument unnecessary, and it provides real-time results, reducing the time necessary for the study and the determination of the uncertainties related to the measurements. (authors)

  16. Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...

  17. A method for the preparation of very thin and uniform α-radioactive sources

    International Nuclear Information System (INIS)

    Becerril-Vilchis, A.; Cortes, A.; Dayras, F.; Sanoit, J. de

    1996-01-01

    The method is based on the electrodeposition of α-emitters as hydroxides on stainless steel cathodes rotating at constant angular velocity. A new electrochemical cell, which has been described elsewhere, was designed. This design takes into account the hydrodynamic behaviour of the rotating disc electrode. Electrochemical and physicochemical studies allowed us to predict the best conditions for each α-emitter, in order to obtain very thin and uniform deposits with a minimal current density value. These included determining the dependence of the deposition yield and uniformity on the cathode rotation speed, solution pH, deposition current density and deposition time. Controlling the optimum values of hydrodynamic, electrochemical and physicochemical process conditions then gives reproducible deposition uniformity and yields. The thickness and uniformity of the α-sources were characterised by high resolution alpha spectroscopy with PIPS detectors. These sources are especially suitable for spectroscopic, α-particle emission probability and isotopic ratio studies. Using this method, values ≤10 keV for the energy resolution and 100 to 1 for the peak-to-valley ratio have been obtained. (orig.)

  18. Internal noise sources limiting contrast sensitivity.

    Science.gov (United States)

    Silvestre, Daphné; Arleo, Angelo; Allard, Rémy

    2018-02-07

    Contrast sensitivity varies substantially as a function of spatial frequency and luminance intensity. The variation as a function of luminance intensity is well known and characterized by three laws that can be attributed to the impact of three internal noise sources: early spontaneous neural activity limiting contrast sensitivity at low luminance intensities (i.e. early noise responsible for the linear law), probabilistic photon absorption at intermediate luminance intensities (i.e. photon noise responsible for de Vries-Rose law) and late spontaneous neural activity at high luminance intensities (i.e. late noise responsible for Weber's law). The aim of this study was to characterize how the impact of these three internal noise sources vary with spatial frequency and determine which one is limiting contrast sensitivity as a function of luminance intensity and spatial frequency. To estimate the impact of the different internal noise sources, the current study used an external noise paradigm to factorize contrast sensitivity into equivalent input noise and calculation efficiency over a wide range of luminance intensities and spatial frequencies. The impact of early and late noise was found to drop linearly with spatial frequency, whereas the impact of photon noise rose with spatial frequency due to ocular factors.
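    The three laws follow from a simple equivalent-input-noise bookkeeping, which can be sketched numerically. In the toy model below, the coefficients are arbitrary and chosen only to separate the regimes; this is not the paper's fitted model. Total noise, expressed in contrast power, is the sum of an early term falling as 1/L², a photon term falling as 1/L, and a constant late term:

```python
import numpy as np

def sensitivity(L, early=1e-6, photon=1e-2, late=1.0, k=1.0):
    """Contrast sensitivity ~ 1/sqrt(total equivalent contrast noise)."""
    noise = early / L**2 + photon / L + late   # early, photon, late noise
    return k / np.sqrt(noise)

for L in [1e-6, 1e-5, 1e-3, 1e-1, 1e1]:   # luminance (arbitrary units)
    print(f"L={L:.0e}  S={sensitivity(L):.3g}")

# Early-noise regime: S grows ~ L (linear law); photon regime: S ~ sqrt(L)
# (de Vries-Rose law); late-noise regime: S is constant (Weber's law).
```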

  19. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.

  20. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function.
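    For orientation, the classical Blahut-Arimoto iteration that the paper extends to actions and costs alternates between the conditional and the marginal of the reproduction variable. A minimal rate-distortion version for a finite source, without the action-dependent extension, might look like this:

```python
import numpy as np

def blahut_arimoto_rd(p_x, d, s, iters=200):
    """Classical Blahut-Arimoto for R(D) on finite alphabets.
    p_x: source pmf; d[i, j]: distortion matrix; s < 0: slope parameter."""
    q = np.full(d.shape[1], 1.0 / d.shape[1])       # reproduction marginal
    for _ in range(iters):
        w = q * np.exp(s * d)                       # shape (|X|, |Xhat|)
        cond = w / w.sum(axis=1, keepdims=True)     # p(xhat | x)
        q = p_x @ cond                              # updated marginal
    rate = np.sum(p_x[:, None] * cond * np.log2(cond / q))
    distortion = np.sum(p_x[:, None] * cond * d)
    return rate, distortion

# Binary uniform source with Hamming distortion: R(D) = 1 - H(D).
p_x = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(blahut_arimoto_rd(p_x, d, s=-3.0))   # ~ (0.725, 0.047)
```

    For this binary symmetric example, the output lands on the known curve R(D) = 1 - H(D); sweeping the slope s traces out the whole rate-distortion function.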

  1. Dress Codes Blues: An Exploration of Urban Students' Reactions to a Public High School Uniform Policy

    Science.gov (United States)

    DaCosta, Kneia

    2006-01-01

    This qualitative investigation explores the responses of 22 U.S. urban public high school students when confronted with their newly imposed school uniform policy. Specifically, the study assessed students' appraisals of the policy along with compliance and academic performance. Guided by ecological human development perspectives and grounded in…

  2. From Shannon to Quantum Information Science

    Indian Academy of Sciences (India)

    … dramatically improve the acquisition, transmission, and processing of … number of dimensions, and has been applied to several walks of life … The key idea of Shannon is to model communication as … Let m be the smallest integer not less …

  3. Distributed coding of multiview sparse sources with joint recovery

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren

    2016-01-01

    In support of applications involving multiview sources in distributed object recognition using lightweight cameras, we propose a new method for the distributed coding of sparse sources as visual descriptor histograms extracted from multiview images. The problem is challenging due to the computati… …transform (SIFT) descriptors extracted from multiview images shows that our method leads to bit-rate savings of up to 43% compared to the state-of-the-art distributed compressed sensing method with independent encoding of the sources.

  4. RETRANS - A tool to verify the functional equivalence of automatically generated source code with its specification

    International Nuclear Information System (INIS)

    Miedl, H.

    1998-01-01

    Following the relevant technical standards (e.g. IEC 880), it is necessary to verify each step in the development process of safety-critical software. This also holds for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost, a tool should be used that is developed independently of the code generator. For this purpose, ISTec has developed the tool RETRANS, which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)

  5. Development of in-vessel source term analysis code, tracer

    International Nuclear Information System (INIS)

    Miyagi, K.; Miyahara, S.

    1996-01-01

    Analyses of radionuclide transport in fuel failure accidents (generally referred to as source terms) are considered to be important, especially in severe accident evaluation. The TRACER code has been developed to realistically predict the time-dependent behavior of FPs and aerosols within the primary cooling system for a wide range of fuel failure events. This paper presents the model description, results of a validation study, the recent model advancement status of the code, and results of check-out calculations under reactor conditions. (author)

  6. Consumer-led health-related online sources and their impact on consumers: An integrative review of the literature.

    Science.gov (United States)

    Laukka, Elina; Rantakokko, Piia; Suhonen, Marjo

    2017-04-01

    The aim of the review was to describe consumer-led health-related online sources and their impact on consumers. The review was carried out as an integrative literature review. Quantisation and qualitative content analysis were used as the analysis methods. The most common method used by the included studies was qualitative content analysis. This review identified the consumer-led health-related online sources used between 2009 and 2016 as health-related online communities, health-related social networking sites and health-related rating websites. These sources had an impact on peer support; empowerment; health literacy; physical, mental and emotional wellbeing; illness management; and relationships between healthcare organisations and consumers. Knowledge of the existence of these health-related online sources provides healthcare organisations with an opportunity to listen to their consumers' 'voice'. The sources make healthcare consumers more competent actors in relation to healthcare, and knowledge of them is a valuable resource for healthcare organisations. Additionally, these health-related online sources might create an opportunity to reduce the need for drifting among healthcare services. Healthcare policymakers and organisations could benefit from having a strategy for increasing their health-related online sources.

  7. Non-uniform dispersion of the source-sink relationship alters wavefront curvature.

    Directory of Open Access Journals (Sweden)

    Lucia Romero

    Full Text Available The distribution of cellular source-sink relationships plays an important role in cardiac propagation. It can lead to conduction slowing and block as well as wave fractionation. It is of great interest to unravel the mechanisms underlying evolution in wavefront geometry. Our goal is to investigate the role of the source-sink relationship on wavefront geometry using computer simulations. We analyzed the role of variability in the microscopic source-sink relationship in driving changes in wavefront geometry. The electrophysiological activity of a homogeneous isotropic tissue was simulated using the ten Tusscher and Panfilov 2006 action potential model and the source-sink relationship was characterized using an improved version of the Romero et al. safety factor formulation (SFm2). Our simulations reveal that non-uniform dispersion of the cellular source-sink relationship (dispersion along the wavefront) leads to alterations in curvature. To better understand the role of the source-sink relationship in the process of wave formation, the electrophysiological activity at the initiation of excitation waves in a 1D strand was examined and the source-sink relationship was characterized using the two recently updated safety factor formulations: the SFm2 and the Boyle-Vigmond (SFVB) definitions. The electrophysiological activity at the initiation of excitation waves was intimately related to the SFm2 profiles, while the SFVB led to several counterintuitive observations. Importantly, with the SFm2 characterization, a critical source-sink relationship for initiation of excitation waves was identified, which was independent of the size of the electrode of excitation, membrane excitability, or tissue conductivity. In conclusion, our work suggests that non-uniform dispersion of the source-sink relationship alters wavefront curvature and a critical source-sink relationship profile separates wave expansion from collapse. Our study reinforces the idea that the

  8. COLLAPSE AND FRAGMENTATION OF MAGNETIC MOLECULAR CLOUD CORES WITH THE ENZO AMR MHD CODE. I. UNIFORM DENSITY SPHERES

    International Nuclear Information System (INIS)

    Boss, Alan P.; Keiser, Sandra A.

    2013-01-01

    Magnetic fields are important contributors to the dynamics of collapsing molecular cloud cores, and can have a major effect on whether collapse results in a single protostar or fragmentation into a binary or multiple protostar system. New models are presented of the collapse of magnetic cloud cores using the adaptive mesh refinement code Enzo 2.0. The code was used to calculate the ideal magnetohydrodynamics (MHD) of initially spherical, uniform-density, uniformly rotating clouds with density perturbations, i.e., the Boss and Bodenheimer standard isothermal test case for three-dimensional (3D) hydrodynamics codes. After first verifying that Enzo reproduces the binary fragmentation expected for the non-magnetic test case, a large set of models was computed with varied initial magnetic field strengths and directions with respect to the cloud core axis of rotation (parallel or perpendicular), density perturbation amplitudes, and equations of state. Three significantly different outcomes resulted: (1) contraction without sustained collapse, forming a denser cloud core; (2) collapse to form a single protostar with significant spiral arms; and (3) collapse and fragmentation into binary or multiple protostar systems, with multiple spiral arms. Comparisons are also made with previous MHD calculations of similar clouds with a barotropic equation of state. These results for the collapse of initially uniform-density spheres illustrate the central importance of both magnetic field direction and field strength for determining the outcome of dynamic protostellar collapse.

  9. Design and evaluation of an imaging spectrophotometer incorporating a uniform light source.

    Science.gov (United States)

    Noble, S D; Brown, R B; Crowe, T G

    2012-03-01

    Accounting for light that is diffusely scattered from a surface is one of the practical challenges in reflectance measurement. Integrating spheres are commonly used for this purpose in point measurements of reflectance and transmittance. This solution is not directly applicable to a spectral imaging application for which diffuse reflectance measurements are desired. In this paper, an imaging spectrophotometer design is presented that employs a uniform light source to provide diffuse illumination. This creates the inverse measurement geometry to the directional illumination/diffuse reflectance mode typically used for point measurements. The final system had a spectral range between 400 and 1000 nm with a 5.2 nm resolution, a field of view of approximately 0.5 m by 0.5 m, and millimeter spatial resolution. Testing results indicate illumination uniformity typically exceeding 95% and reflectance precision better than 1.7%.

  10. Java Source Code Analysis for API Migration to Embedded Systems

    Energy Technology Data Exchange (ETDEWEB)

    Winter, Victor [Univ. of Nebraska, Omaha, NE (United States); McCoy, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guerrero, Jonathan [Univ. of Nebraska, Omaha, NE (United States); Reinke, Carl Werner [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perry, James Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java’s APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.

  11. Bit-Wise Arithmetic Coding For Compression Of Data

    Science.gov (United States)

    Kiely, Aaron

    1996-01-01

    Bit-wise arithmetic coding is a data-compression scheme intended especially for use with uniformly quantized data from a source with a Gaussian, Laplacian, or similar probability distribution function. Code words are of fixed length, and bits are treated as being independent. The scheme serves as a means of progressive transmission or of overcoming buffer-overflow or rate-constraint limitations that sometimes arise when data compression is used.
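    A rough sense of why bit-wise independence still compresses well can be had by estimating per-bit-plane entropies of a quantized Laplacian source. This back-of-envelope sketch is an illustration, not the NASA scheme itself; it bounds the rate an arithmetic coder achieves when each bit position is coded with its own fixed probability:

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

rng = np.random.default_rng(1)
q = np.round(rng.laplace(scale=4.0, size=100_000)).astype(int)  # quantized data

nbits = 8
planes = (np.abs(q)[:, None] >> np.arange(nbits)) & 1   # magnitude bit-planes

# One independent binary model per bit position, plus a sign bit.
rate = sum(h2(col.mean()) for col in planes.T) + h2((q < 0).mean())
print(f"~{rate:.2f} bits/sample vs {nbits + 1} for fixed-length words")
```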

  12. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Science.gov (United States)

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether there is plagiarism in source code or not. Traditional detection algorithms cannot fit this…

  13. Automating RPM Creation from a Source Code Repository

    Science.gov (United States)

    2012-02-01

    …apps/usr --with-libpq=/apps/postgres
    make
    rm -rf $RPM_BUILD_ROOT
    umask 0077
    mkdir -p $RPM_BUILD_ROOT/usr/local/bin
    mkdir -p $RPM_BUILD_ROOT…

    …from a source code repository.

    %pre
    %prep
    %setup
    %build
    ./autogen.sh ; ./configure --with-db=/apps/db --with-libpq=/apps/postgres
    make

  14. Source Coding in Networks with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2016-01-01

    results to a joint source coding and denoising problem. We consider a network with a centralized topology and a given weighted sum-rate constraint, where the received signals at the center are to be fused to maximize the output SNR while enforcing no linear distortion. We show that one can design...

  15. An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem

    Directory of Open Access Journals (Sweden)

    Tu Zhenyu

    2005-01-01

    Full Text Available A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty in the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallel and serial concatenated codes, and particularly parallel and serial turbo codes where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provenly optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.

  16. Analysis of Paralleling Limited Capacity Voltage Sources by Projective Geometry Method

    Directory of Open Access Journals (Sweden)

    Alexandr Penin

    2014-01-01

    Full Text Available The droop current-sharing method for voltage sources of limited capacity is considered. The influence of the equalizing resistors and the load resistor on the uniform distribution of the relative values of the currents is investigated, where the actual loading corresponds to the capacity of a concrete source. Novel concepts for the quantitative representation of the operating regimes of the sources are introduced using the projective geometry method.
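    Although the paper's contribution is the projective-geometry representation, the underlying droop circuit is a small linear system and is easy to sketch. Below, each source i presents an open-circuit setpoint V_i behind a droop (equalizing) resistor R_i into a common bus loaded by R_load; all values are invented for illustration:

```python
import numpy as np

def droop_share(v_set, r_droop, r_load):
    """Solve V_i - I_i*R_i = V_bus with sum(I_i) = V_bus / R_load."""
    v_set, r_droop = np.asarray(v_set), np.asarray(r_droop)
    v_bus = np.sum(v_set / r_droop) / (np.sum(1.0 / r_droop) + 1.0 / r_load)
    currents = (v_set - v_bus) / r_droop
    return v_bus, currents

# Two equal 12 V sources: doubling one droop resistor halves that source's share.
v_bus, i = droop_share([12.0, 12.0], [0.1, 0.2], r_load=6.0)
print(v_bus, i, i / i.sum())   # bus voltage, currents, relative shares
```

    In droop designs, choosing each R_i roughly inversely proportional to a source's capacity typically makes each source's current share proportional to its capacity, which is what uniform distribution of the relative currents amounts to.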

  17. Coded aperture imaging of alpha source spatial distribution

    International Nuclear Information System (INIS)

    Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul

    2012-01-01

    The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised: a 226 Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the 226 Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the de-coded image of the source. Signal to Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and SNR values obtained from simulations. Possible reasons for the lower SNR obtained for the experimental CAI study are discussed.
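    The decoding step described above is, at heart, a cyclic correlation. The following sketch substitutes a random mask for the Singer cyclic difference set (so the sidelobes are only approximately flat) and uses noiseless counts; it recovers two point sources by balanced correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 400, 57                                  # 400 pixels, 57 open holes
mask = np.zeros(n)
mask[rng.choice(n, size=k, replace=False)] = 1  # random stand-in mask

source = np.zeros(n)
source[50], source[120] = 1000.0, 500.0         # two point sources

def cconv(a, b):
    """Cyclic convolution via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

detector = cconv(source, mask)                  # detector counts

# Balanced-correlation decoding array: open -> 1, closed -> -k/(n-k).
g = np.where(mask == 1, 1.0, -k / (n - k))
decoded = np.real(np.fft.ifft(np.fft.fft(detector) * np.conj(np.fft.fft(g))))
print(sorted(np.argsort(decoded)[-2:]))         # -> [50, 120]
```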

  18. Natural ventilation in an enclosure induced by a heat source distributed uniformly over a vertical wall

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z.D.; Li, Y.; Mahoney, J. [CSIRO Building, Construction and Engineering, Advanced Thermo-Fluids Technologies Lab., Highett, VIC (Australia)

    2001-05-01

    A simple multi-layer stratification model is suggested for displacement ventilation in a single-zone building driven by a heat source distributed uniformly over a vertical wall. Theoretical expressions are obtained for the stratification interface height and ventilation flow rate and compared with those obtained by an existing model available in the literature. Experiments were also carried out using a recently developed fine-bubble modelling technique. It was shown that the experimental results obtained using the fine-bubble technique are in good agreement with the theoretical predictions. (Author)

  19. Combined Coding And Modulation Using Runlength Limited Error ...

    African Journals Online (AJOL)

    In this paper we propose a Combined Coding and Modulation (CCM) scheme employing RLL/ECCs and MPSK modulation, as well as RLL/ECC codes and BFSK/MPSK modulation, with a view to optimising the use of channel bandwidth. The CCM codes and their trellises are designed and their error performances simulated in AWGN ...

  20. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    Full Text Available This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.

  1. use of the RESRAD-BUILD code to calculate building surface contamination limits

    International Nuclear Information System (INIS)

    Faillace, E.R.; LePoire, D.; Yu, C.

    1996-01-01

    Surface contamination limits in buildings were calculated for 226 Ra, 230 Th, 232 Th, and natural uranium on the basis of a 1 mSv y⁻¹ (100 mrem y⁻¹) dose limit. The RESRAD-BUILD computer code was used to calculate these limits for two scenarios: building occupancy and building renovation. RESRAD-BUILD is a pathway analysis model designed to evaluate the potential radiological dose incurred by individuals working or living inside a building contaminated with radioactive material. Six exposure pathways are considered in the RESRAD-BUILD code: (1) external exposure directly from the source; (2) external exposure from materials deposited on the floor; (3) external exposure due to air submersion; (4) inhalation of airborne radioactive particles; (5) inhalation of indoor radon progeny aerosols; and (6) inadvertent ingestion of radioactive material, either directly from the sources or from materials deposited on the surfaces. The code models point, line, area, and volume sources and calculates the effects of radiation shielding, building ventilation, and ingrowth of radioactive decay products. A sensitivity analysis was performed to determine how variations in input parameters would affect the surface contamination limits. In most cases considered, inhalation of airborne radioactive particles was the primary exposure pathway. However, the direct external exposure contribution from surfaces contaminated with 226 Ra was in some cases the dominant pathway for building occupancy depending on the room size, ventilation rates, and surface release fractions. The surface contamination limits are most restrictive for 232 Th, followed by 230 Th, natural uranium, and 226 Ra. The results are compared with the surface contamination limits in the Nuclear Regulatory Commission's Regulatory Guide 1.86, which are most restrictive for 226 Ra and 230 Th, followed by 232 Th, and are least restrictive for natural uranium.

  2. Development of uniform hazard response spectra for rock sites considering line and point sources of earthquakes

    International Nuclear Information System (INIS)

    Ghosh, A.K.; Kushwaha, H.S.

    2001-12-01

    Traditionally, the seismic design basis ground motion has been specified by normalised response spectral shapes and peak ground acceleration (PGA). The mean recurrence interval (MRI) used to be computed for the PGA only. It is shown that the MRIs associated with such response spectra are not the same at all frequencies. The present work develops uniform hazard response spectra, i.e., spectra having the same MRI at all frequencies, for line and point sources of earthquakes by using a large number of strong motion accelerograms recorded on rock sites. The sensitivity of the results to changes in various parameters has also been presented. This work is an extension of an earlier work for areal sources of earthquakes. These results will help to determine the seismic hazard at a given site and the associated uncertainties. (author)

  3. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Full Text Available Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  4. Analysis of the minority actinides transmutation in a sodium fast reactor with uniform load pattern by the MCNPX-CINDER code

    International Nuclear Information System (INIS)

    Ochoa Valero, R.; Garcia-Herranz, N.; Aragones, J. M.

    2010-01-01

    The aim of this study is to evaluate the transmutation of minor actinides in sodium fast reactors (SFR) assuming a uniform load pattern. The isotopic evolution of the actinides with burnup is determined, along with the evolution of the reactivity and the reactivity coefficients. For this, the MCNPX neutron transport code coupled with the inventory code CINDER90 is used.

  5. The Astrophysics Source Code Library: Supporting software publication and citation

    Science.gov (United States)

    Allen, Alice; Teuben, Peter

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net), established in 1999, is a free online registry for source codes used in research that has appeared in, or been submitted to, peer-reviewed publications. The ASCL is indexed by the SAO/NASA Astrophysics Data System (ADS) and Web of Science and is citable by using the unique ascl ID assigned to each code. In addition to registering codes, the ASCL can house archive files for download and assign them DOIs. The ASCL advocates for software citation on par with article citation, participates in multidisciplinary events such as Force11, OpenCon, and the annual Workshop on Sustainable Software for Science, works with journal publishers, and organizes Special Sessions and Birds of a Feather meetings at national and international conferences such as Astronomical Data Analysis Software and Systems (ADASS), European Week of Astronomy and Space Science, and AAS meetings. In this presentation, I will discuss some of the challenges of gathering credit for publishing software and ideas and efforts from other disciplines that may be useful to astronomy.

  6. Vallor, Shannon. Technology and the Virtues

    DEFF Research Database (Denmark)

    Friis, Jan Kyrre Berg

    2017-01-01

    Technology and the Virtues is the first analysis of emerging technologies and the role of virtue ethics in an attempt to make us understand the urgency of immediate moral transformation. It is written by phenomenologist and philosopher Shannon Vallor, a William J. Rewak Professor at Santa Clara...

  7. 2. From Shannon To Quantum Information Science

    Indian Academy of Sciences (India)

    From Shannon to Quantum Information Science - Mixed States. Rajiah Simon. General Article, Resonance – Journal of Science Education, Volume 7, Issue 5, May 2002, pp. 16-33. Keywords: mixed states; entanglement witnesses; partial transpose; quantum computers; von Neumann entropy.

  8. From Shannon to Quantum Information Science

    Indian Academy of Sciences (India)

    From Shannon to Quantum Information Science - Ideas and Techniques. Rajiah Simon. General Article, Resonance – Journal of Science Education, Volume 7, Issue 2, February 2002, pp. 66-85.

  9. Source Code Vulnerabilities in IoT Software Systems

    Directory of Open Access Journals (Sweden)

    Saleh Mohamed Alnaeli

    2017-08-01

    Full Text Available An empirical study that examines the usage of known vulnerable statements in software systems developed in C/C++ and used for IoT is presented. The study is conducted on 18 open source systems comprising millions of lines of code and containing thousands of files. Static analysis methods are applied to each system to determine the number of unsafe commands (e.g., strcpy, strcmp, and strlen) that are well known among research communities to cause potential risks and security concerns, thereby decreasing a system’s robustness and quality. These unsafe statements are banned by many companies (e.g., Microsoft). The use of these commands should be avoided from the start when writing new code, and they should be removed from legacy code over time, as recommended by new C/C++ language standards. Each system is analyzed and the distribution of the known unsafe commands is presented. Historical trends in the usage of the unsafe commands in 7 of the systems are presented to show how the studied systems evolved over time with respect to the vulnerable code. The results show that the most prevalent unsafe command used for most systems is memcpy, followed by strlen. These results can be used to help train software developers on secure coding practices so that they can write higher quality software systems.
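
    As a rough illustration of this kind of static count (a textual approximation, not the study's actual tooling), the following sketch tallies the named unsafe calls across a C/C++ source tree; all names are of my own choosing.

        import re
        from collections import Counter
        from pathlib import Path

        UNSAFE = ("strcpy", "strcat", "strcmp", "strlen", "memcpy", "sprintf", "gets")
        CALL = re.compile(r"\b(%s)\s*\(" % "|".join(UNSAFE))
        SUFFIXES = {".c", ".cc", ".cpp", ".cxx", ".h", ".hpp"}

        def count_unsafe_calls(root):
            # Tally occurrences of each unsafe call over all C/C++ files.
            hits = Counter()
            for path in Path(root).rglob("*"):
                if path.is_file() and path.suffix in SUFFIXES:
                    text = path.read_text(errors="ignore")
                    hits.update(m.group(1) for m in CALL.finditer(text))
            return hits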

  10. Tangent: Automatic Differentiation Using Source Code Transformation in Python

    OpenAIRE

    van Merriënboer, Bart; Wiltschko, Alexander B.; Moldovan, Dan

    2017-01-01

    Automatic differentiation (AD) is an essential primitive for machine learning programming systems. Tangent is a new library that performs AD using source code transformation (SCT) in Python. It takes numeric functions written in a syntactic subset of Python and NumPy as input, and generates new Python functions which calculate a derivative. This approach to automatic differentiation is different from existing packages popular in machine learning, such as TensorFlow and Autograd. Advantages ar...

  11. Ambiguity Resolution for Phase-Based 3-D Source Localization under Fixed Uniform Circular Array.

    Science.gov (United States)

    Chen, Xin; Liu, Zhen; Wei, Xizhang

    2017-05-11

    Under a fixed uniform circular array (UCA), 3-D parameter estimation of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also appears in a recently proposed phase-based algorithm. In this paper, by using the centro-symmetry of a UCA with an even number of sensors, the source's angles and range can be decoupled, and a novel algorithm named subarray grouping and ambiguity searching (SGAS) is proposed to resolve the angle ambiguity. In the SGAS algorithm, each subarray formed by two couples of centro-symmetric sensors yields a batch of candidate results under different ambiguities, and by searching for the nearest values among subarrays, which always correspond to the correct ambiguity, rough angle estimation with no ambiguity is realized. The unambiguous angles are then employed to resolve the phase ambiguity in a phase-based 3-D parameter estimation algorithm, and the source's range, as well as more precise angles, can be obtained. Moreover, to improve the practical performance of SGAS, the optimal structure of the subarrays and subarray selection criteria are further investigated. Simulation results demonstrate the satisfying performance of the proposed method in 3-D source localization.
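
    The nearest-candidate search at the heart of SGAS can be sketched as follows. For brevity this hypothetical version uses two linear baselines in place of the paper's UCA subarray geometry; the principle — only the correct ambiguity yields candidates that nearly coincide across subarrays — is the same.

        import numpy as np

        def doa_candidates(wrapped_phase, baseline, wavelength):
            # Every integer ambiguity k gives one candidate direction sin(theta).
            kmax = int(np.ceil(baseline / wavelength)) + 1
            k = np.arange(-kmax, kmax + 1)
            s = (wrapped_phase + 2 * np.pi * k) * wavelength / (2 * np.pi * baseline)
            return np.arcsin(s[np.abs(s) <= 1.0])

        def resolve_ambiguity(phase1, b1, phase2, b2, wavelength):
            # Pick the pair of candidates, one per subarray, that lie closest.
            c1 = doa_candidates(phase1, b1, wavelength)
            c2 = doa_candidates(phase2, b2, wavelength)
            gap = np.abs(c1[:, None] - c2[None, :])
            i, j = np.unravel_index(np.argmin(gap), gap.shape)
            return 0.5 * (c1[i] + c2[j])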

  12. Low-emittance uniform density Cs+ sources for heavy ion fusion accelerators studies

    International Nuclear Information System (INIS)

    Eylon, S.; Henestroza, E.; Garvey, T.; Johnson, R.; Chupp, W.

    1991-04-01

    Low-emittance (high-brightness) Cs+ thermionic sources were developed for the heavy ion induction linac experiment MBE-4 at LBL. The MBE-4 linac accelerates four 10 mA beams from 200 keV to 900 keV while amplifying the current up to a factor of nine. Recent studies of the transverse beam dynamics suggested that characteristics of the injector geometry were contributing to the normalized transverse emittance growth. Phase-space and current density distribution measurements of the beam extracted from the injector revealed overfocusing of the outermost rays, causing a hollow density profile. We report on the performance of a 5 mA scraped-beam source (which eliminates the outermost beam rays in the diode) and on the design of an improved 10 mA source. The new source is based on EGUN calculations, which indicated that a beam with good emittance and uniform current density could be obtained by modifying the cathode Pierce electrodes and using a spherical emitting surface. Measurements of the beam current density profile on a test stand were found to be in agreement with the numerical simulations. 3 refs., 6 figs

  13. Code of practice for the use of sealed radioactive sources in borehole logging (1998)

    International Nuclear Information System (INIS)

    1989-12-01

    The purpose of this code is to establish working practices, procedures and protective measures which will aid in keeping doses, arising from the use of borehole logging equipment containing sealed radioactive sources, as low as reasonably achievable and to ensure that the dose-equivalent limits specified in the National Health and Medical Research Council's radiation protection standards are not exceeded. This code applies to all situations and practices where a sealed radioactive source or sources are used in wireline logging to investigate the physical properties of the geological sequence, or any fluids contained in the geological sequence, or the properties of the borehole itself, whether casing, mudcake or borehole fluids. The radiation protection standards specify dose-equivalent limits for two categories: radiation workers and members of the public. 3 refs., tabs., ills

  14. Towards Holography via Quantum Source-Channel Codes

    Science.gov (United States)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  15. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, K [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Department of Medical Physics ' Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Takashina, M; Koizumi, M [Department of Medical Physics ' Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Das, I; Moskvin, V [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)

    2014-06-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) have not been reported for the GATE and PHITS codes; they are studied here for PDD and proton range, in comparison with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to the calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the results for PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics model, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health

  16. Health physics source document for codes of practice

    International Nuclear Information System (INIS)

    Pearson, G.W.; Meggitt, G.C.

    1989-05-01

    Personnel preparing codes of practice often require basic health physics information or advice relating to radiological protection problems, and this document is written primarily to supply such information. Certain technical terms used in the text are explained in the extensive glossary. Due to the pace of change in the field of radiological protection it is difficult to produce an up-to-date document. This document was compiled during 1988, however, and therefore contains the principal changes brought about by the introduction of the Ionising Radiations Regulations (1985). The paper covers the nature of ionising radiation, its biological effects and the principles of control. It is hoped that the document will provide a useful source of information for both codes of practice and wider areas, and stimulate readers to study radiological protection issues in greater depth. (author)

  17. Running the source term code package in Elebra MX-850

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-01-01

    The source term code package (STCP) is one of the main tools applied in calculations of the behavior of fission products from nuclear power plants. It is a set of computer codes to assist the calculation of the radioactive materials released from the metallic containment of power reactors to the environment during a severe reactor accident. The original version of STCP runs on SDC computer systems, but as it has been written in FORTRAN 77, it is possible to run it on other systems such as IBM, Burroughs, Elebra, etc. The Elebra MX-8500 version of STCP contains 5 codes: March 3, Trapmelt, Tcca, Vanessa and Nava. The example presented in this report considers a small LOCA accident in a PWR-type reactor. (M.I.)

  18. Microdosimetry computation code of internal sources - MICRODOSE 1

    International Nuclear Information System (INIS)

    Li Weibo; Zheng Wenzhong; Ye Changqing

    1995-01-01

    This paper describes a microdosimetry computation code, MICRODOSE 1, on the basis of the following methods: (1) the method of calculating f1(z) for charged particles in unit-density tissue; (2) the method of calculating f(z) for a point source; (3) the method of applying Fourier transform theory to the calculation of the compound Poisson process; (4) the method of using the fast Fourier transform technique to determine f(z). Some computed examples based on the code MICRODOSE 1 are given, including alpha particles emitted from 239Pu in alveolar lung tissue and from the radon progeny RaA and RaC in the human respiratory tract. (author). 13 refs., 6 figs
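
    Methods (3) and (4) — assembling the multi-event distribution f(z) from the single-event distribution via transforms — amount to the standard compound-Poisson construction, which can be sketched on a uniform z-grid (a generic illustration, not the MICRODOSE 1 source):

        import numpy as np

        def multi_event_distribution(f1, mean_events):
            # Compound Poisson via characteristic functions:
            # F(k) = exp(lambda * (F1(k) - 1)), i.e. f(z) is the Poisson-weighted
            # sum of self-convolutions of f1(z). f1 must be a probability vector
            # on a uniform grid of specific energy z.
            F1 = np.fft.fft(f1)
            f = np.fft.ifft(np.exp(mean_events * (F1 - 1.0)))
            return np.real(f)  # includes the zero-event atom at z = 0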

  19. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.

  20. Towards Quantifying a Wider Reality: Shannon Exonerata

    Directory of Open Access Journals (Sweden)

    Robert E. Ulanowicz

    2011-10-01

    Full Text Available In 1872 Ludwig von Boltzmann derived a statistical formula to represent the entropy (an apophasis) of a highly simplistic system. In 1948 Claude Shannon independently formulated the same expression to capture the positivist essence of information. Such contradictory thrusts engendered decades of ambiguity concerning exactly what is conveyed by the expression. Resolution of this widespread confusion is possible by invoking the third law of thermodynamics, which requires that entropy be treated in a relativistic fashion. Doing so parses the Boltzmann expression into separate terms that segregate apophatic entropy from positivist information. Possibly more importantly, the decomposition itself portrays a dialectic-like agonism between constraint and disorder that may provide a more appropriate description of the behavior of living systems than is possible using conventional dynamics. By quantifying the apophatic side of evolution, the Shannon approach to information achieves what no other treatment of the subject affords: it opens the window on a more encompassing perception of reality.
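
    One common reading of this decomposition, for a joint distribution p(i, j) such as normalised inter-compartment flows, splits the total entropy H into the average mutual constraint (information) plus a residual disorder term. A minimal sketch, with function and variable names of my own choosing:

        import numpy as np

        def entropy_decomposition(T):
            # T[i, j] >= 0, e.g. flows between compartments i and j.
            p = T / T.sum()
            pi = p.sum(axis=1, keepdims=True)   # row marginals
            pj = p.sum(axis=0, keepdims=True)   # column marginals
            nz = p > 0
            H = -np.sum(p[nz] * np.log2(p[nz]))                   # total entropy
            ami = np.sum(p[nz] * np.log2(p[nz] / (pi * pj)[nz]))  # constraint
            return H, ami, H - ami   # H = constraint + residual disorder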

  1. Source convergence diagnostics using Boltzmann entropy criterion: application to different OECD/NEA criticality benchmarks with the 3-D Monte Carlo code Tripoli-4

    International Nuclear Information System (INIS)

    Dumonteil, E.; Le Peillet, A.; Lee, Y. K.; Petit, O.; Jouanne, C.; Mazzolo, A.

    2006-01-01

    The measurement of the stationarity of Monte Carlo fission source distributions in k_eff calculations plays a central role in the ability to discriminate between fake and 'true' convergence (in the case of a high dominance ratio or of loosely coupled systems). Recent theoretical developments have been made in the study of source convergence diagnostics using Shannon entropy. We first recall those results, and then generalize them using the expression of the Boltzmann entropy, highlighting the gain in terms of the various physical problems that can be treated. Finally we present the results of several OECD/NEA benchmarks using the Tripoli-4 Monte Carlo code, enhanced with this new criterion. (authors)
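
    The Shannon form of the diagnostic is simple to state: bin the fission source on a spatial mesh each cycle and track the entropy of the binned distribution, which settles to a plateau once the source is stationary. A minimal sketch (illustrative only, not Tripoli-4 code):

        import numpy as np

        def source_entropy(bin_counts):
            # Shannon entropy (bits) of a binned fission-source distribution;
            # bin_counts is a numpy array of per-mesh-cell source counts.
            p = bin_counts[bin_counts > 0] / bin_counts.sum()
            return -np.sum(p * np.log2(p))

        # Plotting source_entropy(...) against cycle number, a plateau suggests
        # a stationary source; a drifting curve warns of fake convergence.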

  2. Constructive quantum Shannon decomposition from Cartan involutions

    International Nuclear Information System (INIS)

    Drury, Byron; Love, Peter

    2008-01-01

    The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions

  3. Constructive quantum Shannon decomposition from Cartan involutions

    Energy Technology Data Exchange (ETDEWEB)

    Drury, Byron; Love, Peter [Department of Physics, 370 Lancaster Ave., Haverford College, Haverford, PA 19041 (United States)], E-mail: plove@haverford.edu

    2008-10-03

    The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions.

  4. DYMEL code for prediction of dynamic stability limits in boilers

    International Nuclear Information System (INIS)

    Deam, R.T.

    1980-01-01

    Theoretical and experimental studies of hydrodynamic instability in boilers were undertaken to resolve the uncertainties of the existing predictive methods at the time the first Advanced Gas Cooled Reactor (AGR) plant was commissioned. The experiments were conducted on a full-scale electrical simulation of an AGR boiler and revealed inadequacies in the existing methods. As a result a new computer code called DYMEL was developed, based on linearisation and Fourier/Laplace transformation of the one-dimensional boiler equations in both time and space. Besides giving good agreement with local experimental data, the DYMEL code has since shown agreement with stability data from the plant, sodium-heated helical tubes, a gas-heated helical tube and an electrically heated U-tube. The code is now used widely within the U.K. (author)

  5. COMPASS: A source term code for investigating capillary barrier performance

    International Nuclear Information System (INIS)

    Zhou, Wei; Apted, J.J.

    1996-01-01

    A computer code COMPASS, based on a compartment-model approach, has been developed to calculate the near-field source term of a high-level-waste repository under unsaturated conditions. COMPASS is applied to evaluate the expected performance of Richard's (capillary) barriers as backfills to divert infiltrating groundwater at Yucca Mountain. Comparing the release rates of four typical nuclides with and without the Richard's barrier, it is shown that the Richard's barrier significantly decreases the peak release rates from the Engineered-Barrier-System (EBS) into the host rock

  6. The non-uniformity correction factor for the cylindrical ionization chambers in dosimetry of an HDR 192Ir brachytherapy source

    International Nuclear Information System (INIS)

    Majumdar, Bishnu; Patel, Narayan Prasad; Vijayan, V.

    2006-01-01

    The aim of this study is to derive the non-uniformity correction factors for two therapy ionization chambers for dose measurement near a brachytherapy source. Two ionization chambers of 0.6 cc and 0.1 cc volume were used. The measurements in air were performed at distances between 0.8 cm and 20 cm from the source in a specially designed measurement jig. The non-uniformity correction factors were derived from the measured values. The experimentally derived factors were compared with the theoretically calculated non-uniformity correction factors, and close agreement was found between the two. The experimentally derived non-uniformity correction factors support the anisotropic theory. (author)

  7. Limitations in the Traditional Code of Journalistic Responsibility.

    Science.gov (United States)

    Capo, James A.

    Objectivity, truth, freedom, and social responsibility--key principles in contemporary media ethics--fail to provide a practical, coherent code for responsible journalism. During the initial television coverage of Watergate on June 19, 1972, for example, the three television networks all observed these standards in their reporting, yet presented…

  8. 75 FR 31464 - Certification of the Attorney General; Shannon County, SD

    Science.gov (United States)

    2010-06-03

    ... DEPARTMENT OF JUSTICE Certification of the Attorney General; Shannon County, SD In accordance with... within the scope of the determinations of the Attorney General and the Director of the Census made under...., Attorney General of the United States. [FR Doc. 2010-13285 Filed 6-2-10; 8:45 am] BILLING CODE P ...

  9. Phase accuracy evaluation for phase-shifting fringe projection profilometry based on uniform-phase coded image

    Science.gov (United States)

    Zhang, Chunwei; Zhao, Hong; Zhu, Qian; Zhou, Changquan; Qiao, Jiacheng; Zhang, Lu

    2018-06-01

    Phase-shifting fringe projection profilometry (PSFPP) is a three-dimensional (3D) measurement technique widely adopted in industrial measurement. It recovers the 3D profile of measured objects with the aid of the fringe phase. The phase accuracy is among the dominant factors that determine the 3D measurement accuracy. Evaluation of the phase accuracy helps refine adjustable measurement parameters, contributes to evaluating the 3D measurement accuracy, and facilitates improvement of the measurement accuracy. Although PSFPP has been deeply researched, an effective, easy-to-use phase accuracy evaluation method remains to be explored. In this paper, methods based on the uniform-phase coded image (UCI) are presented to accomplish phase accuracy evaluation for PSFPP. These methods work on the principle that the phase value of a UCI can be manually set to any value, and once the phase value of a UCI pixel is the same as that of the corresponding pixel of a sinusoidal fringe pattern, their phase accuracies are approximately equal. The proposed methods provide feasible approaches to evaluating the phase accuracy for PSFPP. Furthermore, they can be used to investigate experimentally the properties of the random and gamma phase errors in PSFPP without the aid of a mathematical model for the random phase error or a large-step phase-shifting algorithm. In this paper, some novel and interesting phenomena are experimentally uncovered with the aid of the proposed methods.
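
    For reference, the quantity under evaluation: the standard N-step phase-shifting retrieval that the sinusoidal patterns rely on (sign conventions vary between authors; this is one common form, not necessarily the paper's):

        import numpy as np

        def wrapped_phase(frames):
            # frames: N images captured with equal phase shifts 2*pi*n/N.
            N = len(frames)
            shifts = 2 * np.pi * np.arange(N) / N
            num = sum(I * np.sin(d) for I, d in zip(frames, shifts))
            den = sum(I * np.cos(d) for I, d in zip(frames, shifts))
            return np.arctan2(-num, den)   # wrapped phase in (-pi, pi]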

  10. Optimization of Coding of AR Sources for Transmission Across Channels with Loss

    DEFF Research Database (Denmark)

    Arildsen, Thomas

    Source coding concerns the representation of information in a source signal using as few bits as possible. In the case of lossy source coding, it is the encoding of a source signal using the fewest possible bits at a given distortion or at the lowest possible distortion given a specified bit rate. … Channel coding is usually applied in combination with source coding to ensure reliable transmission of the (source coded) information at the maximal rate across a channel given the properties of this channel. In this thesis, we consider the coding of auto-regressive (AR) sources, which are sources that can … compared to the case where the encoder is unaware of channel loss. We finally provide an extensive overview of cross-layer communication issues which are important to consider due to the fact that the proposed algorithm interacts with the source coding and exploits channel-related information typically …

  11. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model.

    Science.gov (United States)

    Chao, Anne; Jost, Lou; Hsieh, T C; Ma, K H; Sherwin, William B; Rollins, Lee Ann

    2015-01-01

    Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real world data from starlings.
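
    The mutual-information ("Shannon differentiation") measure can be computed directly from allele counts. A sketch using the plug-in (maximum-likelihood) estimate, which ignores the small-sample corrections the authors discuss; the normalisation by the entropy of the subpopulation weights is one common choice, not necessarily the paper's:

        import numpy as np

        def shannon_differentiation(counts):
            # counts[k, a]: copies of allele a sampled in subpopulation k.
            p = counts / counts.sum()
            pk = p.sum(axis=1, keepdims=True)   # subpopulation weights
            pa = p.sum(axis=0, keepdims=True)   # pooled allele frequencies
            nz = p > 0
            I = np.sum(p[nz] * np.log(p[nz] / (pk * pa)[nz]))  # mutual info (nats)
            Hk = -np.sum(pk[pk > 0] * np.log(pk[pk > 0]))
            return I, I / Hk   # raw and normalised mutual information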

  12. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model.

    Directory of Open Access Journals (Sweden)

    Anne Chao

    Full Text Available Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real world data from starlings.

  13. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    Science.gov (United States)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model tests, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable power-take-off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be

  14. Controlling the Shannon Entropy of Quantum Systems

    Science.gov (United States)

    Xing, Yifan; Wu, Jun

    2013-01-01

    This paper proposes a new quantum control method which controls the Shannon entropy of quantum systems. For both discrete and continuous entropies, controller design methods are proposed based on probability density function control, which can drive the quantum state to any target state. To drive the entropy to any target at any prespecified time, another discretization method is proposed for the discrete entropy case, and the conditions under which the entropy can be increased or decreased are discussed. Simulations are done on both two- and three-dimensional quantum systems, where division and prediction are used to achieve more accurate tracking. PMID:23818819

  15. Controlling the Shannon Entropy of Quantum Systems

    Directory of Open Access Journals (Sweden)

    Yifan Xing

    2013-01-01

    Full Text Available This paper proposes a new quantum control method which controls the Shannon entropy of quantum systems. For both discrete and continuous entropies, controller design methods are proposed based on probability density function control, which can drive the quantum state to any target state. To drive the entropy to any target at any prespecified time, another discretization method is proposed for the discrete entropy case, and the conditions under which the entropy can be increased or decreased are discussed. Simulations are done on both two- and three-dimensional quantum systems, where division and prediction are used to achieve more accurate tracking.

  16. A Comparison of Source Code Plagiarism Detection Engines

    Science.gov (United States)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
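
    In its simplest form, the "long common substrings over token streams" idea reduces to the longest common contiguous run of tokens between two submissions; production engines such as JPlag use more refined greedy tiling, but the following sketch conveys the paired structural metric:

        def longest_common_run(tokens_a, tokens_b):
            # Dynamic programming over token sequences, O(len(a) * len(b)).
            best = 0
            prev = [0] * (len(tokens_b) + 1)
            for ta in tokens_a:
                cur = [0] * (len(tokens_b) + 1)
                for j, tb in enumerate(tokens_b, start=1):
                    if ta == tb:
                        cur[j] = prev[j - 1] + 1
                        best = max(best, cur[j])
                prev = cur
            return best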

  17. Source Code Verification for Embedded Systems using Prolog

    Directory of Open Access Journals (Sweden)

    Frank Flederer

    2017-01-01

    Full Text Available System-relevant embedded software needs to be reliable and, therefore, well tested, especially for aerospace systems. A common technique to verify programs is the analysis of their abstract syntax tree (AST). Tree structures can be elegantly analyzed with the logic programming language Prolog. Moreover, Prolog offers further advantages for a thorough analysis: on the one hand, it natively provides versatile options to efficiently process tree or graph data structures. On the other hand, Prolog's non-determinism and backtracking ease testing of different variations of the program flow with little effort. A rule-based approach with Prolog allows the verification goals to be characterized in a concise and declarative way. In this paper, we describe our approach to verifying the source code of a flash file system with the help of Prolog. The flash file system is written in C++ and has been developed particularly for use in satellites. We transform a given abstract syntax tree of C++ source code into Prolog facts and derive the call graph and the execution sequence (tree), which are then further tested against verification goals. The different program flow branches due to control structures are derived by backtracking as subtrees of the full execution sequence. Finally, these subtrees are verified in Prolog. We illustrate our approach with a case study, where we search for incorrect applications of semaphores in embedded software using the real-time operating system RODOS. We rely on computation tree logic (CTL) and have designed an embedded domain specific language (DSL) in Prolog to express the verification goals.

  18. The acoustic field of a point source in a uniform boundary layer over an impedance plane

    Science.gov (United States)

    Zorumski, W. E.; Willshire, W. L., Jr.

    1986-01-01

    The acoustic field of a point source in a boundary layer above an impedance plane is investigated analytically using Obukhov quasi-potential functions, extending the normal-mode theory of Chunchuzov (1984) to account for the effects of finite ground-plane impedance and source height. The solution is found to be asymptotic to the surface-wave term studied by Wenzel (1974) in the limit of vanishing wind speed, suggesting that normal-mode theory can be used to model the effects of an atmospheric boundary layer on infrasonic sound radiation. Model predictions are derived for noise-generation data obtained by Willshire (1985) at the Medicine Bow wind-turbine facility. Long-range downwind propagation is found to behave as a cylindrical wave, with attenuation proportional to the wind speed, the boundary-layer displacement thickness, the real part of the ground admittance, and the square of the frequency.

  19. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    Full Text Available In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.
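
    The projection stage can be sketched as below; the sign quantisation here is a hypothetical stand-in for the scalar quantiser and LDPC-syndrome stage of the actual scheme, and all names are illustrative.

        import numpy as np

        def audio_hash(tf_representation, n_projections=200, seed=0):
            # Random projections of a time-frequency representation of the
            # audio, quantised to one bit each.
            x = tf_representation.ravel().astype(float)
            rng = np.random.default_rng(seed)
            A = rng.standard_normal((n_projections, x.size)) / np.sqrt(x.size)
            return (A @ x > 0).astype(np.uint8)

        # Provider and user derive A from a shared seed; a small Hamming distance
        # between hashes suggests intact content, while the pattern of
        # disagreements carries information about where the tampering occurred.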

  20. Assessment of 12 CHF prediction methods, for an axially non-uniform heat flux distribution, with the RELAP5 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Ferrouk, M. [Laboratoire du Genie Physique des Hydrocarbures, University of Boumerdes, Boumerdes 35000 (Algeria)], E-mail: m_ferrouk@yahoo.fr; Aissani, S. [Laboratoire du Genie Physique des Hydrocarbures, University of Boumerdes, Boumerdes 35000 (Algeria); D' Auria, F.; DelNevo, A.; Salah, A. Bousbia [Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione, Universita di Pisa (Italy)

    2008-10-15

    The present article covers the evaluation of the performance of twelve critical heat flux methods/correlations published in the open literature. The study concerns the simulation of an axially non-uniform heat flux distribution with the RELAP5 computer code in a single boiling water reactor channel benchmark problem. The nodalization scheme employed for the particular geometry considered, as modelled in the RELAP5 code, is described. For this purpose a review of critical heat flux models/correlations applicable to non-uniform axial heat profiles is provided. Simulation results using the RELAP5 code and those obtained from our computer program, based on three types of prediction methods, namely the local-conditions, F-factor and boiling-length-average approaches, were compared.

  1. The calculation of wall and non-uniformity correction factors for the BIPM air-kerma standard for 60Co using the Monte Carlo code PENELOPE

    International Nuclear Information System (INIS)

    Burns, D.T.

    2002-01-01

    Traditionally, the correction factor k_wall for attenuation and scatter in the walls of cavity ionization chamber primary standards has been evaluated experimentally using an extrapolation method. During the past decade, there have been a number of Monte Carlo calculations of k_wall indicating that for certain ionization chamber types the extrapolation method may not be valid. In particular, values for k_wall have been proposed that, if adopted by each laboratory concerned, would have a significant effect on the results of international comparisons of air-kerma primary standards. The calculations have also proposed new values for the axial component k_an of the point-source uniformity correction. Central to the results of international comparisons is the BIPM air-kerma standard. Unlike most others, the BIPM standard is of the parallel-plate design for which the extrapolation method for evaluating k_wall should be valid. The value in use at present is k_wall = 1.0026 (standard uncertainty 0.0008). Rogers and Treurniet calculated the value k_wall = 1.0014 for the BIPM standard, which is in moderate agreement with the value in use (no overall uncertainty was given). However, they also calculated k_an = 1.0024 (statistical uncertainty 0.0003), which is very different from the value k_an = 0.9964 (0.0007) in use at present for the BIPM standard. A new 60Co facility has recently been installed at the BIPM and the opportunity was taken to re-evaluate the correction factors for the BIPM standard in this new beam. Given that almost all of the Monte Carlo work to date has used the EGS Monte Carlo code, it was decided to use the code PENELOPE. The new source, container, head and collimating jaws were simulated in detail, with more than fifty components being modelled, as shown. This model was used to create a phase-space file in the plane 90 cm from the source. The normalized distribution of photon number with energy is shown, where the various sources of scattered photons are

  2. Experimental benchmark of the NINJA code for application to the Linac4 H- ion source plasma

    Science.gov (United States)

    Briefi, S.; Mattei, S.; Rauner, D.; Lettry, J.; Tran, M. Q.; Fantz, U.

    2017-10-01

    For a dedicated performance optimization of negative hydrogen ion sources applied at particle accelerators, a detailed assessment of the plasma processes is required. Due to the compact design of these sources, diagnostic access is typically limited to optical emission spectroscopy yielding only line-of-sight integrated results. In order to allow for a spatially resolved investigation, the electromagnetic particle-in-cell Monte Carlo collision code NINJA has been developed for the Linac4 ion source at CERN. This code considers the RF field generated by the ICP coil as well as the external static magnetic fields and calculates self-consistently the resulting discharge properties. NINJA is benchmarked at the diagnostically well accessible lab experiment CHARLIE (Concept studies for Helicon Assisted RF Low pressure Ion sourcEs) at varying RF power and gas pressure. A good general agreement is observed between experiment and simulation although the simulated electron density trends for varying pressure and power as well as the absolute electron temperature values deviate slightly from the measured ones. This can be explained by the assumption of strong inductive coupling in NINJA, whereas the CHARLIE discharges show the characteristics of loosely coupled plasmas. For the Linac4 plasma, this assumption is valid. Accordingly, both the absolute values of the accessible plasma parameters and their trends for varying RF power agree well in measurement and simulation. At varying RF power, the H- current extracted from the Linac4 source peaks at 40 kW. For volume operation, this is perfectly reflected by assessing the processes in front of the extraction aperture based on the simulation results where the highest H- density is obtained for the same power level. In surface operation, the production of negative hydrogen ions at the converter surface can only be considered by specialized beam formation codes, which require plasma parameters as input. It has been demonstrated that

  3. Skin carcinogenesis following uniform and non-uniform β irradiation

    International Nuclear Information System (INIS)

    Charles, M.W.; Williams, J.P.; Coggle, J.E.

    1989-01-01

    Where workers or the general public may be exposed to ionising radiation, the irradiation is rarely uniform. The risk figures and dose limits recommended by the International Commission on Radiological Protection (ICRP) are based largely on clinical and epidemiological studies of reasonably uniformly irradiated organs. The paucity of clinical or experimental data for highly non-uniform exposures has prevented the ICRP from providing adequate recommendations. This weakness has led on a number of occasions to the postulate that highly non-uniform exposures of organs could be 100,000 times more carcinogenic than ICRP risk figures would predict. This so-called ''hot-particle hypothesis'' found little support among reputable radiobiologists, but could not be clearly and definitively refuted on the basis of experiment. An experiment, based on skin tumour induction in mouse skin, is described which was developed to test the hypothesis. The skin of 1200 SAS/4 male mice has been exposed to a range of uniform and non-uniform sources of the β emitter 170Tm (E_max ∼ 1 MeV). Non-uniform exposures were produced using arrays of 32 or 8 2-mm diameter sources distributed over the same 8 cm² area as a uniform control source. Average skin doses varied from 2 Gy to 100 Gy. The results for the non-uniform sources show a 30% reduction in tumour incidence for the 32-point array at the lower mean doses compared with the response from uniform sources. The eight-point array showed an order-of-magnitude reduction in tumour incidence compared to uniform irradiation at low doses. These results, in direct contradiction to the ''hot-particle hypothesis'', indicate that non-uniform exposures produce significantly fewer tumours than uniform exposures. (author)

  4. LDPC Codes--Structural Analysis and Decoding Techniques

    Science.gov (United States)

    Zhang, Xiaojie

    2012-01-01

    Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…
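
    The simplest member of the MP family is Gallager's hard-decision bit-flipping rule — far cruder than the belief-propagation decoders whose error floors are at issue, but it shows the iterative check/flip structure; a sketch:

        import numpy as np

        def bit_flip_decode(H, received, max_iters=50):
            # H: binary parity-check matrix; received: hard-decision word (0/1),
            # both numpy integer arrays.
            x = received.copy()
            for _ in range(max_iters):
                syndrome = H @ x % 2
                if not syndrome.any():
                    break                      # all parity checks satisfied
                fails = H.T @ syndrome         # unsatisfied checks per bit
                x[fails == fails.max()] ^= 1   # flip the most-suspect bit(s)
            return x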

  5. Disjointness of Stabilizer Codes and Limitations on Fault-Tolerant Logical Gates

    Science.gov (United States)

    Jochym-O'Connor, Tomas; Kubica, Aleksander; Yoder, Theodore J.

    2018-04-01

    Stabilizer codes are among the most successful quantum error-correcting codes, yet they have important limitations on their ability to fault tolerantly compute. Here, we introduce a new quantity, the disjointness of the stabilizer code, which, roughly speaking, is the number of mostly nonoverlapping representations of any given nontrivial logical Pauli operator. The notion of disjointness proves useful in limiting transversal gates on any error-detecting stabilizer code to a finite level of the Clifford hierarchy. For code families, we can similarly restrict logical operators implemented by constant-depth circuits. For instance, we show that it is impossible, with a constant-depth but possibly geometrically nonlocal circuit, to implement a logical non-Clifford gate on the standard two-dimensional surface code.

  6. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  7. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  8. Modelling RF sources using 2-D PIC codes

    International Nuclear Information System (INIS)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  9. The semaphore codes attached to a Turing machine via resets and their various limits

    OpenAIRE

    Rhodes, John; Schilling, Anne; Silva, Pedro V.

    2016-01-01

    We introduce semaphore codes associated to a Turing machine via resets. Semaphore codes provide an approximation theory for resets. In this paper we generalize the set-up of our previous paper "Random walks on semaphore codes and delay de Bruijn semigroups" to the infinite case by taking the profinite limit of $k$-resets to obtain $(-\omega)$-resets. We mention how this opens new avenues to attack the P versus NP problem.

  10. Determination of non-uniformity correction factors for cylindrical ionization chambers close to 192Ir brachytherapy sources

    International Nuclear Information System (INIS)

    Toelli, H.; Bielajew, A. F.; Mattsson, O.; Sernbo, G.

    1995-01-01

    When ionization chambers are used in brachytherapy dosimetry, the measurements must be corrected for the non-uniformity of the incident photon fluence. The theory for the determination of non-uniformity correction factors, developed by Kondo and Randolph (Rad. Res. 1960), assumes that the electron fluence within the air cavity is isotropic and does not take into account material differences in the chamber wall. The theory was extended by Bielajew (PMB 1990) using an anisotropic electron angular fluence in the cavity. In contrast to the theory of Kondo and Randolph, the anisotropic theory predicts a wall-material dependence in the non-uniformity correction factors. This work presents the experimental determination of non-uniformity correction factors at distances between 10 and 140 mm from an 192Ir source. The experimental work makes use of a PTW23331 chamber and Farmer-type chambers (NE2571 and NE2581) with different materials in the walls. The results of the experiments agree well with the anisotropic theory. Due to the geometrical shape of the NE-type chambers, it is shown that the full length of these chambers, 24.1 mm, is not an appropriate input parameter when theoretical non-uniformity correction factors are evaluated

  11. Schroedinger’s Code: A Preliminary Study on Research Source Code Availability and Link Persistence in Astrophysics

    Science.gov (United States)

    Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley

    2018-05-01

    We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal's 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records according to whether source code was available, and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best 90% of them were available over our testing period.
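
    The link-persistence test amounts to probing each extracted URL and recording the outcome. A sketch using only the Python standard library (some servers reject HEAD requests, so a production checker would fall back to GET):

        import urllib.request

        def probe(url, timeout=10):
            # Returns the HTTP status code, or the exception name on failure.
            req = urllib.request.Request(url, method="HEAD",
                                         headers={"User-Agent": "link-probe"})
            try:
                with urllib.request.urlopen(req, timeout=timeout) as resp:
                    return resp.status
            except Exception as exc:
                return type(exc).__name__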

  12. Fundamental limits of radio interferometers: calibration and source parameter estimation

    OpenAIRE

    Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J.

    2012-01-01

    We use information theory to derive fundamental limits on the capacity to calibrate next-generation radio interferometers, and measure parameters of point sources for instrument calibration, point source subtraction, and data deconvolution. We demonstrate the implications of these fundamental limits, with particular reference to estimation of the 21cm Epoch of Reionization power spectrum with next-generation low-frequency instruments (e.g., the Murchison Widefield Array -- MWA, Precision Arra...

  13. Comparisons of uniform and discrete source distributions for use in bioassay laboratory performance testing

    International Nuclear Information System (INIS)

    Scherpelz, R.I.; MacLellan, J.A.

    1987-09-01

    The Pacific Northwest Laboratory (PNL) is sending a torso phantom with radioactive material uniformly distributed in the lungs to in vivo bioassay laboratories for analysis. Although the radionuclides ultimately chosen for the studies had relatively long half-lives, future accreditation testing will require repeated tests with short half-life test nuclides. Computer modeling was used to simulate the major components of the phantom. Radiation transport calculations were then performed using the computer models to calculate dose rates either 15 cm from the chest or at its surface. For 144Ce and 60Co, three configurations were used for the lung comparison tests. Calculations show that, for most detector positions, a single plug containing 40K located in the back of the heart provides a good approximation to a uniform distribution of 40K. The approximation would lead, however, to a positive bias for the detector reading if the detector were located at the chest surface near the center. Loading the 40K in a uniform layer inside the chest wall is not a good approximation of the uniform distribution in the lungs, because most of the radionuclides would be situated close to the detector location and the only shielding would be the thickness of the chest wall. The calculated dose rates for 60Co and 144Ce were similar at all calculated reference points. 3 refs., 5 figs., 10 tabs

  14. OSSMETER D3.4 – Language-Specific Source Code Quality Analysis

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim); H.J.S. Basten (Bas)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and prototypes of the tools that are needed for source code quality analysis in open source software projects. It builds upon the results of: • Deliverable 3.1 where infra-structure and

  15. Determination of the NPP Krsko reactor core safety limits using the COBRA-III-C code

    International Nuclear Information System (INIS)

    Lajtman, S.; Feretic, D.; Debrecin, N.

    1989-01-01

    This paper presents the NPP Krsko reactor core safety limits determined by the COBRA-III-C code, along with the methodology used. Determination of the reactor core safety limits is part of the reactor protection limits procedure. The results obtained were compared to the safety limits presented in the NPP Krsko FSAR. The COBRA-III-C steady-state thermal-hydraulic calculation of the NPP Krsko design core, used as the basis for the safety limits calculation, is presented as well. (author)

  16. The discrete-dipole-approximation code ADDA: Capabilities and known limitations

    International Nuclear Information System (INIS)

    Yurkin, Maxim A.; Hoekstra, Alfons G.

    2011-01-01

    The open-source code ADDA is described, which implements the discrete dipole approximation (DDA), a method to simulate light scattering by finite 3D objects of arbitrary shape and composition. Besides standard sequential execution, ADDA can run on a multiprocessor distributed-memory system, parallelizing a single DDA calculation. Hence the size parameter of the scatterer is in principle limited only by total available memory and computational speed. ADDA is written in C99 and is highly portable. It provides full control over the scattering geometry (particle morphology and orientation, and incident beam) and allows one to calculate a wide variety of integral and angle-resolved scattering quantities (cross sections, the Mueller matrix, etc.). Moreover, ADDA incorporates a range of state-of-the-art DDA improvements, aimed at increasing the accuracy and computational speed of the method. We discuss both physical and computational aspects of the DDA simulations and provide a practical introduction into performing such simulations with the ADDA code. We also present several simulation results, in particular, for a sphere with size parameter 320 (100-wavelength diameter) and refractive index 1.05.

  17. SOURCES-3A: A code for calculating (α, n), spontaneous fission, and delayed neutron sources and spectra

    International Nuclear Information System (INIS)

    Perry, R.T.; Wilson, W.B.; Charlton, W.S.

    1998-04-01

    In many systems, it is imperative to have accurate knowledge of all significant sources of neutrons due to the decay of radionuclides. These sources can include neutrons resulting from the spontaneous fission of actinides, the interaction of actinide decay α-particles in (α,n) reactions with low- or medium-Z nuclides, and/or delayed neutrons from the fission products of actinides. Numerous systems exist in which these neutron sources could be important. These include, but are not limited to, clean and spent nuclear fuel (UO₂, ThO₂, MOX, etc.), enrichment plant operations (UF₆, PuF₄, etc.), waste tank studies, waste products in borosilicate glass or glass-ceramic mixtures, and weapons-grade plutonium in storage containers. SOURCES-3A is a computer code that determines neutron production rates and spectra from (α,n) reactions, spontaneous fission, and delayed neutron emission due to the decay of radionuclides in homogeneous media (i.e., a mixture of α-emitting source material and low-Z target material) and in interface problems (i.e., a slab of α-emitting source material in contact with a slab of low-Z target material). The code is also capable of calculating the neutron production rates due to (α,n) reactions induced by a monoenergetic beam of α-particles incident on a slab of target material. Spontaneous fission spectra are calculated with evaluated half-life, spontaneous fission branching, and Watt spectrum parameters for 43 actinides. The (α,n) spectra are calculated using an assumed isotropic angular distribution in the center-of-mass system with a library of 89 nuclide decay α-particle spectra, 24 sets of measured and/or evaluated (α,n) cross sections and product nuclide level branching fractions, and functional α-particle stopping cross sections for Z < 106. The delayed neutron spectra are taken from an evaluated library of 105 precursors. The code outputs the magnitude and spectra of the resultant neutron source. It also provides an
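
    The spontaneous fission spectra mentioned above are represented with Watt spectrum parameters. As a sketch of that representation, the Watt form f(E) ∝ exp(-E/a)·sinh(√(bE)) can be evaluated directly; the default parameters below are values commonly quoted for ²⁵²Cf spontaneous fission, not SOURCES-3A library data:

    ```python
    import numpy as np

    def watt_spectrum(E, a=1.025, b=2.926):
        """Unnormalized Watt fission spectrum exp(-E/a)*sinh(sqrt(b*E)).

        a [MeV] and b [1/MeV] are Watt parameters; the defaults are values
        commonly quoted for Cf-252 spontaneous fission (an assumption here).
        """
        return np.exp(-E / a) * np.sinh(np.sqrt(b * E))

    E = np.linspace(1e-3, 15.0, 500)                  # neutron energy grid [MeV]
    f = watt_spectrum(E)
    f /= np.trapz(f, E)                               # normalize to unit area
    print("mean energy [MeV]:", np.trapz(E * f, E))   # roughly 2.1 MeV
    ```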

  18. Using National Drug Codes and drug knowledge bases to organize prescription records from multiple sources.

    Science.gov (United States)

    Simonaitis, Linas; McDonald, Clement J

    2009-10-01

    The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the inpatient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.
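
    The coverage figures quoted above are fractions of total prescription volume whose product code appears in a DKB mapping table, so heavily used codes dominate. A minimal sketch of that metric (the codes and table are toy data):

    ```python
    from collections import Counter

    def coverage_by_volume(prescription_codes, dkb_codes):
        """Fraction of prescription volume whose product code is in the DKB map."""
        volume = Counter(prescription_codes)
        total = sum(volume.values())
        covered = sum(n for code, n in volume.items() if code in dkb_codes)
        return covered / total if total else 0.0

    # Toy data; real NDCs and DKB mapping tables are not reproduced here.
    rx = ["00071-0155-23"] * 90 + ["99999-9999-99"] * 10   # skewed usage
    print(coverage_by_volume(rx, {"00071-0155-23"}))        # -> 0.9
    ```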

  19. Neutron spallation source and the Dubna cascade code

    CERN Document Server

    Kumar, V; Goel, U; Barashenkov, V S

    2003-01-01

    Neutron multiplicity per incident proton, n/p, in collisions of a high energy proton beam with voluminous Pb and W targets has been estimated from the Dubna cascade code and compared with the available experimental data for the purpose of benchmarking the code. Contributions of various atomic and nuclear processes to heat production and the isotopic yield of secondary nuclei are also estimated to assess the heat and radioactivity conditions of the targets. Results obtained from the code show excellent agreement with the experimental data at beam energies E < 1.2 GeV and differ by up to 25% at higher energies. (author)

  20. GRHydro: a new open-source general-relativistic magnetohydrodynamics code for the Einstein toolkit

    International Nuclear Information System (INIS)

    Mösta, Philipp; Haas, Roland; Ott, Christian D; Reisswig, Christian; Mundim, Bruno C; Faber, Joshua A; Noble, Scott C; Bode, Tanja; Löffler, Frank; Schnetter, Erik

    2014-01-01

    We present the new general-relativistic magnetohydrodynamics (GRMHD) capabilities of the Einstein toolkit, an open-source community-driven numerical relativity and computational relativistic astrophysics code. The GRMHD extension of the toolkit builds upon previous releases and implements the evolution of relativistic magnetized fluids in the ideal MHD limit in fully dynamical spacetimes using the same shock-capturing techniques previously applied to hydrodynamical evolution. In order to maintain the divergence-free character of the magnetic field, the code implements both constrained transport and hyperbolic divergence cleaning schemes. We present test results for a number of MHD tests in Minkowski and curved spacetimes. Minkowski tests include aligned and oblique planar shocks, cylindrical explosions, magnetic rotors, Alfvén waves and advected loops, as well as a set of tests designed to study the response of the divergence cleaning scheme to numerically generated monopoles. We study the code’s performance in curved spacetimes with spherical accretion onto a black hole on a fixed background spacetime and in fully dynamical spacetimes by evolutions of a magnetized polytropic neutron star and of the collapse of a magnetized stellar core. Our results agree well with exact solutions where these are available and we demonstrate convergence. All code and input files used to generate the results are available on http://einsteintoolkit.org. This makes our work fully reproducible and provides new users with an introduction to applications of the code. (paper)

  1. Sensitivity analysis and benchmarking of the BLT low-level waste source term code

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1993-07-01

    To evaluate the source term for low-level waste disposal, a comprehensive model had been developed and incorporated into a computer code called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements have also been added to the Leach model of the code. This report consists of two different studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors concluded that the new version represents a significant improvement and is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99 and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are observed to be in general agreement with the experimental data, within the acceptable limits of uncertainty.

  2. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    Science.gov (United States)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. The optimal enzyme rate constants computed in this way for a steady state also yield the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.
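
    The stated link between the most uniform distribution of enzyme states and maximal Shannon information entropy can be checked numerically; the occupation probabilities below are hypothetical, not the paper's computed values:

    ```python
    import numpy as np

    def shannon_entropy(p):
        """Shannon information entropy H = -sum(p_i * ln p_i), in nats."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]                       # convention: 0*log(0) = 0
        return -np.sum(p * np.log(p))

    skewed  = [0.70, 0.20, 0.07, 0.03]     # hypothetical 4-state enzyme occupations
    uniform = [0.25, 0.25, 0.25, 0.25]
    print(shannon_entropy(skewed), shannon_entropy(uniform), np.log(4))
    # The uniform distribution attains the maximum ln(4) ~= 1.386 nats.
    ```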

  3. Stars with shell energy sources. Part 1. Special evolutionary code

    International Nuclear Information System (INIS)

    Rozyczka, M.

    1977-01-01

    A new version of the Henyey-type stellar evolution code is described and tested. It is shown, as a by-product of the tests, that the thermal time scale of the core of a red giant approaching the helium flash is of the order of the evolutionary time scale. The code itself appears to be a very efficient tool for investigations of the helium flash, carbon flash and the evolution of a white dwarf accreting mass. (author)

  4. A modified KdV equation with self-consistent sources in non-uniform media and soliton dynamics

    International Nuclear Information System (INIS)

    Zhang Dajun; Bi Jinbo; Hao Honghai

    2006-01-01

    Two non-isospectral modified KdV equations with self-consistent sources are derived, which correspond to the time-dependent spectral parameter λ satisfying λ_t = λ and λ_t = λ³, respectively. A gauge transformation between the first non-isospectral equation (corresponding to λ_t = λ) and its isospectral counterpart is given, from which exact solutions and conservation laws for the non-isospectral one are easily listed. Besides, solutions to the two non-isospectral modified KdV equations with self-consistent sources are derived by means of the Hirota method and the Wronskian technique, respectively. Non-isospectral dynamics and source effects, including one-soliton characteristics in non-uniform media, two-soliton scattering and special behaviours related to sources (for example, the 'ghost' solitons in the degenerate two-soliton case), are investigated analytically

  5. Hiding the Source Based on Limited Flooding for Sensor Networks.

    Science.gov (United States)

    Chen, Juan; Lin, Zhengkui; Hu, Ying; Wang, Bailing

    2015-11-17

    Wireless sensor networks are widely used to monitor valuable objects such as rare animals or armies. Once an object is detected, the source, i.e., the sensor nearest to the object, generates and periodically sends a packet about the object to the base station. Since attackers can capture the object by localizing the source, many protocols have been proposed to protect source location. Instead of transmitting the packet to the base station directly, typical source location protection protocols first transmit packets randomly for a few hops to a phantom location, and then forward the packets to the base station. The problem with these protocols is that the generated phantom locations are usually not only near the true source but also close to each other. As a result, attackers can easily trace a route back to the source from the phantom locations. To address the above problem, we propose a new protocol for source location protection based on limited flooding, named SLP. Compared with existing protocols, SLP can generate phantom locations that are not only far away from the source, but also widely distributed. It improves source location security significantly with low communication cost. We further propose a protocol, namely SLP-E, to protect source location against more powerful attackers with wider fields of vision. The performance of our SLP and SLP-E are validated by both theoretical analysis and simulation results.

  6. Hiding the Source Based on Limited Flooding for Sensor Networks

    Directory of Open Access Journals (Sweden)

    Juan Chen

    2015-11-01

    Wireless sensor networks are widely used to monitor valuable objects such as rare animals or armies. Once an object is detected, the source, i.e., the sensor nearest to the object, generates and periodically sends a packet about the object to the base station. Since attackers can capture the object by localizing the source, many protocols have been proposed to protect source location. Instead of transmitting the packet to the base station directly, typical source location protection protocols first transmit packets randomly for a few hops to a phantom location, and then forward the packets to the base station. The problem with these protocols is that the generated phantom locations are usually not only near the true source but also close to each other. As a result, attackers can easily trace a route back to the source from the phantom locations. To address the above problem, we propose a new protocol for source location protection based on limited flooding, named SLP. Compared with existing protocols, SLP can generate phantom locations that are not only far away from the source, but also widely distributed. It improves source location security significantly with low communication cost. We further propose a protocol, namely SLP-E, to protect source location against more powerful attackers with wider fields of vision. The performance of our SLP and SLP-E are validated by both theoretical analysis and simulation results.

  7. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Science.gov (United States)

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…

  8. OSSMETER D3.2 – Report on Source Code Activity Metrics

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and initial prototypes of the tools that are needed for source code activity analysis. It builds upon the Deliverable 3.1 where infra-structure and a domain analysis have been

  9. Shannon versus Kullback-Leibler entropies in nonequilibrium random motion

    International Nuclear Information System (INIS)

    Garbaczewski, Piotr

    2005-01-01

    We analyze dynamical properties of the Shannon information entropy of a continuous probability distribution, which is driven by a standard diffusion process. This entropy choice is confronted with another option, employing the conditional Kullback-Leibler entropy. Both entropies discriminate among various probability distributions, either statically or in the time domain. An asymptotic approach towards equilibrium is typically monotonic in terms of the Kullback entropy. The Shannon entropy time rate need not be positive and is a sensitive indicator of the power transfer processes (removal/supply) due to an active environment. In the case of Smoluchowski diffusions, the Kullback entropy time rate coincides with the Shannon entropy 'production' rate
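
    The contrast between the two entropies can be made concrete for a Smoluchowski-type (Ornstein-Uhlenbeck) relaxation with a Gaussian density, where both quantities have closed forms; the parameters below are illustrative:

    ```python
    import numpy as np

    # Gaussian p_t with mean m(t) = m0*exp(-g*t) and
    # variance s2(t) = s2e + (s20 - s2e)*exp(-2*g*t); equilibrium is N(0, s2e).
    g, m0, s20, s2e = 1.0, 2.0, 4.0, 1.0
    t = np.linspace(0.0, 5.0, 6)
    m = m0 * np.exp(-g * t)
    s2 = s2e + (s20 - s2e) * np.exp(-2.0 * g * t)

    H = 0.5 * np.log(2.0 * np.pi * np.e * s2)                      # Shannon entropy
    KL = np.log(np.sqrt(s2e / s2)) + (s2 + m**2) / (2.0 * s2e) - 0.5

    print(np.round(H, 3))    # decreases here (s20 > s2e): dH/dt < 0 is allowed
    print(np.round(KL, 3))   # decays monotonically toward 0, as expected
    ```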

  10. Do School Uniforms Fit?

    Science.gov (United States)

    White, Kerry A.

    2000-01-01

    In 1994, Long Beach (California) Unified School District began requiring uniforms in all elementary and middle schools. Now, half of all urban school systems and many suburban schools have uniform policies. Research on uniforms' effectiveness is mixed. Tightened dress codes may be just as effective and less litigious. (MLH)

  11. Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm

    OpenAIRE

    Sandeep Kakde; Atish Khobragade; Shrikant Ambatkar; Pranay Nandanwar

    2017-01-01

    For binary fields and long code lengths, Low Density Parity Check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error correction performance and therefore enlarge the design space for communication systems. In this paper, we compare different digital modulation techniques and find that the BPSK modulation technique is better than the other modulation techniques in terms of BER. It also gives the error performance of the LDPC decoder over an AWGN channel using the Min-Sum algori...
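
    The min-sum rule named in the title approximates the sum-product check-node update by a sign-and-minimum computation. A generic sketch of that update (this is the textbook algorithm, not the paper's layered hardware architecture):

    ```python
    import numpy as np

    def min_sum_check_node(llrs):
        """Extrinsic message for each edge: product of the other edges' signs
        times the minimum of the other edges' magnitudes. Assumes nonzero LLRs."""
        llrs = np.asarray(llrs, dtype=float)
        signs = np.sign(llrs)
        total_sign = np.prod(signs)
        mags = np.abs(llrs)
        order = np.argsort(mags)
        min1, min2 = mags[order[0]], mags[order[1]]    # two smallest magnitudes
        others_min = np.where(np.arange(len(llrs)) == order[0], min2, min1)
        return total_sign * signs * others_min         # total_sign*sign_i = sign of the rest

    print(min_sum_check_node([1.5, -0.4, 2.0, -3.1]))
    ```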

  12. Open Genetic Code: on open source in the life sciences

    OpenAIRE

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach of genetic engineering. The first ...

  13. Fundamental limits on beam stability at the Advanced Photon Source

    International Nuclear Information System (INIS)

    Decker, G. A.

    1998-01-01

    Orbit correction is now routinely performed at the few-micron level in the Advanced Photon Source (APS) storage ring. Three diagnostics are presently in use to measure and control both AC and DC orbit motions: broad-band turn-by-turn rf beam position monitors (BPMs), narrow-band switched heterodyne receivers, and photoemission-style x-ray beam position monitors. Each type of diagnostic has its own set of systematic error effects that place limits on the ultimate pointing stability of x-ray beams supplied to users at the APS. Limiting sources of beam motion at present are magnet power supply noise, girder vibration, and thermal timescale vacuum chamber and girder motion. This paper will investigate the present limitations on orbit correction, and will delve into the upgrades necessary to achieve true sub-micron beam stability

  14. Multi-photon absorption limits to heralded single photon sources

    Science.gov (United States)

    Husko, Chad A.; Clark, Alex S.; Collins, Matthew J.; De Rossi, Alfredo; Combrié, Sylvain; Lehoucq, Gaëlle; Rey, Isabella H.; Krauss, Thomas F.; Xiong, Chunle; Eggleton, Benjamin J.

    2013-01-01

    Single photons are of paramount importance to future quantum technologies, including quantum communication and computation. Nonlinear photonic devices using parametric processes offer a straightforward route to generating photons, however additional nonlinear processes may come into play and interfere with these sources. Here we analyse spontaneous four-wave mixing (SFWM) sources in the presence of multi-photon processes. We conduct experiments in silicon and gallium indium phosphide photonic crystal waveguides which display inherently different nonlinear absorption processes, namely two-photon (TPA) and three-photon absorption (ThPA), respectively. We develop a novel model capturing these diverse effects which is in excellent quantitative agreement with measurements of brightness, coincidence-to-accidental ratio (CAR) and second-order correlation function g(2)(0), showing that TPA imposes an intrinsic limit on heralded single photon sources. We build on these observations to devise a new metric, the quantum utility (QMU), enabling further optimisation of single photon sources. PMID:24186400

  15. Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems

    Science.gov (United States)

    2010-12-01

    [This abstract is garbled in the source record; its legible fragments concern laboratories' technical competence for the type of tests and calibrations SCALe undertakes, compliance with ISO/IEC 17025 (ISO/IEC 2005), conformance testing of software systems against CERT secure coding standards, and conformity assessment as defined in ISO/IEC 17000.]

  16. The European source-term evaluation code ASTEC: status and applications, including CANDU plant applications

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Giordano, P.; Kissane, M.P.; Montanelli, T.; Schwinges, B.; Ganju, S.; Dickson, L.

    2004-01-01

    Research on light-water reactor severe accidents (SA) is still required in a limited number of areas in order to confirm accident-management plans. Thus, 49 European organizations have linked their SA research in a durable way through SARNET (Severe Accident Research and management NETwork), part of the European 6th Framework Programme. One goal of SARNET is to consolidate the integral code ASTEC (Accident Source Term Evaluation Code, developed by IRSN and GRS) as the European reference tool for safety studies; SARNET efforts include extending the application scope to reactor types other than PWR (including VVER) such as BWR and CANDU. ASTEC is used in IRSN's Probabilistic Safety Analysis level 2 of 900 MWe French PWRs. An earlier version of ASTEC's SOPHAEROS module, including improvements by AECL, is being validated as the Canadian Industry Standard Toolset code for FP-transport analysis in the CANDU Heat Transport System. Work with ASTEC has also been performed by Bhabha Atomic Research Centre, Mumbai, on IPHWR containment thermal hydraulics. (author)

  17. Open Genetic Code : On open source in the life sciences

    NARCIS (Netherlands)

    Deibel, E.

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life

  18. The discrete-dipole-approximation code ADDA: capabilities and known limitations

    NARCIS (Netherlands)

    Yurkin, M.A.; Hoekstra, A.G.

    2011-01-01

    The open-source code ADDA is described, which implements the discrete dipole approximation (DDA), a method to simulate light scattering by finite 3D objects of arbitrary shape and composition. Besides standard sequential execution, ADDA can run on a multiprocessor distributed-memory system,

  19. Interpretation of UV radiometric measurements of spectrally non-uniform sources

    International Nuclear Information System (INIS)

    Murphy, P.J.; Gardner, D.G.

    1988-01-01

    Narrow bandpass UV radiometers are used in a variety of high-temperature measurement applications. Significant systematic errors, in the form of an apparent wavelength shift in the system response curve, may be introduced when interpreting data obtained from spectrally nonuniform sources. Theoretical calculations, using transmission curves from commercially available narrow bandpass filters, show that the apparent shift in the system spectral response is a function of temperature for a blackbody source. A brief comparison between the theoretical analysis and experimental data is presented

  20. Open Genetic Code: on open source in the life sciences.

    Science.gov (United States)

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach to genetic engineering. The first section discusses the greater flexibility with regard to patenting and the relationship to the introduction of open source in the life sciences. The main argument is that the ownership of knowledge in the life sciences should be reconsidered in the context of the centrality of DNA in informatic formats. This is illustrated by discussing a range of examples of open source models. The second part focuses on open source in synthetic biology as exemplary for the re-materialization of information into food, energy, medicine and so forth. The paper ends by raising the question of whether another kind of alternative might be possible: one that looks at open source as a model for an alternative to the commodification of life that is understood as an attempt to comprehensively remove the restrictions from the usage of DNA in any of its formats.

  1. Monte Carlo modelling of impurity ion transport for a limiter source/sink

    International Nuclear Information System (INIS)

    Stangeby, P.C.; Farrell, C.; Hoskins, S.; Wood, L.

    1988-01-01

    In relating the impurity influx Φ_I(0) (atoms per second) into a plasma from the edge to the central impurity ion density n_I(0) (ions·m⁻³), it is necessary to know the value of τ_I^SOL, the average dwell time of impurity ions in the scrape-off layer. It is usually assumed that τ_I^SOL = L_c/c_s, the hydrogenic dwell time, where L_c is the limiter connection length and c_s is the hydrogenic ion acoustic speed. Monte Carlo ion transport results are reported here which show that, for a wall (uniform) influx, τ_I^SOL is longer than L_c/c_s, while for a limiter influx it is shorter. Thus for a limiter influx n_I(0) is predicted to be smaller than the reference value. Impurities released from the limiter form ever larger 'clouds' of successively higher ionization stages. These are reproduced by the Monte Carlo code, as are the cloud shapes for a localized impurity injection far from the limiter. (author). 23 refs, 18 figs, 6 tabs

  2. Free convection flow of some fractional nanofluids over a moving vertical plate with uniform heat flux and heat source

    Science.gov (United States)

    Azhar, Waqas Ali; Vieru, Dumitru; Fetecau, Constantin

    2017-08-01

    Free convection flow of some water based fractional nanofluids over a moving infinite vertical plate with uniform heat flux and heat source is analytically and graphically studied. Exact solutions for dimensionless temperature and velocity fields, Nusselt numbers, and skin friction coefficients are established in integral form in terms of modified Bessel functions of the first kind. These solutions satisfy all imposed initial and boundary conditions and reduce to the similar solutions for ordinary nanofluids when the fractional parameters tend to one. Furthermore, they reduce to the known solutions from the literature when the plate is fixed and the heat source is absent. The influence of fractional parameters on heat transfer and fluid motion is graphically underlined and discussed. The enhancement of heat transfer in such flows is higher for fractional nanofluids in comparison with ordinary nanofluids. Moreover, the use of fractional models allows us to choose the fractional parameters in order to get a very good agreement between experimental and theoretical results.

  3. Study of MHD stability beta limit in LHD by hierarchy integrated simulation code

    International Nuclear Information System (INIS)

    Sato, M.; Watanabe, K.Y.; Nakamura, Y.

    2008-10-01

    The beta limit set by ideal MHD instabilities (the so-called 'MHD stability beta limit') for helical plasmas is studied with a hierarchy integrated simulation code. A numerical model for the effect of the MHD instabilities is introduced such that the pressure profile is flattened around the rational surface due to the MHD instabilities. The width of the flattening of the pressure gradient is determined from the width of the eigenmode structure of the MHD instabilities. It is assumed that there is an upper limit to the mode number of the MHD instabilities which directly affect the pressure gradient. The upper limit of the mode number is determined using a recent high-beta experiment in the Large Helical Device (LHD). The flattening of the pressure gradient is calculated by the transport module in the hierarchy integrated code. The achievable volume-averaged beta value in the LHD is expected to be beyond 6%. (author)

  4. Ambiguity resolving based on cosine property of phase differences for 3D source localization with uniform circular array

    Science.gov (United States)

    Chen, Xin; Wang, Shuhong; Liu, Zhen; Wei, Xizhang

    2017-07-01

    Localization of a source whose half-wavelength is smaller than the array aperture would suffer from a serious phase ambiguity problem, which also appears in recently proposed phase-based algorithms. In this paper, by using the centro-symmetry of a fixed uniform circular array (UCA) with an even number of sensors, the source's angles and range can be decoupled, and a novel ambiguity-resolving approach is addressed for phase-based algorithms for a source's 3-D localization (azimuth angle, elevation angle, and range). In the proposed method, by using the cosine property of unambiguous phase differences, ambiguity searching and actual-value matching are first employed to obtain actual phase differences and the corresponding source angles. Then, the unambiguous angles are utilized to estimate the source's range based on a one-dimensional multiple signal classification (1-D MUSIC) estimator. Finally, simulation experiments investigate the influence of the search step size and SNR on the performance of ambiguity resolution and demonstrate the satisfactory estimation performance of the proposed method.

  5. Limitations of absolute activity determination of I-125 sources

    Energy Technology Data Exchange (ETDEWEB)

    Pelled, O; German, U; Kol, R; Levinson, S; Weinstein, M; Laichter, Y [Israel Atomic Energy Commission, Beersheba (Israel). Nuclear Research Center-Negev; Alphasy, Z [Ben-Gurion Univ. of the Negev, Beersheba (Israel)

    1996-12-01

    A method for the absolute determination of the activity of an I-125 source, based on the counting rate values of the 27 keV photons and the coincidence photon peak, is given in the literature. It is based on the principle that if a radionuclide emits two photons in coincidence, a measurement of its disintegration rate in the photopeak and in the sum-peak can determine its absolute activity. When using this method, the system calibration is simplified and parameters such as source geometry or source position relative to the detector have no significant influence. However, when the coincidence rate is very low, the application of this method is limited because of the statistics of the coincidence peak (authors).
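
    The coincidence method alluded to here is commonly written, following Eldridge and Crowther, as A = T + N1·N2/N12, with T the total counting rate, N1 and N2 the two photopeak rates and N12 the sum-peak rate. The abstract does not reproduce the formula, so the form and the numbers below are textbook assumptions rather than the authors' exact expressions:

    ```python
    def sum_peak_activity(total_rate, n1, n2, n12):
        """Absolute activity via the sum-peak method: A = T + N1*N2/N12.
        All rates are background-corrected counts per second; the poor
        statistics of a low sum-peak rate N12 dominate the uncertainty."""
        return total_rate + n1 * n2 / n12

    # Hypothetical rates for an I-125 source (both photons near 27 keV):
    print(sum_peak_activity(total_rate=5200.0, n1=1800.0, n2=1750.0, n12=610.0))
    ```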

  6. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks.

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-07-09

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption.
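
    The abstract does not spell out the three proposed schemes; for context, the μTESLA baseline they are compared against rests on a one-way hash key chain, which can be sketched as follows (SHA-256 and the chain length are assumptions):

    ```python
    import hashlib

    def make_key_chain(seed: bytes, length: int):
        """One-way chain: the commitment K0 equals H applied `length` times
        to the seed; keys are later disclosed in reverse order of hashing."""
        chain = [seed]
        for _ in range(length):
            chain.append(hashlib.sha256(chain[-1]).digest())
        chain.reverse()            # chain[0] is the public commitment K0
        return chain

    chain = make_key_chain(b"secret-seed", 5)
    # A receiver holding the authentic commitment verifies a disclosed key:
    assert hashlib.sha256(chain[1]).digest() == chain[0]
    ```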

  7. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks †

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-01-01

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption. PMID:27409616

  8. Development of Malaysian women fertility index: Evidence from Shannon's entropy

    Science.gov (United States)

    Jalil, Wan Aznie Fatihah Wan Abd; Sharif, Shamshuritawati

    2017-11-01

    A fertility rate is a measure of the average number of children a woman will have during her childbearing years. Malaysia is now facing a population crisis and the fertility rate continues to decline. This situation will have implications for the age structure of the population, in which the percentage of senior citizens is higher than the percentage of people aged below 5 years. Malaysia is expected to reach aging-population status by the year 2035. As the aging population has a very long average life expectancy, the government needs to spend a lot on medical costs for senior citizens and needs to increase budgets for pensions. The government may be required to increase tax revenues to support the growing older population. The falling fertility rate requires proper control by relevant authorities, especially through planning and implementation of strategic and effective measures. Hence, this paper aims to develop a fertility index using Shannon's entropy method. The results show that Selangor, Johor, and Sarawak are among the states with the highest values of the fertility index. On the other end of the spectrum, Terengganu, W.P. Labuan, and Perlis are ranked in the last positions according to the fertility index. The information generated from the results in this study can be used as a primary source for the government to design appropriate policies to mitigate dwindling fertility rates among Malaysian women.
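
    Shannon's entropy method typically weights each indicator by its information content: indicators whose values vary more across states carry more weight. A generic sketch of that construction (the indicator matrix and the composite-score step are hypothetical; the paper's exact indicators are not given in the abstract):

    ```python
    import numpy as np

    def entropy_weights(X):
        """Entropy weighting for an (alternatives x indicators) matrix X > 0.
        Lower-entropy (more dispersed) indicators receive larger weights."""
        P = X / X.sum(axis=0)                        # column-wise proportions
        k = 1.0 / np.log(X.shape[0])
        E = -k * np.sum(P * np.log(P), axis=0)       # entropy per indicator
        d = 1.0 - E                                  # degree of diversification
        return d / d.sum()

    # Hypothetical indicator values for four states (two indicators).
    X = np.array([[2.1, 28.0], [1.4, 31.5], [1.9, 29.0], [1.2, 32.0]])
    w = entropy_weights(X)
    scores = (X / X.sum(axis=0)) @ w                 # simple composite index
    print(w, scores)
    ```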

  9. Temperature distribution of a simplified rotor due to a uniform heat source

    Science.gov (United States)

    Welzenbach, Sarah; Fischer, Tim; Meier, Felix; Werner, Ewald; kyzy, Sonun Ulan; Munz, Oliver

    2018-03-01

    In gas turbines, high combustion efficiency as well as operational safety are required. Thus, labyrinth seal systems with honeycomb liners are commonly used. In the case of rubbing events in the seal system, the components can be damaged due to cyclic thermal and mechanical loads. Temperature differences occurring at labyrinth seal fins during rubbing events can be determined by considering a single heat source acting periodically on the surface of a rotating cylinder. Existing literature analysing the temperature distribution on rotating cylindrical bodies due to a stationary heat source is reviewed. The temperature distribution on the circumference of a simplified labyrinth seal fin is calculated using an available and easy-to-implement analytical approach. A finite element model of the simplified labyrinth seal fin is created and the numerical results are compared to the analytical results. The temperature distributions calculated by the analytical and numerical approaches coincide for low sliding velocities, while there are discrepancies in the calculated maximum temperatures at higher sliding velocities. The use of the analytical approach allows a conservative estimate of the maximum temperatures arising in labyrinth seal fins during rubbing events. At the same time, high calculation costs can be avoided.

  10. Building guide : how to build Xyce from source code.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric Richard; Russo, Thomas V.; Schiek, Richard Louis; Sholander, Peter E.; Thornquist, Heidi K.; Mei, Ting; Verley, Jason C.

    2013-08-01

    While Xyce uses the Autoconf and Automake system to configure builds, it is often necessary to perform more than the customary “./configure” builds many open source users have come to expect. This document describes the steps needed to get Xyce built on a number of common platforms.

  11. Low complexity source and channel coding for mm-wave hybrid fiber-wireless links

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Vegas Olmos, Juan José; Pang, Xiaodan

    2014-01-01

    We report on the performance of channel and source coding applied for an experimentally realized hybrid fiber-wireless W-band link. Error control coding performance is presented for a wireless propagation distance of 3 m and 20 km fiber transmission. We report on peak signal-to-noise ratio perfor...

  12. Strengths and limitations of the NATALI code for aerosol typing from multiwavelength Raman lidar observations

    Directory of Open Access Journals (Sweden)

    Nicolae Doina

    2018-01-01

    A Python code was developed to automatically retrieve the aerosol type (and its predominant component in the mixture) from EARLINET’s 3 backscatter and 2 extinction data. The typing relies on Artificial Neural Networks which are trained to identify the most probable aerosol type from a set of mean-layer intensive optical parameters. This paper presents the use and limitations of the code with respect to the quality of the input lidar profiles, as well as the assumptions made in the aerosol model.

  13. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.

  14. Automated Source Code Analysis to Identify and Remove Software Security Vulnerabilities: Case Studies on Java Programs

    OpenAIRE

    Natarajan Meghanathan

    2013-01-01

    The high-level contribution of this paper is to illustrate the development of generic solution strategies to remove software security vulnerabilities that could be identified using automated tools for source code analysis on software programs (developed in Java). We use the Source Code Analyzer and Audit Workbench automated tools, developed by HP Fortify Inc., for our testing purposes. We present case studies involving a file writer program embedded with features for password validation, and ...

  15. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.

  16. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    Science.gov (United States)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third

  17. Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic Sensor Networks

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider the problem of remote vector Gaussian source coding for a wireless acoustic sensor network. Each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information. The node’s measurement...... and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network. We show that for this distributed source coding scenario, one can encode a so-called conditional sufficient statistic of the sources instead of jointly...

  18. Test of Effective Solid Angle code for the efficiency calculation of volume source

    Energy Technology Data Exchange (ETDEWEB)

    Kang, M. Y.; Kim, J. H.; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of); Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    It is hard to determine a full-energy (FE) absorption peak efficiency curve for an arbitrary volume source by experiment. That is why simulation and semi-empirical methods have been preferred so far, and many works have progressed in various ways. Moens et al. introduced the concept of the effective solid angle by considering the attenuation effect of γ-rays in the source, media and detector. This concept is based on a semi-empirical method. An Effective Solid Angle code (ESA code) has been developed over several years by the Applied Nuclear Physics Group at Seoul National University. The ESA code converts an experimental FE efficiency curve determined using a standard point source to that for a volume source. To test the performance of the ESA code, we measured γ-ray point standard sources and voluminous certified reference material (CRM) sources and compared them with the efficiency curves obtained in this study. The 200-1500 keV energy region is fitted well. NIST X-ray mass attenuation coefficient data are currently used to check the effect of linear attenuation only. We will use interaction cross-section data obtained from the XCOM code to check each contributing factor, such as the photoelectric effect, incoherent scattering and coherent scattering, in the future. To minimize the calculation time and simplify the code, optimization of the algorithm is needed.

  19. Unsteady MHD flow of a dusty nanofluid past a vertical stretching surface with non-uniform heat source/sink

    Directory of Open Access Journals (Sweden)

    C. Sulochana

    2016-02-01

    We analyzed the momentum and heat transfer characteristics of unsteady MHD flow of a dusty nanofluid over a vertical stretching surface in the presence of volume fractions of dust and nanoparticles with a non-uniform heat source/sink. We considered two types of nanofluids, namely Ag-water and Cu-water, embedded with conducting dust particles. The governing equations are transformed into nonlinear ordinary differential equations by using a similarity transformation and solved numerically using a shooting technique. The effects of non-dimensional governing parameters on velocity and temperature profiles for the fluid and dust phases are discussed and presented through graphs. Also, the skin friction coefficient and Nusselt number are discussed and presented for the two dusty nanofluids separately in tabular form. Results indicate that an increase in the volume fraction of dust particles enhances the heat transfer in the Cu-water nanofluid compared with the Ag-water nanofluid, and a rise in the volume fraction of nanoparticles shows uniform heat transfer in both Cu-water and Ag-water nanofluids.

  20. How phosphorus limitation can control climatic gas sources and sinks

    Science.gov (United States)

    Gypens, Nathalie; Borges, Alberto V.; Ghyoot, Caroline

    2017-04-01

    Since the 1950s, anthropogenic activities have severely increased river nutrient loads in European coastal areas. Subsequent implementation of nutrient reduction policies has considerably reduced phosphorus (P) loads from the mid-1980s, while nitrogen (N) loads were maintained, inducing a P limitation of phytoplankton growth in many eutrophied coastal areas such as the Southern Bight of the North Sea (SBNS). When dissolved inorganic phosphorus (DIP) is limiting, most phytoplankton organisms are able to indirectly acquire P from dissolved organic P (DOP). We investigate the impact of DOP use on phytoplankton production and on atmospheric fluxes of CO2 and dimethylsulfide (DMS) in the SBNS from 1951 to 2007 using an extended version of the R-MIRO-BIOGAS model. This model includes a description of the ability of phytoplankton organisms to use DOP as a source of P. Results show that primary production can increase by up to 70% due to DOP uptake in limiting DIP conditions. Consequently, simulated DMS emissions double while CO2 emissions to the atmosphere decrease, relative to the reference simulation without DOP uptake. At the end of the simulated period (late 2000s), the net direction of the annual air-sea CO2 flux changed from a source to a sink for atmospheric CO2 in response to the use of DOP and the increase of primary production.

  1. The implementation of a toroidal limiter model into the gyrokinetic code ELMFIRE

    Energy Technology Data Exchange (ETDEWEB)

    Leerink, S.; Janhunen, S.J.; Kiviniemi, T.P.; Nora, M. [Euratom-Tekes Association, Helsinki University of Technology (Finland); Heikkinen, J.A. [Euratom-Tekes Association, VTT, P.O. Box 1000, FI-02044 VTT (Finland); Ogando, F. [Universidad Nacional de Educacion a Distancia, Madrid (Spain)

    2008-03-15

    The ELMFIRE full nonlinear gyrokinetic simulation code has been developed for calculations of plasma evolution and dynamics of turbulence in tokamak geometry. The code is applicable to calculations of strong perturbations in the particle distribution function, rapid transients and steep gradients in plasma. Benchmarking against experimental reflectometry data from the FT2 tokamak is discussed, and a model for comparison and for studying poloidal velocity is presented in this paper. To make the ELMFIRE code suitable for scrape-off layer simulations, a simplified toroidal limiter model has been implemented. The model is discussed and first results are presented. (copyright 2008 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  2. Use of source term code package in the ELEBRA MX-850 system

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-12-01

    The implementation of the source term code package on the ELEBRA MX-850 system is presented. The source term is formed when radioactive materials generated in the nuclear fuel leak toward the containment and the environment external to the reactor containment. The version implemented on the ELEBRA system is composed of five codes: MARCH 3, TRAPMELT 3, THCCA, VANESA and NAVA. The original example case was used. The example consists of a small LOCA accident in a PWR-type reactor. A sensitivity study for the TRAPMELT 3 code was carried out, modifying the 'TIME STEP' to estimate the CPU processing time for executing the original example case. (M.C.K.) [pt

  3. Eu-NORSEWInD - Assessment of Viability of Open Source CFD Code for the Wind Industry

    DEFF Research Database (Denmark)

    Stickland, Matt; Scanlon, Tom; Fabre, Sylvie

    2009-01-01

    Part of the overall NORSEWInD project is the use of LiDAR remote sensing (RS) systems mounted on offshore platforms to measure wind velocity profiles at a number of locations offshore. The data acquired from the offshore RS measurements will be fed into a large and novel wind speed dataset suitab...... between the results of simulations created by the commercial code FLUENT and the open source code OpenFOAM. An assessment of the ease with which the open source code can be used is also included....

  4. Controlled Synthesis of Uniform Cobalt Phosphide Hyperbranched Nanocrystals Using Tri- n -octylphosphine Oxide as a Phosphorus Source

    KAUST Repository

    Zhang, Haitao; Ha, Don-Hyung; Hovden, Robert; Kourkoutis, Lena Fitting; Robinson, Richard D.

    2011-01-01

    A new method to produce hyperbranched Co₂P nanocrystals that are uniform in size, shape, and symmetry was developed. In this reaction tri-n-octylphosphine oxide (TOPO) was used as both a solvent and a phosphorus source. The reaction exhibits a novel monomer-saturation-dependent tunability between Co metal nanoparticle (NP) and Co₂P NP products. The morphology of Co₂P can be controlled from sheaflike structures to hexagonal symmetric structures by varying the concentration of the surfactant. This unique product differs significantly from other reported hyperbranched nanocrystals in that the highly anisotropic shapes can be stabilized as the majority shape (>84%). This is the first known use of TOPO as a reagent as well as a coordinating background solvent in NP synthesis. © 2011 American Chemical Society.

  5. Controlled Synthesis of Uniform Cobalt Phosphide Hyperbranched Nanocrystals Using Tri- n -octylphosphine Oxide as a Phosphorus Source

    KAUST Repository

    Zhang, Haitao

    2011-01-12

    A new method to produce hyperbranched Co₂P nanocrystals that are uniform in size, shape, and symmetry was developed. In this reaction tri-n-octylphosphine oxide (TOPO) was used as both a solvent and a phosphorus source. The reaction exhibits a novel monomer-saturation-dependent tunability between Co metal nanoparticle (NP) and Co₂P NP products. The morphology of Co₂P can be controlled from sheaflike structures to hexagonal symmetric structures by varying the concentration of the surfactant. This unique product differs significantly from other reported hyperbranched nanocrystals in that the highly anisotropic shapes can be stabilized as the majority shape (>84%). This is the first known use of TOPO as a reagent as well as a coordinating background solvent in NP synthesis. © 2011 American Chemical Society.

  6. Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.

    Science.gov (United States)

    Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2016-01-01

    This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues with building full-text search queries from the combinations of these words. The queries are then run against all the ICD-10 codes until the code in question is returned as the match with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
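
    The evaluation procedure described above can be sketched with a toy relevance scorer standing in for a real engine; the scoring function and the three-code catalogue are illustrative assumptions:

    ```python
    from itertools import combinations

    def overlap_score(query_words, text):
        """Toy relevance score: fraction of query words present in the text."""
        words = set(text.lower().split())
        return sum(w in words for w in query_words) / len(query_words)

    def min_words_to_match(target, codes):
        """Smallest word combination from the target's description that makes
        the target the unique top-scoring entry."""
        desc_words = codes[target].lower().split()
        for r in range(1, len(desc_words) + 1):
            for query in combinations(desc_words, r):
                scores = {c: overlap_score(query, d) for c, d in codes.items()}
                top = max(scores, key=scores.get)
                # require a strict winner so ties do not count as a match
                if top == target and list(scores.values()).count(scores[top]) == 1:
                    return r, query
        return None

    icd = {"J45": "asthma",
           "J44": "chronic obstructive pulmonary disease",
           "I10": "essential primary hypertension"}
    print(min_words_to_match("J44", icd))   # -> (1, ('chronic',))
    ```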

  7. Analytic and Unambiguous Phase-Based Algorithm for 3-D Localization of a Single Source with Uniform Circular Array

    Directory of Open Access Journals (Sweden)

    Le Zuo

    2018-02-01

    This paper presents an analytic algorithm for estimating the three-dimensional (3-D) localization of a single source with uniform circular array (UCA) interferometers. Fourier transforms are exploited to expand the phase distribution of a single source, and the localization problem is reformulated as an equivalent spectrum manipulation problem. The 3-D parameters are decoupled to different spectrums in the Fourier domain. Algebraic relations are established between the 3-D localization parameters and the Fourier spectrums. The Fourier sampling theorem ensures that the minimum element number for 3-D localization of a single source with a UCA is five. Accuracy analysis provides mathematical insights into the 3-D localization algorithm, showing that a larger number of elements gives higher estimation accuracy. In addition, the phase-based high-order difference invariance (HODI) property of a UCA is found and exploited to realize phase range compression. Following phase range compression, ambiguity resolution is addressed by the HODI of a UCA. A major advantage of the algorithm is that the ambiguity resolution and 3-D localization estimation are both analytic and are processed simultaneously, hence computationally efficient. Numerical simulations and experimental results are provided to verify the effectiveness of the proposed 3-D localization algorithm.
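
    The core Fourier-domain idea can be illustrated with a short far-field Python sketch: for a single source, the inter-element phase of a UCA is a sinusoid in the element angle, so the first spatial DFT bin carries the azimuth in its phase and the elevation in its magnitude. The geometry values below are arbitrary assumptions, the phases are taken as ideal (unwrapped) ones, and range estimation and the mod-2π ambiguity handled by the paper's HODI step are left out.

```python
import numpy as np

N, r, lam = 8, 0.5, 0.3                       # elements, radius (m), wavelength (m)
beta = 2 * np.pi / lam
theta, az = np.deg2rad(40), np.deg2rad(110)   # true elevation and azimuth

gamma = 2 * np.pi * np.arange(N) / N          # element angular positions
# far-field element phases: a sinusoid in gamma with amplitude beta*r*sin(theta)
phase = beta * r * np.sin(theta) * np.cos(az - gamma)

C1 = np.fft.fft(phase)[1]                     # first spatial harmonic
az_hat = (-np.angle(C1)) % (2 * np.pi)        # azimuth from the phase of C1
theta_hat = np.arcsin(2 * np.abs(C1) / (N * beta * r))  # elevation from |C1|

print(np.rad2deg(az_hat), np.rad2deg(theta_hat))   # ~110, ~40
```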

  8. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories

  9. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories.

  10. Lysimeter data as input to performance assessment source term codes

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.; Rogers, R.D.; Sullivan, T.

    1992-01-01

    The Field Lysimeter Investigation: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-II prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on the survivability of waste forms in a disposal environment. In this paper, radionuclide releases from waste forms in the first seven years of sampling are presented and discussed. The application of lysimeter data to performance assessment source term models is also presented. Initial results from the use of the data in two models are discussed.

  11. SCATTER: Source and Transport of Emplaced Radionuclides: Code documentation

    International Nuclear Information System (INIS)

    Longsine, D.E.

    1987-03-01

    SCATTER simulates several processes leading to the release of radionuclides to the site subsystem and then simulates transport via the groundwater of the released radionuclides to the biosphere. The processes accounted for to quantify release rates to a ground-water migration path include radioactive decay and production, leaching, solubilities, and the mixing of particles with incoming uncontaminated fluid. Several decay chains of arbitrary length can be considered simultaneously. The release rates then serve as source rates to a numerical technique which solves convective-dispersive transport for each decay chain. The decay chains are allowed to have branches and each member can have a different retardation factor. Results are cast as radionuclide discharge rates to the accessible environment.

  12. An efficient chaotic source coding scheme with variable-length blocks

    International Nuclear Information System (INIS)

    Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong

    2011-01-01

    An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding. (general)
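
    The equivalence between interval refinement and iterating a piecewise linear map can be seen in a short Python sketch. This is a floating-point toy assuming a Bernoulli(p) source and a skewed binary map; the message is kept short because doubles stand in for the paper's register-length blocks.

```python
def encode(bits, p):
    """Refine [0,1) symbol by symbol; the '0' branch gets measure p.
    Equivalent to iterating the inverse of a skewed piecewise linear map."""
    lo, hi = 0.0, 1.0
    for b in bits:
        mid = lo + (hi - lo) * p
        lo, hi = (lo, mid) if b == 0 else (mid, hi)
    return (lo + hi) / 2          # any point of the final interval will do

def decode(x, p, n):
    """Forward iteration of the skewed map recovers the symbols."""
    bits = []
    for _ in range(n):
        if x < p:
            bits.append(0); x = x / p
        else:
            bits.append(1); x = (x - p) / (1 - p)
    return bits

msg = [0, 1, 1, 0, 0, 0, 1, 0, 0, 0]
x = encode(msg, p=0.7)
assert decode(x, 0.7, len(msg)) == msg
```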

  13. Performance Estimation for Lowpass Ternary Filters

    Directory of Open Access Journals (Sweden)

    Brenton Steele

    2003-11-01

    Ternary filters have tap values limited to −1, 0, or +1. This restriction in tap values greatly simplifies the multipliers required by the filter, making ternary filters very well suited to hardware implementations. Because they incorporate coarse quantisation, their performance is typically limited by tap quantisation error. This paper derives formulae for estimating the achievable performance of lowpass ternary filters, thereby allowing the number of computationally intensive design iterations to be reduced. Motivated by practical communications systems requirements, the performance measure which is used is the worst-case stopband attenuation.
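
    The worst-case stopband attenuation that the paper estimates analytically can be measured numerically for any candidate tap set. A minimal numpy sketch, with arbitrary example taps and band edges:

```python
import numpy as np

taps = np.array([0, 1, 1, 1, 1, 1, 1, 1, 0])   # a crude ternary lowpass filter
w = np.linspace(0, np.pi, 4096)                 # frequency grid

# |H(e^{jw})| evaluated directly from the tap values
H = np.abs(np.exp(-1j * np.outer(w, np.arange(len(taps)))) @ taps)

passband = H[w <= 0.1 * np.pi].min()            # weakest passband response
stopband = H[w >= 0.4 * np.pi].max()            # strongest stopband leakage
print(20 * np.log10(stopband / passband), "dB worst-case stopband")
```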

  14. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization.

    Science.gov (United States)

    Yang, Xinyu; Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is to identify the most likely author of a given sample among a set of candidate known authors. It can not only be applied to discover the original author of plain text, such as novels, blogs, emails, posts etc., but can also be used to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to solving authorship disputes or software plagiarism detection. This paper aims to propose a new method to identify the programmer of Java source code samples with a higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. Then these metrics are input to the neural network for supervised learning, the weights of which are output by the PSO and BP hybrid algorithm. The effectiveness of the proposed method is evaluated on a collected dataset with 3,022 Java files belonging to 40 authors. Experiment results show that the proposed method achieves 91.060% accuracy. A comparison with previous work on authorship attribution of source code for the Java language illustrates that the proposed method outperforms the others overall, also with an acceptable overhead.
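
    The feature-extraction step can be sketched in Python. The metrics below are plausible lexical/layout stand-ins for illustration only; the paper defines its own 19-dimension set.

```python
def style_metrics(java_source: str):
    """Compute a few lexical/layout style features from Java source text."""
    lines = java_source.splitlines()
    nonblank = [l for l in lines if l.strip()]
    return {
        "blank_line_ratio": 1 - len(nonblank) / max(len(lines), 1),
        "avg_line_length": sum(map(len, nonblank)) / max(len(nonblank), 1),
        "tab_indent_ratio": sum(l.startswith("\t") for l in nonblank)
                            / max(len(nonblank), 1),
        "brace_on_own_line": sum(l.strip() == "{" for l in lines),
        "comment_ratio": sum("//" in l for l in lines) / max(len(lines), 1),
    }

print(style_metrics("class A {\n\tint x; // field\n}\n"))
```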

  15. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Directory of Open Access Journals (Sweden)

    Pierre Siohan

    2005-05-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  16. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Science.gov (United States)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  17. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  18. Limits and Prospects of Renewable Energy Sources in Italy

    International Nuclear Information System (INIS)

    Coiante, D.

    2008-01-01

    The Italian energy balance for the year 2005 is discussed with particular attention to renewable energy production. The potentials of renewable sources are evaluated in terms of the energy density that can be obtained from the occupied plant area. About 20,000 km² of sunny barren lands are present in the South of Italy, particularly suitable for photovoltaic plants, corresponding to a potential production of 144 Mtep of primary energy. Therefore, in theory, the photovoltaic energy potential is comparable with the energy balance. The grid connection limit due to the intermittent power generation of photovoltaic and wind energy systems is considered in relation to the stability of the grid power level. Assuming a 25% maximum grid penetration of intermittent power with respect to the capacity of active thermoelectric generators, the renewable energy contribution amounts to about 2% of the annual energy balance. Against expectations of a larger contribution, the practical result is that the renewable energy production of present systems is marginal, unsuitable for counteracting the global climate crisis. The conclusion is that, to exploit the large renewable energy potential, it is necessary to equip the plants with an energy storage system able to overcome the source intermittency. Without this improvement, the expectations on renewable energy sources could be disappointed.

  19. Comparison of DT neutron production codes MCUNED, ENEA-JSI source subroutine and DDT

    Energy Technology Data Exchange (ETDEWEB)

    Čufar, Aljaž, E-mail: aljaz.cufar@ijs.si [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Lengar, Igor; Kodeli, Ivan [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Milocco, Alberto [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Sauvan, Patrick [Departamento de Ingeniería Energética, E.T.S. Ingenieros Industriales, UNED, C/Juan del Rosal 12, 28040 Madrid (Spain); Conroy, Sean [VR Association, Uppsala University, Department of Physics and Astronomy, PO Box 516, SE-75120 Uppsala (Sweden); Snoj, Luka [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)

    2016-11-01

    Highlights: • Results of three codes capable of simulating accelerator-based DT neutron generators were compared on a simple model where only a thin target made of a mixture of titanium and tritium is present. Two typical deuteron beam energies, 100 keV and 250 keV, were used in the comparison. • Comparisons of the angular dependence of the total neutron flux and spectrum as well as the neutron spectrum of all the neutrons emitted from the target show general agreement of the results but also some noticeable differences. • A comparison of figures of merit of the calculations using different codes showed that the computational time necessary to achieve the same statistical uncertainty can vary by more than a factor of 30 when different codes for the simulation of the DT neutron generator are used. - Abstract: As the DT fusion reaction produces neutrons with energies significantly higher than in fission reactors, special fusion-relevant benchmark experiments are often performed using DT neutron generators. However, commonly used Monte Carlo particle transport codes such as MCNP or TRIPOLI cannot be directly used to analyze these experiments since they do not have the capabilities to model the production of DT neutrons. Three of the available approaches to model the DT neutron generator source are the MCUNED code, the ENEA-JSI DT source subroutine and the DDT code. The MCUNED code is an extension of the well-established and validated MCNPX Monte Carlo code. The ENEA-JSI source subroutine was originally prepared for the modelling of the FNG experiments using different versions of the MCNP code (−4, −5, −X) and was later extended to allow the modelling of both DT and DD neutron sources. The DDT code prepares the DT source definition file (SDEF card in MCNP) which can then be used in different versions of the MCNP code. In the paper the methods for the simulation of the DT neutron production used in the codes are briefly described and compared for the case of a

  20. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes

    International Nuclear Information System (INIS)

    Etienne, Zachariah B; Paschalidis, Vasileios; Haas, Roland; Mösta, Philipp; Shapiro, Stuart L

    2015-01-01

    In the extreme violence of merger and mass accretion, compact objects like black holes and neutron stars are thought to launch some of the most luminous outbursts of electromagnetic and gravitational wave energy in the Universe. Modeling these systems realistically is a central problem in theoretical astrophysics, but has proven extremely challenging, requiring the development of numerical relativity codes that solve Einstein's equations for the spacetime, coupled to the equations of general relativistic (ideal) magnetohydrodynamics (GRMHD) for the magnetized fluids. Over the past decade, the Illinois numerical relativity (ILNR) group's dynamical spacetime GRMHD code has proven itself as a robust and reliable tool for theoretical modeling of such GRMHD phenomena. However, the code was written ‘by experts and for experts’ of the code, with a steep learning curve that would severely hinder community adoption if it were open-sourced. Here we present IllinoisGRMHD, which is an open-source, highly extensible rewrite of the original closed-source GRMHD code of the ILNR group. Reducing the learning curve was the primary focus of this rewrite, with the goal of facilitating community involvement in the code's use and development, as well as the minimization of human effort in generating new science. IllinoisGRMHD also saves computer time, generating roundoff-precision identical output to the original code on adaptive-mesh grids, but nearly twice as fast at scales of hundreds to thousands of cores. (paper)

  1. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Fossorier Marc

    2007-01-01

    This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  2. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Marc Fossorier

    2007-01-01

    This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.
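
    As a quick numerical companion to the uncoded baseline, the standard nearest-neighbour approximation SER ≈ 2Q(√(2 Es/N0) · sin(π/M)) for M-PSK over AWGN can be evaluated in a few lines of Python:

```python
import math

def q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def mpsk_ser(es_n0_db, M):
    """Nearest-neighbour SER approximation for M-PSK over AWGN."""
    es_n0 = 10 ** (es_n0_db / 10)
    return 2 * q(math.sqrt(2 * es_n0) * math.sin(math.pi / M))

for M in (4, 8, 16):
    print(M, mpsk_ser(10.0, M))   # larger constellations need more SNR
```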

  3. Open-source tool for automatic import of coded surveying data to multiple vector layers in GIS environment

    Directory of Open Access Journals (Sweden)

    Eva Stopková

    2016-12-01

    This paper deals with a tool that enables import of coded data in a single text file to more than one vector layer (including attribute tables), together with automatic drawing of line and polygon objects and with optional conversion to CAD. The Python script v.in.survey is available as an add-on for the open-source software GRASS GIS (GRASS Development Team). The paper describes a case study based on surveying at the archaeological mission at Tell-el Retaba (Egypt). Advantages of the tool (e.g. significant optimization of surveying work) and its limits (demands on keeping conventions for the points' names coding) are discussed here as well. Possibilities of future development are suggested (e.g. generalization of points' names coding or more complex attribute table creation).

  4. An iOS implementation of the Shannon switching game

    OpenAIRE

    Macík, Miroslav

    2013-01-01

    The Shannon switching game is a logical graph game for two players. The game was created by the American mathematician Claude Shannon. iOS is an operating system designed for the iPhone cellular phone, the iPod music player and the iPad tablet. The thesis describes existing implementations of the game and also a specific implementation for the iOS operating system created as a part of this work. This implementation allows you to play against a virtual opponent and also supports a multiplayer game between two players...

  5. Shannon entropy: A study of confined hydrogenic-like atoms

    Science.gov (United States)

    Nascimento, Wallas S.; Prudente, Frederico V.

    2018-01-01

    The Shannon entropy in the atomic, molecular and chemical physics context is presented by using as test cases the hydrogenic-like atoms H_c, He_c^+ and Li_c^2+ confined by an impenetrable spherical box. Novel expressions for the entropic uncertainty relation and the Shannon entropies S_r and S_p are proposed to ensure their physically dimensionless character. The electronic ground state energy and the quantities S_r, S_p and S_t are calculated for the hydrogenic-like atoms at different confinement radii by using a variational method. The global behavior of these quantities and different conjectures are analyzed. The results are compared, when available, with those previously published.
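
    As a numerical reference point for the unconfined limit that such results approach, the position-space Shannon entropy of the free hydrogen 1s state in atomic units is S_r = −∫ ρ ln ρ d³r = 3 + ln π ≈ 4.1447, which is easy to check with a one-dimensional radial integral in Python:

```python
import numpy as np

# |psi_1s|^2 in atomic units: rho(r) = exp(-2r)/pi
r = np.linspace(1e-6, 30.0, 200_000)
rho = np.exp(-2 * r) / np.pi
integrand = -rho * np.log(rho) * 4 * np.pi * r**2   # radial entropy density

# trapezoidal rule by hand (sidesteps NumPy-version quirks of np.trapz)
S_r = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(r))
print(S_r, 3 + np.log(np.pi))    # both ~ 4.14473
```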

  6. CONSTRUCTION OF REGULAR LDPC LIKE CODES BASED ON FULL RANK CODES AND THEIR ITERATIVE DECODING USING A PARITY CHECK TREE

    Directory of Open Access Journals (Sweden)

    H. Prashantha Kumar

    2011-09-01

    Low density parity check (LDPC) codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical Shannon limit for a memoryless channel. LDPC codes are finding increasing use in applications like LTE networks, digital television, high density data storage systems, deep space communication systems etc. Several algebraic and combinatorial methods are available for constructing LDPC codes. In this paper we discuss a novel low complexity algebraic method for constructing regular LDPC-like codes derived from full rank codes. We demonstrate that by employing these codes over AWGN channels, coding gains in excess of 2 dB over un-coded systems can be realized when soft iterative decoding using a parity check tree is employed.
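
    The flavour of iterative parity-check decoding can be conveyed with a hard-decision bit-flipping toy in Python (the paper's full-rank construction and soft tree-based decoder are not reproduced; H below is just a small example matrix):

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def bit_flip_decode(y, H, max_iter=20):
    y = y.copy()
    for _ in range(max_iter):
        syndrome = H @ y % 2
        if not syndrome.any():
            return y                      # all parity checks satisfied
        # flip the bit that participates in the most unsatisfied checks
        votes = H.T @ syndrome
        y[np.argmax(votes)] ^= 1
    return y

codeword = np.zeros(6, dtype=int)         # the all-zero word is a codeword
received = codeword.copy()
received[2] ^= 1                          # inject a single bit error
print(bit_flip_decode(received, H))       # recovers the all-zero word
```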

  7. Overcoming a limitation of deterministic dense coding with a nonmaximally entangled initial state

    International Nuclear Information System (INIS)

    Bourdon, P. S.; Gerjuoy, E.

    2010-01-01

    Under two-party deterministic dense coding, Alice communicates (perfectly distinguishable) messages to Bob via a qudit from a pair of entangled qudits in pure state |Ψ>. If |Ψ> represents a maximally entangled state (i.e., each of its Schmidt coefficients is √(1/d)), then Alice can convey to Bob one of d^2 distinct messages. If |Ψ> is not maximally entangled, then Ji et al. [Phys. Rev. A 73, 034307 (2006)] have shown that under the original deterministic dense-coding protocol, in which messages are encoded by unitary operations performed on Alice's qudit, it is impossible to encode d^2 − 1 messages. Encoding d^2 − 2 messages is possible; see, for example, the numerical studies by Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. Answering a question raised by Wu et al. [Phys. Rev. A 73, 042311 (2006)], we show that when |Ψ> is not maximally entangled, the communications limit of d^2 − 2 messages persists even when the requirement that Alice encode by unitary operations on her qudit is weakened to allow encoding by more general quantum operators. We then describe a dense-coding protocol that can overcome this limitation with high probability, assuming the largest Schmidt coefficient of |Ψ> is sufficiently close to √(1/d). In this protocol, d^2 − 2 of the messages are encoded via unitary operations on Alice's qudit, and the final (d^2 − 1)-th message is encoded via a non-trace-preserving quantum operation.
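
    The d^2 counting for the maximally entangled case is easy to verify numerically. A numpy sketch for d = 2: applying I, X, Z and XZ to Alice's half of a Bell pair yields four mutually orthogonal, hence perfectly distinguishable, two-qubit states (with a non-maximally entangled |Ψ> the same four states are no longer all orthogonal, which is where the d^2 − 2 limitation enters):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)        # (|00> + |11>)/sqrt(2)

# Alice encodes by acting on her qubit only: (U tensor I) |psi>
states = [np.kron(U, I) @ psi for U in (I, X, Z, X @ Z)]
gram = np.array([[abs(a.conj() @ b) for b in states] for a in states])
print(np.round(gram, 12))    # identity matrix -> 4 distinguishable messages
```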

  8. Revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources

    International Nuclear Information System (INIS)

    Wheatley, J. S.

    2004-01-01

    The revised Code of Conduct on the Safety and Security of Radioactive Sources is aimed primarily at Governments, with the objective of achieving and maintaining a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations; and through the fostering of international co-operation. It focuses on sealed radioactive sources and provides guidance on legislation, regulations and the regulatory body, and import/export controls. Nuclear materials (except for sources containing 239Pu), as defined in the Convention on the Physical Protection of Nuclear Materials, are not covered by the revised Code, nor are radioactive sources within military or defence programmes. An earlier version of the Code was published by IAEA in 2001. At that time, agreement was not reached on a number of issues, notably those relating to the creation of comprehensive national registries for radioactive sources, obligations of States exporting radioactive sources, and the possibility of unilateral declarations of support. The need to further consider these and other issues was highlighted by the events of 11th September 2001. Since then, the IAEA's Secretariat has been working closely with Member States and relevant International Organizations to achieve consensus. The text of the revised Code was finalized at a meeting of technical and legal experts in August 2003, and it was submitted to IAEA's Board of Governors for approval in September 2003, with a recommendation that the IAEA General Conference adopt it and encourage its wide implementation. The IAEA General Conference, in September 2003, endorsed the revised Code and urged States to work towards following the guidance contained within it. This paper summarizes the history behind the revised Code, its content and the outcome of the discussions within the IAEA Board of Governors and General Conference. (Author) 8 refs

  9. 40 CFR 401.12 - Law authorizing establishment of effluent limitations guidelines for existing sources, standards...

    Science.gov (United States)

    2010-07-01

    ... effluent limitations guidelines for existing sources, standards of performance for new sources and... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GENERAL PROVISIONS § 401.12 Law authorizing establishment of effluent limitations guidelines for existing sources, standards of performance...

  10. Development of Coupled Interface System between the FADAS Code and a Source-term Evaluation Code XSOR for CANDU Reactors

    International Nuclear Information System (INIS)

    Son, Han Seong; Song, Deok Yong; Kim, Ma Woong; Shin, Hyeong Ki; Lee, Sang Kyu; Kim, Hyun Koon

    2006-01-01

    An accident prevention system is essential to the industrial security of the nuclear industry. Thus, a more effective accident prevention system will be helpful to promote safety culture as well as to acquire public acceptance of the nuclear power industry. The FADAS (Following Accident Dose Assessment System), which is a part of the Computerized Advisory System for a Radiological Emergency (CARE) system in KINS, is used for prevention against nuclear accidents. In order to make the FADAS system more effective for CANDU reactors, it is necessary to develop various accident scenarios and a reliable database of source terms. This study introduces the construction of the coupled interface system between the FADAS and the source-term evaluation code aimed at improving the applicability of the CANDU Integrated Safety Analysis System (CISAS) for CANDU reactors.

  11. Joint source/channel coding of scalable video over noisy channels

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, G.; Zakhor, A. [Department of Electrical Engineering and Computer Sciences University of California Berkeley, California94720 (United States)

    1997-01-01

    We propose an optimal bit allocation strategy for a joint source/channel video codec over a noisy channel when the channel state is assumed to be known. Our approach is to partition source and channel coding bits in such a way that the expected distortion is minimized. The particular source coding algorithm we use is rate scalable and is based on 3D subband coding with multi-rate quantization. We show that using this strategy, transmission of video over very noisy channels still renders acceptable visual quality, and outperforms schemes that use equal error protection only. The flexibility of the algorithm also permits the bit allocation to be selected optimally when the channel state is in the form of a probability distribution instead of a deterministic state. © 1997 American Institute of Physics.

  12. Remodularizing Java Programs for Improved Locality of Feature Implementations in Source Code

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

    Explicit traceability between features and source code is known to help programmers understand and modify programs during maintenance tasks. However, the complex relations between features and their implementations are not evident from the source code of object-oriented Java programs. Consequently, the implementations of individual features are difficult to locate, comprehend, and modify in isolation. In this paper, we present a novel remodularization approach that improves the representation of features in the source code of Java programs. Both forward- and reverse restructurings are supported through on-demand bidirectional restructuring between feature-oriented and object-oriented decompositions. The approach includes a feature location phase based on tracing program execution, a feature representation phase that reallocates classes into a new package structure based on single...

  13. Properties of classical and quantum Jensen-Shannon divergence

    NARCIS (Netherlands)

    J. Briët (Jop); P. Harremoës (Peter)

    2009-01-01

    Jensen-Shannon divergence (JD) is a symmetrized and smoothed version of the most important divergence measure of information theory, Kullback divergence. As opposed to Kullback divergence it determines in a very direct way a metric; indeed, it is the square of a metric. We consider a

  14. Shannon information entropy in heavy-ion collisions

    Science.gov (United States)

    Ma, Chun-Wang; Ma, Yu-Gang

    2018-03-01

    The general idea of information entropy provided by C.E. Shannon "hangs over everything we do" and can be applied to a great variety of problems once the connection between a distribution and the quantities of interest is found. The Shannon information entropy essentially quantifies the information of a quantity with its specific distribution, and information entropy based methods have been deeply developed in many scientific areas including physics. The dynamical properties of the heavy-ion collision (HIC) process make it difficult and complex to study nuclear matter and its evolution, for which Shannon information entropy theory can provide new methods and observables to understand the physical phenomena both theoretically and experimentally. To better understand the processes of HICs, the main characteristics of typical models, including quantum molecular dynamics models, thermodynamics models, and statistical models, etc., are briefly introduced. Typical applications of Shannon information theory in HICs are collected, which cover the chaotic behavior in the branching process of hadron collisions, the liquid-gas phase transition in HICs, and the isobaric difference scaling phenomenon for intermediate mass fragments produced in HICs of neutron-rich systems. Even though the present applications in heavy-ion collision physics are still relatively simple, they shed light on the key questions we are seeking answers to. It is suggested to further develop the information entropy methods in nuclear reaction models, as well as to develop new analysis methods to study the properties of nuclear matter in HICs, especially the evolution of the dynamical system.

  15. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    Anon.

    2001-01-01

    The objective of the code of conduct is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost. (N.C.)

  16. Particle-in-cell simulation of electron trajectories and irradiation uniformity in an annular cathode high current pulsed electron beam source

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Wei; Wang, Langping, E-mail: aplpwang@hit.edu.cn; Zhou, Guangxue; Wang, Xiaofeng

    2017-02-01

    Highlights: • The transmission process of electrons and the irradiation uniformity were simulated. • The influence of the irradiation parameters on irradiation uniformity is discussed. • High irradiation uniformity can be obtained in a wide processing window. - Abstract: In order to study electron trajectories in an annular cathode high current pulsed electron beam (HCPEB) source based on carbon fiber bunches, the transmission process of electrons emitted from the annular cathode was simulated using a particle-in-cell model with Monte Carlo collisions (PIC-MCC). The simulation results show that the intense flow of electrons emitted from the annular cathode is expanded during the transmission process, and the uniformity of the electron distribution is improved in the transportation process. The irradiation current decreases with the irradiation distance and the pressure, and increases with the negative voltage. In addition, when the irradiation distance and the cathode voltage are larger than 40 mm and −15 kV, respectively, a uniform irradiation current distribution along the circumference of the anode can be obtained. The simulation results show that good irradiation uniformity of circular components can be achieved by this annular cathode HCPEB source.

  17. MOCARS: a Monte Carlo code for determining the distribution and simulation limits

    International Nuclear Information System (INIS)

    Matthews, S.D.

    1977-07-01

    MOCARS is a computer program designed for the INEL CDC 76-173 operating system to determine the distribution and simulation limits for a function by Monte Carlo techniques. The code randomly samples data from any of 12 user-specified distributions and then either evaluates the cut set system unavailability or a user-specified function with the sample data. After the data are ordered, the values at various quantiles and the associated confidence bounds are calculated for output. Also available for output on microfilm are the frequency and cumulative distribution histograms from the sample data. 29 figures, 4 tables
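
    The heart of such a code is small: sample, sort, and read quantiles with nonparametric (order-statistic) confidence bounds off the sorted sample. A Python sketch, with a lognormal distribution standing in for the user-specified function:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)
n = 10_000
x = rng.lognormal(0.0, 0.5, n)        # stand-in for the sampled function

s = np.sort(x)
q = 0.95
point = s[int(q * n)]                 # empirical 95th percentile

# one-sided 95% upper confidence bound from order statistics:
# smallest k with P(Binomial(n, q) <= k) >= 0.95
k = int(binom.ppf(0.95, n, q))
print(point, s[k])
```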

  18. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources

    International Nuclear Information System (INIS)

    Gaufridy de Dortan, F. de

    2006-01-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling or even supra-configuration modelling for mid-Z plasmas. But in some cases, configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow for a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on HULLAC precise energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data. It is therefore possible to use complex hydrodynamic files even on personal computers in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)

  19. Neutron activation analysis detection limits using 252Cf sources

    International Nuclear Information System (INIS)

    DiPrete, D.P.; Sigg, R.A.

    2000-01-01

    The Savannah River Technology Center (SRTC) developed a neutron activation analysis (NAA) facility several decades ago using low-flux 252Cf neutron sources. Through this time, the facility has addressed areas of applied interest in managing the Savannah River Site (SRS). Some applications are unique because of the site's operating history and its chemical-processing facilities. Because the sensitivity needs for many applications are not severe, they can be accomplished using an ∼6-mg 252Cf NAA facility. The SRTC 252Cf facility continues to support applied research programs at SRTC as well as other SRS programs for environmental and waste management customers. Samples analyzed by NAA include organic compounds, metal alloys, sediments, site process solutions, and many other materials. Numerous radiochemical analyses also rely on the facility for the production of short-lived tracers, yielding by activation of carriers, and small-scale isotope production for separation methods testing. These applications are more fully reviewed in Ref. 1. Although the flux (approximately 2 × 10^7 n/cm²·s) is low relative to reactor facilities, more than 40 elements can be detected at low and sub-part-per-million levels. Detection limits provided by the facility are adequate for many analytical projects. Other multielement analysis methods, particularly inductively coupled plasma atomic emission and inductively coupled plasma mass spectrometry, can now provide sensitivities on dissolved samples that are often better than those available by NAA using low-flux isotopic sources. Because NAA allows analysis of bulk samples, (a) it is a more cost-effective choice than methods that require digestion when its sensitivity is adequate, and (b) it eliminates uncertainties that can be introduced by digestion processes.

  20. Achievable Rates of Cognitive Radio Networks Using Multi-Layer Coding with Limited CSI

    KAUST Repository

    Sboui, Lokman

    2016-03-01

    In a Cognitive Radio (CR) framework, the channel state information (CSI) feedback to the secondary transmitter (SU Tx) can be limited or unavailable. Thus, the statistical model is adopted in order to determine the system performance using the outage concept. In this paper, we adopt a new approach using a multi-layer coding (MLC) strategy, i.e., the broadcast approach, to enhance spectrum sharing over fading channels. First, we consider a scenario where the secondary transmitter has no CSI of either the link between the SU Tx and the primary receiver (cross-link) or its own link. We show that using MLC improves the cognitive rate compared to the rate provided by single-layer coding (SLC). In addition, we observe numerically that 2-layer coding achieves most of the gain for Rayleigh fading. Second, we analyze a scenario where the SU Tx is provided with partial CSI about its link through quantized CSI. We compute its achievable rate adopting the MLC and highlight the improvement over SLC. Finally, we study the case in which the cross-link CSI is perfect, i.e., a cooperative primary user setting, and compare the performance with the previous cases. We present an asymptotic analysis at the high power regime and show that the cooperation enhances considerably the cognitive rate at high values of the secondary power budget.
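
    Why a second layer helps without CSI can be seen with a small Monte Carlo experiment in Python, comparing the best fixed-rate single-layer throughput against a crude two-layer superposition/successive-decoding scheme over Rayleigh fading. The power budget, rate grids and power splits below are illustrative assumptions, not the paper's optimized design:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.exponential(1.0, 20_000)     # Rayleigh power gains, unit mean
P = 10.0                             # total transmit power (noise power 1)

def slc(R):
    """Throughput of one fixed-rate layer: R times its success probability."""
    return R * np.mean(np.log2(1 + g * P) >= R)

def mlc(R1, R2, a):
    """Two superposed layers with powers a*P and (1-a)*P; layer 2 is
    decoded only after successful cancellation of layer 1."""
    ok1 = np.log2(1 + g * a * P / (g * (1 - a) * P + 1)) >= R1
    ok2 = ok1 & (np.log2(1 + g * (1 - a) * P) >= R2)
    return R1 * np.mean(ok1) + R2 * np.mean(ok2)

rates = np.linspace(0.1, 6.0, 25)
best_slc = max(slc(R) for R in rates)
best_mlc = max(mlc(R1, R2, a)
               for R1 in rates for R2 in rates for a in (0.6, 0.75, 0.9))
print(best_slc, best_mlc)   # the two-layer scheme typically does better
```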

  1. PREFACE: Diagnostics for electrical discharge light sources: pushing the limits Diagnostics for electrical discharge light sources: pushing the limits

    Science.gov (United States)

    Zissis, Georges; Haverlag, Marco

    2010-06-01

    Light sources play an indispensable role in the daily life of any human being. Quality of life, health and urban security related to traffic and crime prevention depend on light and on its quality. In fact, every day approximately 30 billion electric light sources operate worldwide. These electric light sources consume almost 19% of worldwide electricity production. Finding new ways to light lamps is a challenge where the stakes are scientific, technological, economic and environmental. The production of more efficient light sources is a sustainable solution for humanity. There are many opportunities for not only enhancing the efficiency and reliability of lighting systems but also for improving the quality of light as seen by the end user. This is possible through intelligent use of new technologies, deep scientific understanding of the operating principles of light sources and knowledge of the varied human requirements for different types of lighting in different settings. A revolution in the domain of light source technology is on the way: high brightness light emitting diodes arriving in the general lighting market, together with organic LEDs (OLEDs), are producing spectacular advances. However, unlike incandescence, electrical discharge lamps are far from disappearing from the market. In addition, new generations of discharge lamps based on molecular radiators are becoming a reality. There are still many scientific and technological challenges to be raised in this direction. Diagnostics are important for understanding the fundamental mechanisms taking place in the discharge plasma. This understanding is an absolute necessity for system optimization leading to more efficient and high quality light sources. The studied medium is rather complex, but new diagnostic techniques coupled to innovative ideas and powerful tools have been developed in recent years. This cluster issue of seven papers illustrates these efforts. The selected papers cover all domains, from

  2. Optimization of the plasma parameters for the high current and uniform large-scale pulse arc ion source of the VEST-NBI system

    International Nuclear Information System (INIS)

    Jung, Bongki; Park, Min; Heo, Sung Ryul; Kim, Tae-Seong; Jeong, Seung Ho; Chang, Doo-Hee; Lee, Kwang Won; In, Sang-Ryul

    2016-01-01

    Highlights: • A high power magnetic bucket-type arc plasma source for the VEST NBI system is developed with modifications based on the prototype plasma source for KSTAR. • Plasma parameters during the pulse duration are measured to characterize the plasma source. • High plasma density and good uniformity are achieved at low operating pressures below 1 Pa. • The required ion beam current density is confirmed by analysis of plasma parameters and the results of a particle balance model. - Abstract: A large-scale hydrogen arc plasma source was developed at the Korea Atomic Energy Research Institute for a high power pulsed NBI system of VEST, which is a compact spherical tokamak at Seoul National University. One of the research targets of VEST is to study innovative tokamak operating scenarios. For this purpose, a high current density and uniform large-scale pulse plasma source is required to satisfy the target ion beam power efficiently. Therefore, optimizing the plasma parameters of the ion source, such as the electron density, temperature, and plasma uniformity, is conducted by changing the operating conditions of the plasma source. Furthermore, the ion species of the hydrogen plasma source are analyzed using a particle balance model to increase the monatomic fraction, which is another essential parameter for increasing the ion beam current density. In conclusion, efficient operating conditions are presented from the results of the optimized plasma parameters, and the extractable ion beam current is calculated.

  3. Documentation for grants equal to tax model: Volume 3, Source code

    International Nuclear Information System (INIS)

    Boryczka, M.K.

    1986-01-01

    The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III™, a relational data base management system. The data base for GETT consists primarily of eight separate dBASE III™ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations.

  4. WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection

    Directory of Open Access Journals (Sweden)

    Deqiang Fu

    2017-01-01

    In this paper, we introduce a source code plagiarism detection method, named WASTK (Weighted Abstract Syntax Tree Kernel), for computer science education. Different from other plagiarism detection methods, WASTK takes some aspects other than the similarity between programs into account. WASTK first transfers the source code of a program to an abstract syntax tree and then gets the similarity by calculating the tree kernel of two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency) in the field of information retrieval is applied. Each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods like Sim and JPlag.
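
    The TF-IDF weighting of AST node types can be sketched with Python's own ast module standing in for a Java parser (the tree-kernel computation itself is omitted; the two toy programs are invented for the example):

```python
import ast
import math
from collections import Counter

programs = {
    "a.py": "def f(x):\n    return x + 1\n",
    "b.py": "def g(y):\n    print(y)\n    return y + 1\n",
}

# count AST node types per program
counts = {name: Counter(type(n).__name__ for n in ast.walk(ast.parse(src)))
          for name, src in programs.items()}

def tfidf(name, node_type):
    """Weight a node type so that ubiquitous constructs contribute little."""
    tf = counts[name][node_type] / sum(counts[name].values())
    df = sum(node_type in c for c in counts.values())
    return tf * math.log(len(programs) / df)

print(tfidf("b.py", "Call"))    # 'Call' appears only in b.py -> weight > 0
```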

  5. Improvement of uniformity of the negative ion beams by tent-shaped magnetic field in the JT-60 negative ion source

    International Nuclear Information System (INIS)

    Yoshida, Masafumi; Hanada, Masaya; Kojima, Atsushi; Kashiwagi, Mieko; Akino, Noboru; Endo, Yasuei; Komata, Masao; Mogaki, Kazuhiko; Nemoto, Shuji; Ohzeki, Masahiro; Seki, Norikazu; Sasaki, Shunichi; Shimizu, Tatsuo; Terunuma, Yuto; Grisham, Larry R.

    2014-01-01

    Non-uniformity of the negative ion beams in the JT-60 negative ion source, which has the world's largest ion extraction area, was improved by changing the magnetic filter in the source from the plasma grid (PG) filter to a tent-shaped filter. The magnetic design via electron trajectory calculation showed that the tent-shaped filter was expected to suppress the localization of the primary electrons emitted from the filaments and to create a uniform plasma of positive ions and atoms, the parent particles for the negative ions. By changing to the tent-shaped filter, the non-uniformity, defined as the deviation from the averaged beam intensity, was reduced from 14% with the PG filter to ∼10% without a reduction of the negative ion production.

  6. Rascal: A domain specific language for source code analysis and manipulation

    NARCIS (Netherlands)

    P. Klint (Paul); T. van der Storm (Tijs); J.J. Vinju (Jurgen); A. Walenstein; S. Schuppe

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This

  7. RASCAL: a domain specific language for source code analysis and manipulation

    NARCIS (Netherlands)

    Klint, P.; Storm, van der T.; Vinju, J.J.

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This impedance

  8. From system requirements to source code: transitions in UML and RUP

    Directory of Open Access Journals (Sweden)

    Stanisław Wrycza

    2011-06-01

    Full Text Available There are many manuals explaining language specification among UML-related books. Only some of books mentioned concentrate on practical aspects of using the UML language in effective way using CASE tools and RUP. The current paper presents transitions from system requirements specification to structural source code, useful while developing an information system.

  9. Towards an information extraction and knowledge formation framework based on Shannon entropy

    Directory of Open Access Journals (Sweden)

    Iliescu Dragoș

    2017-01-01

    The quantity of information is approached in this paper, considering the specific domain of nonconforming product management as the information source. This work represents a case study. Raw data were gathered from a heavy industrial works company, with information extraction and knowledge formation being considered herein. The method used for estimating the quantity of information is based on the Shannon entropy formula. The information and entropy spectra are decomposed and analysed for the extraction of specific information and the formation of knowledge. The result of the entropy analysis points out the information that needs to be acquired by the organisation involved, this being presented as a specific knowledge type.
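
    The estimation step itself is one line of mathematics, H = −Σ p_i log₂ p_i. A minimal Python sketch with hypothetical counts of nonconformity causes (the categories and numbers are invented for illustration):

```python
import math

# hypothetical tallies of nonconforming-product causes
counts = {"material": 40, "machining": 25, "welding": 20, "handling": 15}
total = sum(counts.values())

# Shannon entropy: bits of information carried by one nonconformity record
H = -sum(c / total * math.log2(c / total) for c in counts.values())
print(f"{H:.3f} bits per record")   # upper bound here is log2(4) = 2 bits
```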

  10. Time-dependent anisotropic external sources in transient 3-D transport code TORT-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    This paper describes the implementation of a time-dependent distributed external source in TORT-TD by explicitly considering the external source in the ''fixed-source'' term of the implicitly time-discretised 3-D discrete ordinates transport equation. Anisotropy of the external source is represented by a spherical harmonics series expansion similar to the angular fluxes. The YALINA-Thermal subcritical assembly serves as a test case. The configuration with 280 fuel rods has been analysed with TORT-TD using cross sections in 18 energy groups and P1 scattering order generated by the KAPROS code system. Good agreement is achieved concerning the multiplication factor. The response of the system to an artificial time-dependent source consisting of two square-wave pulses demonstrates the time-dependent external source capability of TORT-TD. The result is physically plausible as judged from validation calculations. (orig.)

  11. Coded moderator approach for fast neutron source detection and localization at standoff

    Energy Technology Data Exchange (ETDEWEB)

    Littell, Jennifer [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Lukosi, Eric, E-mail: elukosi@utk.edu [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Institute for Nuclear Security, University of Tennessee, 1640 Cumberland Avenue, Knoxville, TN 37996 (United States); Hayward, Jason; Milburn, Robert; Rowan, Allen [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States)

    2015-06-01

    Considering the need for directional sensing at standoff for some security applications and scenarios where a neutron source may be shielded by high Z material that nearly eliminates the source gamma flux, this work focuses on investigating the feasibility of using thermal neutron sensitive boron straw detectors for fast neutron source detection and localization. We utilized MCNPX simulations to demonstrate that, through surrounding the boron straw detectors by a HDPE coded moderator, a source-detector orientation-specific response enables potential 1D source localization in a high neutron detection efficiency design. An initial test algorithm has been developed in order to confirm the viability of this detector system's localization capabilities which resulted in identification of a 1 MeV neutron source with a strength equivalent to 8 kg WGPu at 50 m standoff within ±11°.

  12. Uncertainties in source term calculations generated by the ORIGEN2 computer code for Hanford Production Reactors

    International Nuclear Information System (INIS)

    Heeb, C.M.

    1991-03-01

    The ORIGEN2 computer code is the primary calculational tool for computing isotopic source terms for the Hanford Environmental Dose Reconstruction (HEDR) Project. The ORIGEN2 code computes the amounts of radionuclides that are created or remain in spent nuclear fuel after neutron irradiation and radioactive decay have occurred as a result of nuclear reactor operation. ORIGEN2 was chosen as the primary code for these calculations because it is widely used and accepted by the nuclear industry, both in the United States and the rest of the world. Its comprehensive library of over 1,600 nuclides includes any possible isotope of interest to the HEDR Project. It is important to evaluate the uncertainties expected from use of ORIGEN2 in the HEDR Project because these uncertainties may have a pivotal impact on the final accuracy and credibility of the results of the project. There are three primary sources of uncertainty in an ORIGEN2 calculation: basic nuclear data uncertainty in neutron cross sections, radioactive decay constants, energy per fission, and fission product yields; calculational uncertainty due to input data; and code uncertainties (i.e., numerical approximations, and neutron spectrum-averaged cross-section values from the code library). 15 refs., 5 figs., 5 tabs

  13. Steganography on quantum pixel images using Shannon entropy

    Science.gov (United States)

    Laurel, Carlos Ortega; Dong, Shi-Hai; Cruz-Irisson, M.

    2016-07-01

    This paper presents a steganographic algorithm based on the least significant bit (LSB) from the most significant bit information (MSBI) and the equivalence of a bit pixel image to a quantum pixel image, which permits information to be communicated secretly within quantum pixel images for secure transmission through insecure channels. The algorithm offers higher security since it exploits the Shannon entropy of an image.
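
    For orientation, the sketch below shows plain classical-bit LSB embedding and extraction on integer pixel values; the paper's quantum pixel image mapping and its entropy-based security argument are not modeled, and all data are made up.

    ```python
    def embed_lsb(pixels, message_bits):
        """Hide message bits in the least significant bit of each pixel value."""
        if len(message_bits) > len(pixels):
            raise ValueError("message longer than cover capacity")
        stego = list(pixels)
        for i, bit in enumerate(message_bits):
            stego[i] = (stego[i] & ~1) | bit
        return stego

    def extract_lsb(pixels, n_bits):
        return [p & 1 for p in pixels[:n_bits]]

    cover = [200, 13, 255, 0, 97, 64]   # toy grayscale pixel values
    bits = [1, 0, 1, 1]
    stego = embed_lsb(cover, bits)
    assert extract_lsb(stego, 4) == bits
    ```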

  14. A new method for reducing DNL in nuclear ADCs using an interpolation technique

    International Nuclear Information System (INIS)

    Vaidya, P.P.; Gopalakrishnan, K.R.; Pethe, V.A.; Anjaneyulu, T.

    1986-01-01

    The paper describes a new method for reducing the DNL associated with nuclear ADCs. The method, named the ''interpolation technique'', derives the quantisation steps corresponding to the last n bits of the digital code by dividing the quantisation steps due to the higher significant bits of the DAC, using a chain of resistors. Using comparators, these quantisation steps are compared with the analog voltage to be digitized, which is applied as a voltage shift at both ends of this chain. The output states of the comparators define the n-bit code. The errors due to offset voltages and bias currents of the comparators are statistically neutralized by changing the polarity of the quantisation steps as well as the polarity of the analog voltage (corresponding to the last n bits) for alternate A/D conversions. The effect of averaging on the channel profile can thus be minimized. A 12-bit ADC was constructed using this technique which gives a DNL of less than ±1% over most of the channels for a conversion time of nearly 4.5 μs. Gatti's sliding-scale technique can be implemented for further reduction of DNL. The interpolation technique has a promising potential for improving the resolution of existing 12-bit ADCs to 16 bits without degrading the percentage DNL significantly. (orig.)

  15. The Shannon entropy as a measure of diffusion in multidimensional dynamical systems

    Science.gov (United States)

    Giordano, C. M.; Cincotta, P. M.

    2018-05-01

    In the present work, we introduce two new estimators of chaotic diffusion based on the Shannon entropy. Using theoretical, heuristic and numerical arguments, we show that the entropy, S, provides a measure of the extent of diffusion of a given small initial ensemble of orbits, while an indicator related to the time derivative of the entropy, S', estimates the diffusion rate. We show that in the limiting case of near ergodicity, after an appropriate normalization, S' coincides with the standard homogeneous diffusion coefficient. The first application of this formulation to a 4D symplectic map and to the Arnold Hamiltonian reveals very successful and encouraging results.
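
    A toy illustration of the idea, under stated assumptions: a 2-D random walk stands in for a chaotically diffusing ensemble, the region is partitioned into cells, and the Shannon entropy S of the cell-occupancy distribution is tracked as it grows; its growth rate plays the role of the S' indicator. The paper's normalization linking S' to the homogeneous diffusion coefficient is omitted.

    ```python
    import numpy as np

    def ensemble_entropy(positions, bins, box):
        """Shannon entropy of the cell-occupancy distribution of an ensemble."""
        hist, _ = np.histogramdd(positions, bins=bins, range=box)
        p = hist.ravel() / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    rng = np.random.default_rng(0)
    x = np.zeros((1000, 2))               # ensemble starts as a point
    for t in range(1, 201):
        x += rng.normal(scale=0.01, size=x.shape)
        if t % 50 == 0:
            S = ensemble_entropy(x, bins=(50, 50), box=[(-1, 1), (-1, 1)])
            print(t, S)                   # S grows as the ensemble spreads
    ```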

  16. Above the nominal limit performance evaluation of multiwavelength optical code-division multiple-access systems

    Science.gov (United States)

    Inaty, Elie; Raad, Robert; Fortier, Paul; Shalaby, Hossam M. H.

    2009-03-01

    We provide an analysis for the performance of a multiwavelength optical code-division multiple-access (MW-OCDMA) network when the system is working above the nominal transmission rate limit imposed by passive encoding-decoding operation. We address the problem of overlapping in such a system and how it can directly affect the bit error rate (BER). A unified mathematical framework is presented under the assumption of one-coincidence sequences with nonrepeating wavelengths. A closed form expression of the multiple access interference limited BER is provided as a function of different system parameters. Results show that the performance of the MW-OCDMA system can be critically affected when working above the nominal limit, an event that can happen when the network operates at a high transmission rate. In addition, the impact of the derived error probability on the performance of two newly proposed medium access control (MAC) protocols, the S-ALOHA and the R3T, is also investigated. It is shown that for low transmission rates, the S-ALOHA is better than the R3T, while the R3T is better at very high transmission rates. In general, it is postulated that the R3T protocol suffers a higher delay mainly because of the presence of additional modes.

  17. Diagnostics for electrical discharge light sources : pushing the limits

    NARCIS (Netherlands)

    Zissis, G.; Haverlag, M.

    2010-01-01

    Light sources play an indispensable role in the daily life of any human being. Quality of life, health and urban security related to traffic and crime prevention depend on light and on its quality. In fact, every day approximately 30 billion electric light sources operate worldwide. These electric

  18. Application of Shannon Wavelet Entropy and Shannon Wavelet Packet Entropy in Analysis of Power System Transient Signals

    Directory of Open Access Journals (Sweden)

    Jikai Chen

    2016-12-01

    Full Text Available In a power system, the analysis of transient signals is the theoretical basis of fault diagnosis and transient protection theory. Shannon wavelet entropy (SWE) and Shannon wavelet packet entropy (SWPE) are powerful mathematical tools for transient signal analysis. Drawing on recent achievements regarding SWE and SWPE, their applications in feature extraction of transient signals and transient fault recognition are summarized. The impact of wavelet aliasing at adjacent scales of the wavelet decomposition on the feature extraction accuracy of SWE and SWPE is analyzed, and their differences are compared. The analyses are verified by partial discharge (PD) feature extraction of a power cable. Finally, new ideas and further research directions are proposed concerning the wavelet entropy mechanism, operation speed, and how to overcome wavelet aliasing.
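
    One common form of wavelet Shannon entropy, sketched with the PyWavelets package: the entropy of the relative wavelet energy across decomposition scales. The exact SWE/SWPE definitions used in the surveyed works may differ, and the test transient is synthetic.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def wavelet_shannon_entropy(signal, wavelet="db4", level=4):
        """Entropy of the relative energy distribution across wavelet scales."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        p = energies / energies.sum()
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    # Synthetic transient: a 50 Hz tone with a short high-frequency burst.
    t = np.linspace(0, 1, 1024)
    sig = np.sin(2 * np.pi * 50 * t)
    sig[500:520] += np.sin(2 * np.pi * 400 * t[500:520])
    print(wavelet_shannon_entropy(sig))
    ```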

  19. Time-dependent anisotropic distributed source capability in transient 3-D transport code TORT-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    The transient 3-D discrete ordinates transport code TORT-TD has been extended to account for time-dependent anisotropic distributed external sources. The extension aims at the simulation of the pulsed neutron source in the YALINA-Thermal subcritical assembly. Since feedback effects are not relevant in this zero-power configuration, this offers a unique opportunity to validate the time-dependent neutron kinetics of TORT-TD with experimental data. The extensions made in TORT-TD to incorporate a time-dependent anisotropic external source are described. The steady state of the YALINA-Thermal assembly and its response to an artificial square-wave source pulse sequence have been analysed with TORT-TD using pin-wise homogenised cross sections in 18 prompt energy groups with P1 scattering order and 8 delayed neutron groups. The results demonstrate the applicability of TORT-TD to subcritical problems with a time-dependent external source. (authors)

  20. Imaging x-ray sources at a finite distance in coded-mask instruments

    International Nuclear Information System (INIS)

    Donnarumma, Immacolata; Pacciani, Luigi; Lapshov, Igor; Evangelista, Yuri

    2008-01-01

    We present a method for the correction of beam divergence in finite distance sources imaging through coded-mask instruments. We discuss the defocusing artifacts induced by the finite distance showing two different approaches to remove such spurious effects. We applied our method to one-dimensional (1D) coded-mask systems, although it is also applicable in two-dimensional systems. We provide a detailed mathematical description of the adopted method and of the systematics introduced in the reconstructed image (e.g., the fraction of source flux collected in the reconstructed peak counts). The accuracy of this method was tested by simulating pointlike and extended sources at a finite distance with the instrumental setup of the SuperAGILE experiment, the 1D coded-mask x-ray imager onboard the AGILE (Astro-rivelatore Gamma a Immagini Leggero) mission. We obtained reconstructed images of good quality and high source location accuracy. Finally we show the results obtained by applying this method to real data collected during the calibration campaign of SuperAGILE. Our method was demonstrated to be a powerful tool to investigate the imaging response of the experiment, particularly the absorption due to the materials intercepting the line of sight of the instrument and the conversion between detector pixel and sky direction

  1. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    Science.gov (United States)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal over a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
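
    A much-simplified numeric sketch of the HDA idea, under stated assumptions: a uniform scalar quantizer provides the digital part (its index assumed delivered error-free by the channel code), while the quantization error is sent in analog form over an AWGN channel and recombined through an LMMSE estimate. The paper's predictive/transform coding, bandwidth expansion, and AMMSE-optimal power allocation are not modeled.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def hda_transmit(x, step, p_analog, noise_std):
        q = step * np.round(x / step)      # digital part: uniform quantizer
        err = x - q                        # analog part: quantization error
        g = np.sqrt(p_analog / np.var(err))
        y = g * err + rng.normal(0.0, noise_std, size=x.shape)
        # LMMSE estimate of the error from the noisy analog observation.
        err_hat = y * g * np.var(err) / (g ** 2 * np.var(err) + noise_std ** 2)
        return q + err_hat

    x = rng.normal(0.0, 1.0, 10000)        # memoryless Gaussian source
    x_hat = hda_transmit(x, step=0.5, p_analog=1.0, noise_std=0.3)
    print("reconstruction MSE:", np.mean((x - x_hat) ** 2))
    ```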

  2. Strategies for source space limitation in tomographic inverse procedures

    International Nuclear Information System (INIS)

    George, J.S.; Lewis, P.S.; Schlitt, H.A.; Kaplan, L.; Gorodnitsky, I.; Wood, C.C.

    1994-01-01

    The use of magnetic recordings for localization of neural activity requires the solution of an ill-posed inverse problem, i.e. the determination of the spatial configuration, orientation, and time course of the currents that give rise to a particular observed field distribution. In its general form, this inverse problem has no unique solution; due to superposition and the existence of silent source configurations, a particular magnetic field distribution at the head surface could be produced by any number of possible source configurations. However, by making assumptions concerning the number and properties of neural sources, it is possible to use numerical minimization techniques to determine the source model parameters that best account for the experimental observations while satisfying numerical or physical criteria. In this paper the authors describe progress on the development and validation of inverse procedures that produce distributed estimates of neuronal currents. The goal is to produce a temporal sequence of 3-D tomographic reconstructions of the spatial patterns of neural activation. Such approaches have a number of advantages, in principle. Because they do not require estimates of model order and parameter values (beyond specification of the source space), they minimize the influence of investigator decisions and are suitable for automated analyses. These techniques also allow localization of sources that are not point-like; experimental studies of cognitive processes and of spontaneous brain activity are likely to require distributed source models

  3. 75 FR 10438 - Effluent Limitations Guidelines and Standards for the Construction and Development Point Source...

    Science.gov (United States)

    2010-03-08

    ... Effluent Limitations Guidelines and Standards for the Construction and Development Point Source Category... technology-based Effluent Limitations Guidelines and New Source Performance Standards for the Construction... technology-based Effluent Limitations Guidelines and New Source Performance Standards for the Construction...

  4. A plug-in to Eclipse for VHDL source codes: functionalities

    Science.gov (United States)

    Niton, B.; Poźniak, K. T.; Romaniuk, R. S.

    The paper presents an original application, written by the authors, which supports the writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGAs and/or DSP processors. An implementation based on VEditor, a free-license program, is described; the work presented in this paper thus supplements and extends this free license. The introduction briefly characterizes the tools available on the market for aiding the design of electronic systems in VHDL. Particular attention is paid to plug-ins for the Eclipse environment and the Emacs program. Detailed properties of the written plug-in are presented, such as the programming extension concept and the results of the activities of the formatter, re-factorizer, code hider, and other new additions to the VEditor program.

  5. Sources for high frequency heating. Performance and limitations

    International Nuclear Information System (INIS)

    Le Gardeur, R.

    1976-01-01

    The various problems encountered in high frequency heating of plasmas can be decomposed into three spheres of action: theoretical development, antenna design, and utilization of power sources. By classifying heating into three spectral domains, present and future needs are enumerated. Several specific antenna designs are treated. High frequency power sources are reviewed. The current development of the gyrotron is discussed in view of future needs in very high frequency heating of plasmas [fr

  6. Beyond the Business Model: Incentives for Organizations to Publish Software Source Code

    Science.gov (United States)

    Lindman, Juho; Juutilainen, Juha-Pekka; Rossi, Matti

    The software stack opened under Open Source Software (OSS) licenses is growing rapidly. Commercial actors have released considerable amounts of previously proprietary source code. These actions beg the question of why companies choose a strategy based on giving away software assets. Research on the outbound OSS approach has tried to answer this question with the concept of the “OSS business model”. When studying the reasons for code release, we have observed that the business model concept is too generic to capture the many incentives organizations have. Instead, in this paper we investigate empirically what the companies' incentives are by means of an exploratory case study of three organizations at different stages of their code release. Our results indicate that the companies aim to promote standardization, obtain development resources, gain cost savings, improve the quality of software, increase the trustworthiness of software, or steer OSS communities. We conclude that future research on outbound OSS could benefit from focusing on the heterogeneous incentives for code release rather than on revenue models.

  7. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    Science.gov (United States)

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  8. Space-Frequency Block Code with Matched Rotation for MIMO-OFDM System with Limited Feedback

    Directory of Open Access Journals (Sweden)

    Thushara D. Abhayapala

    2009-01-01

    Full Text Available This paper presents a novel matched rotation precoding (MRP) scheme to design a rate-one space-frequency block code (SFBC) and a multirate SFBC for MIMO-OFDM systems with limited feedback. The proposed rate-one MRP and multirate MRP can always achieve full transmit diversity and optimal system performance for an arbitrary number of antennas, subcarrier intervals, and subcarrier groupings, with limited channel knowledge required by the transmit antennas. The optimization process of the rate-one MRP is simple and easily visualized, so that the optimal rotation angle can be derived explicitly, or even intuitively for some cases. The multirate MRP has a more complex optimization process, but it has a better spectral efficiency and provides a relatively smooth balance between system performance and transmission rate. Simulations show that the proposed SFBC with MRP can overcome the diversity loss for specific propagation scenarios, always improve the system performance, and demonstrate flexible performance with large performance gains. The proposed SFBCs with MRP therefore demonstrate flexibility and feasibility, making them more suitable for a practical MIMO-OFDM system with dynamic parameters.

  9. Survey of source code metrics for evaluating testability of object oriented systems

    OpenAIRE

    Shaheen, Muhammad Rabee; Du Bousquet, Lydie

    2010-01-01

    Software testing is costly in terms of time and funds. Testability is a software characteristic that aims at producing systems that are easy to test. Several metrics have been proposed to identify testability weaknesses. But it is sometimes difficult to be convinced that those metrics are really related to testability. This article is a critical survey of the source-code based metrics proposed in the literature for object-oriented software testability. It underlines the necessity to provide test...

  10. NEACRP comparison of source term codes for the radiation protection assessment of transportation packages

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Locke, H.F.; Avery, A.F.

    1994-01-01

    The results for Problems 5 and 6 of the NEACRP code comparison as submitted by six participating countries are presented in summary. These problems concentrate on the prediction of the neutron and gamma-ray sources arising in fuel after a specified irradiation, the fuel being uranium oxide for problem 5 and a mixture of uranium and plutonium oxides for problem 6. In both problems the predicted neutron sources are in good agreement for all participants. For gamma rays, however, there are differences, largely due to the omission of bremsstrahlung in some calculations

  11. Source-term model for the SYVAC3-NSURE performance assessment code

    International Nuclear Information System (INIS)

    Rowat, J.H.; Rattan, D.S.; Dolinar, G.M.

    1996-11-01

    Radionuclide contaminants in wastes emplaced in disposal facilities will not remain in those facilities indefinitely. Engineered barriers will eventually degrade, allowing radioactivity to escape from the vault. The radionuclide release rate from a low-level radioactive waste (LLRW) disposal facility, the source term, is a key component in the performance assessment of the disposal system. This report describes the source-term model that has been implemented in Ver. 1.03 of the SYVAC3-NSURE (Systems Variability Analysis Code generation 3-Near Surface Repository) code. NSURE is a performance assessment code that evaluates the impact of near-surface disposal of LLRW through the groundwater pathway. The source-term model described here was developed for the Intrusion Resistant Underground Structure (IRUS) disposal facility, which is a vault that is to be located in the unsaturated overburden at AECL's Chalk River Laboratories. The processes included in the vault model are roof and waste package performance, and diffusion, advection and sorption of radionuclides in the vault backfill. The model presented here was developed for the IRUS vault; however, it is applicable to other near-surface disposal facilities. (author). 40 refs., 6 figs

  12. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    Science.gov (United States)

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during the event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications having a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST).

  13. Class of near-perfect coded apertures

    International Nuclear Information System (INIS)

    Cannon, T.M.; Fenimore, E.E.

    1977-01-01

    Coded aperture imaging of gamma ray sources has long promised an improvement in the sensitivity of various detector systems. The promise has remained largely unfulfilled, however, for either one of two reasons. First, the encoding/decoding method produces artifacts, which even in the absence of quantum noise, restrict the quality of the reconstructed image. This is true of most correlation-type methods. Second, if the decoding procedure is of the deconvolution variety, small terms in the transfer function of the aperture can lead to excessive noise in the reconstructed image. It is proposed to circumvent both of these problems by use of a uniformly redundant array (URA) as the coded aperture in conjunction with a special correlation decoding method. It is shown that the reconstructed image in the URA system contains virtually uniform noise regardless of the structure in the original source. Therefore, the improvement over a single pinhole camera will be relatively larger for the brighter points in the source than for the low intensity points. In the case of a large detector background noise the URA will always do much better than the single pinhole regardless of the structure of the object. In the case of a low detector background noise, the improvement of the URA over the single pinhole will have a lower limit of approximately (1/2f)^(1/2), where f is the fraction of the field of view which is uniformly filled by the object
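
    A compact illustration of the URA property using one classical construction (the quadratic residues of a prime p ≡ 3 mod 4, which form a cyclic difference set): circular correlation of the aperture with a balanced decoding array gives a single sharp peak over perfectly flat sidelobes. Practical details such as 2-D mosaicked masks are omitted.

    ```python
    import numpy as np

    def qr_ura(p):
        """1-D aperture from the quadratic residues of a prime p (p % 4 == 3)."""
        a = np.zeros(p, dtype=int)
        a[[(i * i) % p for i in range(1, p)]] = 1
        return a

    p = 31
    A = qr_ura(p)
    G = np.where(A == 1, 1.0, -1.0)        # balanced decoding array
    corr = np.array([np.dot(A, np.roll(G, k)) for k in range(p)])
    print(corr)                            # peak of 15 at lag 0, -1 elsewhere
    ```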

  14. Impact of optical hard limiter on the performance of an optical overlapped-code division multiple access system

    Science.gov (United States)

    Inaty, Elie; Raad, Robert; Tablieh, Nicole

    2011-08-01

    Throughout this paper, a closed-form expression of the multiple access interference (MAI) limited bit error rate (BER) is provided for the multiwavelength optical code-division multiple-access system when the system is working above the nominal transmission rate limit imposed by the passive encoding-decoding operation. This system is known in the literature as the optical overlapped code division multiple access (OV-CDMA) system. A unified analytical framework is presented emphasizing the impact of an optical hard limiter (OHL) on the BER performance of such a system. Results show that the performance of the OV-CDMA system may be greatly improved when using OHL preprocessing at the receiver side.

  15. Probabilistic uniformities of uniform spaces

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Lopez, J.; Romaguera, S.; Sanchis, M.

    2017-07-01

    The theory of metric spaces in the fuzzy context has shown to be an interesting area of study not only from a theoretical point of view but also for its applications. Nevertheless, it is usual to consider these spaces as classical topological or uniform spaces and there are not too many results about constructing fuzzy topological structures starting from a fuzzy metric. Maybe, Höhle was the first to show how to construct a probabilistic uniformity and a Lowen uniformity from a probabilistic pseudometric [Hohle78, Hohle82a]. His method can be directly translated to the context of fuzzy metrics and allows to characterize the categories of probabilistic uniform spaces or Lowen uniform spaces by means of certain families of fuzzy pseudometrics [RL]. On the other hand, other different fuzzy uniformities can be constructed in a fuzzy metric space: a Hutton [0,1]-quasi-uniformity [GGPV06]; a fuzzifying uniformity [YueShi10], etc. The paper [GGRLRo] gives a study of several methods of endowing a fuzzy pseudometric space with a probabilistic uniformity and a Hutton [0,1]-quasi-uniformity. In 2010, J. Gutiérrez García, S. Romaguera and M. Sanchis [GGRoSanchis10] proved that the category of uniform spaces is isomorphic to a category formed by sets endowed with a fuzzy uniform structure, i.e. a family of fuzzy pseudometrics satisfying certain conditions. We will show here that, by means of this isomorphism, we can obtain several methods to endow a uniform space with a probabilistic uniformity. Furthermore, these constructions allow to obtain a factorization of some functors introduced in [GGRoSanchis10]. (Author)

  16. Implementation of generalized quantum measurements: Superadditive quantum coding, accessible information extraction, and classical capacity limit

    International Nuclear Information System (INIS)

    Takeoka, Masahiro; Fujiwara, Mikio; Mizuno, Jun; Sasaki, Masahide

    2004-01-01

    Quantum-information theory predicts that when the transmission resource is doubled in quantum channels, the amount of information transmitted can be increased more than twice by quantum-channel coding technique, whereas the increase is at most twice in classical information theory. This remarkable feature, the superadditive quantum-coding gain, can be implemented by appropriate choices of code words and corresponding quantum decoding which requires a collective quantum measurement. Recently, an experimental demonstration was reported [M. Fujiwara et al., Phys. Rev. Lett. 90, 167906 (2003)]. The purpose of this paper is to describe our experiment in detail. Particularly, a design strategy of quantum-collective decoding in physical quantum circuits is emphasized. We also address the practical implication of the gain on communication performance by introducing the quantum-classical hybrid coding scheme. We show how the superadditive quantum-coding gain, even in a small code length, can boost the communication performance of conventional coding techniques

  17. Multiscale Shannon entropy and its application in the stock market

    Science.gov (United States)

    Gu, Rongbao

    2017-10-01

    In this paper, we perform a multiscale entropy analysis on the Dow Jones Industrial Average Index using the Shannon entropy. The stock index shows the characteristic of multiscale entropy caused by noise in the market. The entropy is demonstrated to have significant predictive ability for the stock index in both the long term and the short term, and empirical results verify that noise does exist in the market and can affect the stock price. This has important implications for market participants such as noise traders.
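
    One standard way to set up such an analysis, sketched under assumptions: coarse-grain the series at increasing scales and compute the Shannon entropy of a histogram of the coarse-grained values. The paper's exact estimator and the Dow Jones data are not reproduced; the returns below are synthetic.

    ```python
    import numpy as np

    def multiscale_shannon_entropy(series, scales, n_bins=20):
        """Shannon entropy of a series coarse-grained at several scales."""
        out = {}
        for tau in scales:
            n = len(series) // tau
            coarse = series[: n * tau].reshape(n, tau).mean(axis=1)
            hist, _ = np.histogram(coarse, bins=n_bins)
            p = hist / hist.sum()
            p = p[p > 0]
            out[tau] = -np.sum(p * np.log(p))
        return out

    r = np.random.default_rng(5).normal(0.0, 0.01, 5000)  # synthetic returns
    print(multiscale_shannon_entropy(r, scales=[1, 2, 5, 10]))
    ```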

  18. Design, fabrication, and calibration of curved integral coils for measuring transfer function, uniformity, and effective length of LBL ALS [Lawrence Berkeley Laboratory Advanced Light Source] Booster Dipole Magnets

    International Nuclear Information System (INIS)

    Green, M.I.; Nelson, D.; Marks, S.; Gee, B.; Wong, W.; Meneghetti, J.

    1989-03-01

    A matched pair of curved integral coils has been designed, fabricated and calibrated at Lawrence Berkeley Laboratory for measuring Advanced Light Source (ALS) Booster Dipole Magnets. Distinctive fabrication and calibration techniques are described. The use of multifilar magnet wire in fabricating integral search coils is described. The procedures used and the results of AC and DC measurements of transfer function, effective length and uniformity of the prototype booster dipole magnet are presented in companion papers. 8 refs

  19. Performance analysis of 2D asynchronous hard-limiting optical code-division multiple access system through atmospheric scattering channel

    Science.gov (United States)

    Zhao, Yaqin; Zhong, Xin; Wu, Di; Zhang, Ye; Ren, Guanghui; Wu, Zhilu

    2013-09-01

    Optical code-division multiple access (OCDMA) systems usually allocate orthogonal or quasi-orthogonal codes to the active users. When transmitting through an atmospheric scattering channel, the coding pulses are broadened and the orthogonality of the codes is degraded. In the truly asynchronous case, namely when both the chips and the bits are asynchronous among the active users, the pulse broadening significantly affects the system performance. In this paper, we evaluate the performance of a 2D asynchronous hard-limiting wireless OCDMA system through an atmospheric scattering channel. The probability density function of the multiple access interference in the truly asynchronous case is given. The bit error rate decreases as the ratio of the chip period to the root mean square delay spread increases, and the channel limits the bit rate to different levels as the chip period varies.

  20. Limitation of population's irradiation by natural sources of ionizing radiation

    International Nuclear Information System (INIS)

    Krisyuk, Eh.M.

    1989-01-01

    A review of works devoted to evaluating human irradiation doses due to the main sources of ionizing radiation is given. It is shown that human irradiation doses due to DDP can be reduced by a factor of 10 or more. However, to realize such measures it is necessary to study the efficiency and determine the cost of various protective activities, as well as to develop criteria for deciding when their realization is necessary

  1. Application of the source term code package to obtain a specific source term for the Laguna Verde Nuclear Power Plant

    International Nuclear Information System (INIS)

    Souto, F.J.

    1991-06-01

    The main objective of the project was to use the Source Term Code Package (STCP) to obtain a specific source term for those accident sequences deemed dominant as a result of probabilistic safety analyses (PSA) for the Laguna Verde Nuclear Power Plant (CNLV). The following programme has been carried out to meet this objective: (a) implementation of the STCP, (b) acquisition of specific data for CNLV to execute the STCP, and (c) calculations of specific source terms for accident sequences at CNLV. The STCP has been implemented and validated on CDC 170/815 and CDC 180/860 mainframes as well as on a MicroVAX 3800 system. In order to get a plant-specific source term, data on the CNLV including initial core inventory, burn-up, primary containment structures, and materials used for the calculations have been obtained. Because STCP does not explicitly model containment failure, dry well failure in the form of a catastrophic rupture has been assumed. One of the most significant sequences from the point of view of possible off-site risk is the loss of off-site power with failure of the diesel generators and simultaneous loss of high pressure core spray and reactor core isolation cooling systems. The probability of that event is approximately 4.5 x 10^-6. This sequence has been analysed in detail and the release fractions of radioisotope groups are given in the full report. 18 refs, 4 figs, 3 tabs

  2. The European source term code ESTER - basic ideas and tools for coupling of ATHLET and ESTER

    International Nuclear Information System (INIS)

    Schmidt, F.; Schuch, A.; Hinkelmann, M.

    1993-04-01

    The French software house CISI and IKE of the University of Stuttgart developed, during 1990 and 1991 within the framework of the Shared Cost Action on Reactor Safety, the informatic structure of the European Source TERm Evaluation System (ESTER). Through this work, tools became available which allow code development and code application in the area of severe core accident research to be unified on a European basis. The behaviour of reactor cores is determined by thermal hydraulic conditions. Therefore, for the development of ESTER it was important to investigate how to integrate thermal hydraulic code systems with ESTER applications. This report describes the basic ideas of ESTER and improvements of the ESTER tools in view of a possible coupling of the thermal hydraulic code system ATHLET and ESTER. As a result of the work performed during this project, the ESTER tools became the most modern informatic tools presently available in the area of severe accident research. A sample application is given which demonstrates the use of the new tools. (orig.) [de

  3. The potential and limitations of third generation light sources

    International Nuclear Information System (INIS)

    Hormes, Josef

    2011-01-01

    To date, 3rd generation Light Sources, i.e. electron storage rings where mainly radiation from insertion devices (wigglers and undulators) is used for synchrotron radiation experiments, are the 'workhorses' for basic and applied VUV/X-ray research. Several machine parameters, i.e. the energy of the electrons, the emittance and the circumference of the machine, together with the specification of the corresponding insertion devices, determine the 'quality' of a facility and a specific beamline. In this talk, several of these aspects are discussed mainly from a users' point of view, i.e. what are the required specifications to carry out 'state-of-the-art' experiments in various areas, e.g. protein crystallography, Resonant Elastic and Inelastic X-ray Scattering (REIXS), micro-/nanospectroscopy, and time resolved experiments in the femtosecond time domain. (author)

  4. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    Science.gov (United States)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
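
    A minimal particle swarm optimizer of the general kind used for such mixed-integer tasks, with a made-up two-node cost in which rate indices are snapped to a discrete grid; the paper's actual distortion model, constraints, and parameter sets are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def pso_minimize(cost, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Minimal PSO over the box [lo, hi]^d."""
        d = len(lo)
        x = rng.uniform(lo, hi, (n_particles, d))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
        g = pbest[np.argmin(pbest_f)].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, d))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([cost(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[np.argmin(pbest_f)].copy()
        return g, cost(g)

    # Hypothetical 2-node toy cost: continuous powers, discrete rate indices.
    def toy_distortion(z):
        power, rate_idx = z[:2], np.round(z[2:])
        return np.sum(1.0 / (power + 0.1)) + np.sum((3 - rate_idx) ** 2) + 0.5 * power.sum()

    best, f = pso_minimize(toy_distortion,
                           lo=np.array([0.1, 0.1, 0.0, 0.0]),
                           hi=np.array([2.0, 2.0, 3.0, 3.0]))
    print(best, f)
    ```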

  5. Chronos sickness: digital reality in Duncan Jones’s Source Code

    Directory of Open Access Journals (Sweden)

    Marcia Tiemy Morita Kawamoto

    2017-01-01

    Full Text Available http://dx.doi.org/10.5007/2175-8026.2017v70n1p249 The advent of digital technologies has unquestionably affected cinema. The indexical relation and realistic effect with the photographed world, much praised by André Bazin and Roland Barthes, is just one of the affected aspects. This article discusses cinema in light of the new digital possibilities, reflecting on Steven Shaviro's consideration of "how a nonindexical realism might be possible" (63) and how, in fact, a new kind of reality, a digital one, might emerge in the science fiction film Source Code (2011) by Duncan Jones.

  6. Domain-Specific Acceleration and Auto-Parallelization of Legacy Scientific Code in FORTRAN 77 using Source-to-Source Compilation

    OpenAIRE

    Vanderbauwhede, Wim; Davidson, Gavin

    2017-01-01

    Massively parallel accelerators such as GPGPUs, manycores and FPGAs represent a powerful and affordable tool for scientists who look to speed up simulations of complex systems. However, porting code to such devices requires a detailed understanding of heterogeneous programming tools and effective strategies for parallelization. In this paper we present a source to source compilation approach with whole-program analysis to automatically transform single-threaded FORTRAN 77 legacy code into Ope...

  7. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
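
    For reference, the standard definition underlying the analogy, in schematic form (notation assumed here; this is not the paper's exact statement):

    ```latex
    % Rate-distortion function of a memoryless source, which the analogy
    % relates to a free-energy difference and a contracting force:
    \[
      R(D) = \min_{p(\hat{x}\mid x)\,:\,\mathbb{E}\,d(X,\hat{X}) \le D} I(X;\hat{X}),
      \qquad
      \Delta F(D) \propto R(D),
      \qquad
      f(D) \propto \frac{\mathrm{d}R(D)}{\mathrm{d}D}.
    \]
    ```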

  8. Coded aperture detector for high precision gamma-ray burst source locations

    International Nuclear Information System (INIS)

    Helmken, H.; Gorenstein, P.

    1977-01-01

    Coded aperture collimators in conjunction with position-sensitive detectors are very useful in the study of transient phenomena because they combine a broad field of view, high sensitivity, and an ability for precise source location. Since the preceding conference, a series of computer simulations of various detector designs has been carried out with the aid of a CDC 6400. Particular emphasis was placed on the development of a unit consisting of a one-dimensional random or periodic collimator in conjunction with a two-dimensional position-sensitive xenon proportional counter. A configuration involving four of these units has been incorporated into the preliminary design study of the Transient Explorer (ATREX) satellite and is applicable to any SAS- or HEAO-type satellite mission. Results of this study, including detector response, fields of view, and source location precision, will be presented

  9. Kinetics of the Dynamical Information Shannon Entropy for Complex Systems

    International Nuclear Information System (INIS)

    Yulmetyev, R.M.; Yulmetyeva, D.G.

    1999-01-01

    The kinetic behaviour of the dynamical information Shannon entropy is discussed for complex systems: physical systems with non-Markovian properties and memory in the correlation approximation, and biological and physiological systems with sequences of Markovian and non-Markovian random noises. For the stochastic processes, a description of the information entropy in terms of normalized time correlation functions is given. The influence and important role of two mutually dependent channels of entropy change, correlation (creation or generation of correlations) and anti-correlation (decay or annihilation of correlations), are discussed. The method developed here is also used in the analysis of density fluctuations in liquid cesium obtained from slow neutron scattering data, the fractal kinetics of the long-range fluctuation in short-time human memory and the chaotic dynamics of R-R intervals of the human ECG. (author)

  10. PRIMUS: a computer code for the preparation of radionuclide ingrowth matrices from user-specified sources

    International Nuclear Information System (INIS)

    Hermann, O.W.; Baes, C.F. III; Miller, C.W.; Begovich, C.L.; Sjoreen, A.L.

    1984-10-01

    The computer program PRIMUS reads a library of radionuclide branching fractions and half-lives and constructs a decay-chain data library and a problem-specific decay-chain data file. PRIMUS reads the decay data compiled for 496 nuclides from the Evaluated Nuclear Structure Data File (ENSDF). The ease of adding radionuclides to the input library allows the CRRIS system to further expand its comprehensive data base. The decay-chain library produced is input to the ANEMOS code. Also, PRIMUS produces a data set reduced to only the decay chains required in a particular problem, for input to the SUMIT, TERRA, MLSOIL, and ANDROS codes. Air concentrations and deposition rates are used by these codes together with the PRIMUS decay-chain data file. Source term data may be entered directly to PRIMUS to be read by MLSOIL, TERRA, and ANDROS. The decay-chain data prepared by PRIMUS are needed for a matrix-operator method that computes time-dependent decay products either from an initial concentration or from a constant input source. This document describes the input requirements and the output obtained. Also, sections are included on methods, applications, subroutines, and sample cases. A short appendix indicates a method of utilizing PRIMUS and the associated decay subroutines from TERRA or ANDROS for applications to other decay problems. 18 references
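
    A toy illustration of the matrix-operator idea for decay chains, under assumptions: the Bateman equations dN/dt = M N are advanced with a matrix exponential for a hypothetical two-member chain. PRIMUS's actual library handling and chain construction are far more general.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Toy chain A -> B -> (stable); N(t) = expm(M t) @ N(0).
    lam_a, lam_b = 0.1, 0.03               # decay constants, 1/time (made up)
    M = np.array([[-lam_a, 0.0],
                  [ lam_a, -lam_b]])
    N0 = np.array([1.0, 0.0])              # initial concentrations
    print(expm(M * 10.0) @ N0)             # concentrations after 10 time units
    ```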

  11. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    Science.gov (United States)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open-source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node, have improved performance and scalability, enhanced accuracy and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.

  12. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    Science.gov (United States)

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
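
    For contrast with the paper's method, a direct O(N*K) implementation of a 1-D space-varying blur with a hypothetical position-dependent Gaussian kernel; matrix source coding aims to replace exactly this kind of dense operator with a product of sparse transforms.

    ```python
    import numpy as np

    def space_varying_blur_1d(x, kernel_at):
        """Direct space-varying convolution: each output sample has its own
        impulse response, so nothing here can use the FFT."""
        n = len(x)
        y = np.zeros(n)
        for i in range(n):
            h = kernel_at(i)
            r = len(h) // 2
            for j, w in enumerate(h):
                k = i + j - r
                if 0 <= k < n:
                    y[i] += w * x[k]
        return y

    def kernel_at(i, size=9):
        """Hypothetical blur that widens across the field of view."""
        s = 0.5 + 2.0 * i / 1024
        t = np.arange(size) - size // 2
        h = np.exp(-t ** 2 / (2 * s ** 2))
        return h / h.sum()

    x = np.random.default_rng(3).normal(size=1024)
    y = space_varying_blur_1d(x, kernel_at)
    ```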

  13. Multi-Level Wavelet Shannon Entropy-Based Method for Single-Sensor Fault Location

    Directory of Open Access Journals (Sweden)

    Qiaoning Yang

    2015-10-01

    Full Text Available In actual applications, sensors are prone to failure because of harsh environments, battery drain, and sensor aging. Sensor fault location is an important step for follow-up sensor fault detection. In this paper, two new multi-level wavelet Shannon entropies (multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy) are defined. They take full advantage of the sensor fault frequency distribution and energy distribution across multiple subbands in the wavelet domain. Based on the multi-level wavelet Shannon entropy, a method is proposed for single-sensor fault location. The method first uses a criterion of maximum energy-to-Shannon-entropy ratio to select the appropriate wavelet base for signal analysis. Then multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy are used to locate the fault. The method is validated using practical chemical gas concentration data from a gas sensor array. Compared with wavelet time Shannon entropy and wavelet energy Shannon entropy, the experimental results demonstrate that the proposed method can achieve accurate location of a single sensor fault and has good anti-noise ability. The proposed method is feasible and effective for single-sensor fault location.
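
    A sketch of a selection criterion of the stated form, using the PyWavelets package: score each candidate wavelet by the ratio of total coefficient energy to the Shannon entropy of the normalized energy distribution, and keep the maximizer. The paper's precise definition is assumed, and the test signal is synthetic.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def energy_entropy_ratio(signal, wavelet, level=4):
        """Energy-to-Shannon-entropy ratio as a wavelet-selection score."""
        coeffs = np.concatenate(pywt.wavedec(signal, wavelet, level=level))
        e = coeffs ** 2
        p = e / e.sum()
        p = p[p > 0]
        entropy = -np.sum(p * np.log(p))
        return e.sum() / entropy

    candidates = ["db2", "db4", "sym5", "coif3"]
    rng = np.random.default_rng(4)
    sig = np.sin(np.linspace(0, 20 * np.pi, 2048)) + 0.1 * rng.normal(size=2048)
    best = max(candidates, key=lambda w: energy_entropy_ratio(sig, w))
    print("selected wavelet:", best)
    ```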

  14. Effect of heat transfer on unsteady MHD flow of blood in a permeable vessel in the presence of non-uniform heat source

    OpenAIRE

    A. Sinha; J.C. Misra; G.C. Shit

    2016-01-01

    This paper presents a theoretical analysis of blood flow and heat transfer in a permeable vessel in the presence of an external magnetic field. The unsteadiness in the coupled flow and temperature fields is considered to be caused due to the time-dependent stretching velocity and the surface temperature of the vessel. The non-uniform heat source/sink effect on blood flow and heat transfer is taken into account. This study is of potential value in the clinical treatment of cardiovascular disor...

  15. Code of practice for the control and safe handling of radioactive sources used for therapeutic purposes (1988)

    International Nuclear Information System (INIS)

    1988-01-01

    This Code is intended as a guide to safe practices in the use of sealed and unsealed radioactive sources and in the management of patients being treated with them. It covers the procedures for the handling, preparation and use of radioactive sources, precautions to be taken for patients undergoing treatment, storage and transport of radioactive sources within a hospital or clinic, and routine testing of sealed sources [fr

  16. Thermal-hydraulic code for estimating safety limits of nuclear reactors with plate type fuels

    Energy Technology Data Exchange (ETDEWEB)

    Castellanos, Duvan A.; Moreira, João L.; Maiorino, Jose R.; Rossi, Pedro R.; Carajilescov, Pedro, E-mail: duvan.castellanos@ufabc.edu.br, E-mail: joao.moreira@ufabc.edu.br, E-mail: joserubens.maiorino@ufabc.edu.br, E-mail: pedro.rossi@ufabc.edu.br, E-mail: pedro.carajilescov10@gmail.com [Universidade Federal do ABC (UFABC), Santo André, SP (Brazil). Centro de Engenharias, Modelagem e Ciências Sociais Aplicadas

    2017-07-01

    To ensure the normal and safe operation of PWR-type nuclear reactors, knowledge of the nuclear and heat transfer properties of the fuel, coolant and structural materials is necessary. The thermal-hydraulic analysis of nuclear reactors yields parameters such as the distribution of fuel and coolant temperatures and the departure from nucleate boiling ratio. Usually, computational codes are used to analyze the safety performance of the core. This research work presents a computer code for performing thermal-hydraulic analyses of nuclear reactors with plate-type fuel elements operating at low pressure and temperature (research reactors) or high temperature and pressure (naval propulsion or small power reactors). The code uses the sub-channel method based on geometric and thermal-hydraulic conditions. In order to solve the conservation equations for mass, momentum and energy, each sub-channel is divided into control volumes in the axial direction. The mass flow distribution for each fuel element of the core is obtained. Analysis of the critical heat flux is performed in the hottest channel. The code considers radial symmetry and a two-step chain (cascade) method in order to facilitate the whole analysis. In the first step, we divide the core into channels with size equivalent to a fuel assembly. From this analysis, the channel with the largest enthalpy is identified as the hot assembly. In the second step, we divide the hottest fuel assembly into sub-channels with size equivalent to one actual coolant channel. As in the previous step, the sub-channel with the largest final enthalpy is identified as the hottest sub-channel. For the code validation, we considered results from the Chinese CARR research reactor. The code reproduced the CARR reactor results well, yielding detailed information such as static pressure in the channel, mass flow rate distribution among the fuel channels, coolant, clad and centerline fuel temperatures, quality and local heat and critical heat

  17. Thermal-hydraulic code for estimating safety limits of nuclear reactors with plate type fuels

    International Nuclear Information System (INIS)

    Castellanos, Duvan A.; Moreira, João L.; Maiorino, Jose R.; Rossi, Pedro R.; Carajilescov, Pedro

    2017-01-01

    To ensure the normal and safe operation of PWR-type nuclear reactors, knowledge of the nuclear and heat transfer properties of the fuel, coolant and structural materials is necessary. The thermal-hydraulic analysis of nuclear reactors yields parameters such as the distribution of fuel and coolant temperatures and the departure from nucleate boiling ratio. Usually, computational codes are used to analyze the safety performance of the core. This research work presents a computer code for performing thermal-hydraulic analyses of nuclear reactors with plate-type fuel elements operating at low pressure and temperature (research reactors) or high temperature and pressure (naval propulsion or small power reactors). The code uses the sub-channel method based on geometric and thermal-hydraulic conditions. In order to solve the conservation equations for mass, momentum and energy, each sub-channel is divided into control volumes in the axial direction. The mass flow distribution for each fuel element of the core is obtained. Analysis of the critical heat flux is performed in the hottest channel. The code considers radial symmetry and a two-step chain (cascade) method in order to facilitate the whole analysis. In the first step, we divide the core into channels with size equivalent to a fuel assembly. From this analysis, the channel with the largest enthalpy is identified as the hot assembly. In the second step, we divide the hottest fuel assembly into sub-channels with size equivalent to one actual coolant channel. As in the previous step, the sub-channel with the largest final enthalpy is identified as the hottest sub-channel. For the code validation, we considered results from the Chinese CARR research reactor. The code reproduced the CARR reactor results well, yielding detailed information such as static pressure in the channel, mass flow rate distribution among the fuel channels, coolant, clad and centerline fuel temperatures, quality and local heat and critical heat

  18. A Source Term Calculation for the APR1400 NSSS Auxiliary System Components Using the Modified SHIELD Code

    International Nuclear Information System (INIS)

    Park, Hong Sik; Kim, Min; Park, Seong Chan; Seo, Jong Tae; Kim, Eun Kee

    2005-01-01

    The SHIELD code has been used to calculate the source terms of the NSSS Auxiliary System (comprising the CVCS, SIS, and SCS) components of the OPR1000. Because the code was developed based upon the SYSTEM80 design, and the APR1400 NSSS Auxiliary System design is considerably changed from that of SYSTEM80 or OPR1000, the SHIELD code cannot be used directly for APR1400 radiation design. Thus hand calculation is needed for the portion of the design that changed, using the results of the SHIELD code calculation. In this study, the SHIELD code is modified to incorporate the APR1400 design changes and the source term calculation is performed for the APR1400 NSSS Auxiliary System components

  19. FIRST EXPERIMENTAL RESULTS FROM DEGAS, THE QUANTUM LIMITED BRIGHTNESS ELECTRON SOURCE

    International Nuclear Information System (INIS)

    Zolotorev, Max S.; Commins, Eugene D.; O'Neill, James; Sannibale, Fernando; Tremsin, Anton; Wan, Weishi

    2008-01-01

    The construction of DEGAS (DEGenerate Advanced Source), a proof of principle for a quantum limited brightness electron source, has been completed at the Lawrence Berkeley National Laboratory. The commissioning and characterization of this source, designed to generate coherent single-electron 'bunches' with brightness approaching the quantum limit at a repetition rate of a few MHz, have been started. In this paper the first experimental results are described

  20. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I explore the materiality of code that goes beyond the technical

  1. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Full Text Available Even though there are various source code plagiarism detection approaches, only a few works focus on low-level representation for deducing similarity. Most of them rely only on the lexical token sequence extracted from source code. In our view, low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself. It considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach which relies on low-level representation. As a case study, we focus our work on .NET programming languages with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.
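
    As a concrete illustration of the similarity step, the following Python sketch scores two Common Intermediate Language opcode streams with a Smith-Waterman-style local alignment. It is a minimal sketch of generic local alignment, not the paper's adaptive variant; the scoring constants and token names are illustrative assumptions.

        from typing import List

        def local_alignment_score(a: List[str], b: List[str],
                                  match: int = 2, mismatch: int = -1,
                                  gap: int = -1) -> int:
            """Smith-Waterman-style local alignment over instruction tokens."""
            rows, cols = len(a) + 1, len(b) + 1
            h = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
                    best = max(best, h[i][j])
            return best

        # Two short opcode streams whose statements were reordered (a common attack).
        print(local_alignment_score(["ldarg.0", "ldarg.1", "add", "ret"],
                                    ["ldarg.1", "ldarg.0", "add", "ret"]))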

  2. Carbon source-sink limitations differ between two species with contrasting growth strategies.

    Science.gov (United States)

    Burnett, Angela C; Rogers, Alistair; Rees, Mark; Osborne, Colin P

    2016-11-01

    Understanding how carbon source and sink strengths limit plant growth is a critical knowledge gap that hinders efforts to maximize crop yield. We investigated how differences in growth rate arise from source-sink limitations, using a model system comparing a fast-growing domesticated annual barley (Hordeum vulgare cv. NFC Tipple) with a slow-growing wild perennial relative (Hordeum bulbosum). Source strength was manipulated by growing plants at sub-ambient and elevated CO2 concentrations ([CO2]). Limitations on vegetative growth imposed by source and sink were diagnosed by measuring relative growth rate, developmental plasticity, photosynthesis and major carbon and nitrogen metabolite pools. Growth was sink limited in the annual but source limited in the perennial. RGR and carbon acquisition were higher in the annual, but photosynthesis responded weakly to elevated [CO2], indicating that source strength was near maximal at current [CO2]. In contrast, photosynthetic rate and sink development responded strongly to elevated [CO2] in the perennial, indicating significant source limitation. Sink limitation was avoided in the perennial by high sink plasticity: a marked increase in tillering and root:shoot ratio at elevated [CO2], and lower non-structural carbohydrate accumulation. Alleviating sink limitation during vegetative development could be important for maximizing growth of elite cereals under future elevated [CO2]. © 2016 John Wiley & Sons Ltd.

  3. Tomographical properties of uniformly redundant arrays

    International Nuclear Information System (INIS)

    Cannon, T.M.; Fenimore, E.E.

    1978-01-01

    Recent work in coded aperture imaging has shown that the uniformly redundant array (URA) can image distant planar radioactive sources with no artifacts. The performance of two URA apertures when used in a close-up tomographic imaging system is investigated. It is shown that a URA based on m sequences is superior to one based on quadratic residues. The m sequence array not only produces less obnoxious artifacts in tomographic imaging, but is also more resilient to some described detrimental effects of close-up imaging. It is shown that in spite of these close-up effects, tomographic depth resolution increases as the source is moved closer to the detector

  4. A super-high angular resolution principle for coded-mask X-ray imaging beyond the diffraction limit of a single pinhole

    International Nuclear Information System (INIS)

    Zhang Chen; Zhang Shuangnan

    2009-01-01

    High angular resolution X-ray imaging is always useful in astrophysics and solar physics. In principle, it can be performed by using coded-mask imaging with a very long mask-detector distance. Previously, the diffraction-interference effect was thought to degrade coded-mask imaging performance dramatically at the low energy end with its very long mask-detector distance. The diffraction-interference effect is described with numerical calculations, and the diffraction-interference cross correlation reconstruction method (DICC) is developed in order to overcome the imaging performance degradation. Based on the DICC, a super-high angular resolution principle (SHARP) for coded-mask X-ray imaging is proposed. The feasibility of coded-mask imaging beyond the diffraction limit of a single pinhole is demonstrated with simulations. With the specification that the mask element size is 50 × 50 μm² and the mask-detector distance is 50 m, the achieved angular resolution is 0.32 arcsec above about 10 keV and 0.36 arcsec at 1.24 keV (λ = 1 nm), where diffraction cannot be neglected. The on-axis source location accuracy is better than 0.02 arcsec. Potential applications for solar observations and wide-field X-ray monitors are also briefly discussed. (invited reviews)
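
    A quick geometric sanity check of the quoted numbers (a back-of-envelope sketch, not the paper's calculation): a 50 μm mask element viewed over a 50 m mask-detector distance subtends roughly 0.2 arcsec, the same order as the 0.32 arcsec resolution achieved once diffraction and reconstruction effects are included.

        import math

        element = 50e-6      # mask element size in metres
        distance = 50.0      # mask-detector distance in metres
        # Small-angle subtense converted to arcseconds.
        print(math.degrees(element / distance) * 3600)   # ~0.21 arcsec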

  5. The consequences of multiplexing and limited view angle in coded-aperture imaging

    International Nuclear Information System (INIS)

    Smith, W.E.; Barrett, H.H.; Paxman, R.G.

    1984-01-01

    Coded-aperture imaging (CAI) is a method for reconstructing distributions of radionuclide tracers that offers advantages over ECT and PET; namely, many views can be taken simultaneously without detector motion, and large numbers of photons are utilized since collimators are not required. However, because of this type of data acquisition, the coded image suffers from multiplexing; i.e., more than one object point may be mapped to each detector in the coded image. To investigate the dependence of the reconstruction on multiplexing, the authors reconstruct a simulated two-dimensional circular object from multiplexed one-dimensional coded-image data, then perform the reconstruction from un-multiplexed data. Each of these reconstructions is produced both from noise-free and noisy simulated data. To investigate the dependence on view angle, the authors reconstruct two simulated three-dimensional objects: a spherical phantom, and a series of point-like objects arranged nearly in a plane. Each of these reconstructions is from multiplexed two-dimensional coded-image data, first using two orthogonal views, and then a single viewing direction. The two-dimensional reconstructions demonstrate that, in the noise-free case, the multiplexing of the data does not seriously affect the reconstruction quality and that, in the noisy-data case, the multiplexing helps, due to the fact that more photons are collected. Also, for point-like objects confined to a near-planar region of space, the authors show that restricted views can give satisfactory results, but that, for a large, three-dimensional object, a more complete viewing geometry is required

  6. Upper limits on the total cosmic-ray luminosity of individual sources

    Energy Technology Data Exchange (ETDEWEB)

    Anjos, R.C.; De Souza, V. [Instituto de Física de São Carlos, Universidade de São Paulo, São Paulo (Brazil); Supanitsky, A.D., E-mail: rita@ifsc.usp.br, E-mail: vitor@ifsc.usp.br, E-mail: supanitsky@iafe.uba.ar [Instituto de Astronomía y Física del Espacio (IAFE), CONICET-UBA, Buenos Aires (Argentina)

    2014-07-01

    In this paper, upper limits on the total luminosity of ultra-high-energy cosmic rays (UHECR, E > 10^18 eV) are determined for five individual sources. The upper limit on the integral flux of GeV–TeV gamma-rays is used to extract the upper limit on the total UHECR luminosity of individual sources. The correlation between the upper limit on the integral GeV–TeV gamma-ray flux and the upper limit on the UHECR luminosity is established through the cascading process that takes place during propagation of the cosmic rays in the background radiation fields, as explained in reference [1]. Twenty-eight sources measured by the FERMI-LAT, VERITAS and MAGIC observatories have been studied. The measured upper limit on the GeV–TeV gamma-ray flux is restrictive enough to allow the calculation of an upper limit on the total UHECR luminosity of five sources. The upper limit on the UHECR luminosity of these sources is shown for several assumptions on the emission mechanism. For all studied sources an upper limit on the ultra-high-energy proton luminosity is also set.

  7. Upper limits on the total cosmic-ray luminosity of individual sources

    International Nuclear Information System (INIS)

    Anjos, R.C.; De Souza, V.; Supanitsky, A.D.

    2014-01-01

    In this paper, upper limits on the total luminosity of ultra-high-energy cosmic rays (UHECR, E > 10^18 eV) are determined for five individual sources. The upper limit on the integral flux of GeV–TeV gamma-rays is used to extract the upper limit on the total UHECR luminosity of individual sources. The correlation between the upper limit on the integral GeV–TeV gamma-ray flux and the upper limit on the UHECR luminosity is established through the cascading process that takes place during propagation of the cosmic rays in the background radiation fields, as explained in reference [1]. Twenty-eight sources measured by the FERMI-LAT, VERITAS and MAGIC observatories have been studied. The measured upper limit on the GeV–TeV gamma-ray flux is restrictive enough to allow the calculation of an upper limit on the total UHECR luminosity of five sources. The upper limit on the UHECR luminosity of these sources is shown for several assumptions on the emission mechanism. For all studied sources an upper limit on the ultra-high-energy proton luminosity is also set

  8. A satellite mobile communication system based on Band-Limited Quasi-Synchronous Code Division Multiple Access (BLQS-CDMA)

    Science.gov (United States)

    Degaudenzi, R.; Elia, C.; Viola, R.

    1990-01-01

    Discussed here is a new approach to code division multiple access applied to a mobile system for voice (and data) services, based on Band-Limited Quasi-Synchronous Code Division Multiple Access (BLQS-CDMA). The system requires users to be chip synchronized to reduce the contribution of self-interference and to make use of voice activation in order to increase the satellite power efficiency. In order to achieve spectral efficiency, Nyquist chip pulse shaping is used with no detection performance impairment. The synchronization problems are solved in the forward link by distributing a master code, whereas carrier forced activation and closed-loop control techniques have been adopted in the return link. System performance sensitivity to nonlinear amplification and timing/frequency synchronization errors is analyzed.

  9. Living Up to the Code's Exhortations? Social Workers' Political Knowledge Sources, Expectations, and Behaviors.

    Science.gov (United States)

    Felderhoff, Brandi Jean; Hoefer, Richard; Watson, Larry Dan

    2016-01-01

    The National Association of Social Workers' (NASW's) Code of Ethics urges social workers to engage in political action. However, little recent research has been conducted to examine whether social workers support this admonition and the extent to which they actually engage in politics. The authors gathered data from a survey of social workers in Austin, Texas, to address three questions. First, because keeping informed about government and political news is an important basis for action, the authors asked what sources of knowledge social workers use. Second, they asked what the respondents believe are appropriate political behaviors for other social workers and NASW. Third, they asked for self-reports regarding respondents' own political behaviors. Results indicate that social workers use the Internet and traditional media services to stay informed; expect other social workers and NASW to be active; and are, overall, more active than the general public in many types of political activities. The comparisons between expectations for others and respondents' own behaviors yield complex outcomes. Social workers should strive for higher levels of adherence to the code's urgings on political activity. Implications for future work are discussed.

  10. Effect of heat radiation in a Walter’s liquid B fluid over a stretching sheet with non-uniform heat source/sink and elastic deformation

    Directory of Open Access Journals (Sweden)

    A.K. Abdul Hakeem

    2014-07-01

    Full Text Available In this article, heat transfer in a Walter's liquid B fluid over an impermeable stretching sheet with non-uniform heat source/sink, elastic deformation and radiation is reported. The basic boundary layer equations for momentum and heat transfer, which are non-linear partial differential equations, are converted into non-linear ordinary differential equations by means of a similarity transformation. The dimensionless governing equations for this investigation are solved analytically using hypergeometric functions. The results are presented for prescribed surface temperature (PST) and prescribed power-law surface heat flux (PHF). The effects of viscous dissipation, Prandtl number, Eckert number, and the heat source/sink parameter with elastic deformation and radiation are shown in several plots and discussed.

  11. RIES - Rijnland Internet Election System: A Cursory Study of Published Source Code

    Science.gov (United States)

    Gonggrijp, Rop; Hengeveld, Willem-Jan; Hotting, Eelco; Schmidt, Sebastian; Weidemann, Frederik

    The Rijnland Internet Election System (RIES) is a system designed for voting in public elections over the internet. A rather cursory scan of the RIES source code showed a significant lack of security-awareness among the programmers which - among other things - appears to have left RIES vulnerable to near-trivial attacks. If it had not been for independent studies finding problems, RIES would have been used in the 2008 Water Board elections, possibly handling a million votes or more. While RIES was more extensively studied to find cryptographic shortcomings, our work shows that more down-to-earth secure design practices can be at least as important, and that these aspects need to be examined much sooner than right before an election.

  12. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the constraint of correct DSC decoding, which makes the proposed algorithm achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.
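
    The scalar quantization stage the abstract describes can be pictured with the uniform quantizer below (a minimal sketch; the paper's step-size selection is tied to the DSC decoding constraint, which is omitted here). A step of size q bounds the per-sample reconstruction error by q/2, which is what makes near-lossless operation possible.

        import numpy as np

        def uniform_quantize(x: np.ndarray, step: float) -> np.ndarray:
            """Mid-tread uniform scalar quantizer: round to the nearest level."""
            return np.round(x / step) * step

        rng = np.random.default_rng(0)
        band = rng.normal(size=1000)          # stand-in for one hyperspectral band
        step = 0.5                            # larger step: lower rate, more distortion
        rec = uniform_quantize(band, step)
        print(np.max(np.abs(band - rec)) <= step / 2)   # error bound holds: True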

  13. Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory

    Science.gov (United States)

    Rahimi, A.; Zhang, L.

    2012-12-01

    Rainfall-runoff analysis is the key component for many hydrological and hydraulic designs in which the dependence of rainfall and runoff needs to be studied. It is known that convenient bivariate distributions are often unable to model the rainfall-runoff variables because they either have constraints on the range of the dependence or a fixed form for the marginal distributions. Thus, this paper presents an approach to derive the entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The distribution derived can model the full range of dependence and allows different specified marginals. The modeling and estimation proceed as: (i) univariate analysis of marginal distributions, which includes two steps, (a) using a nonparametric statistical approach to detect modes and the underlying probability density, and (b) fitting appropriate parametric probability density functions; (ii) defining the constraints based on the univariate analysis and the dependence structure; (iii) deriving and validating the entropy-based joint distribution. To validate the method, rainfall-runoff data were collected from the small agricultural experimental watersheds located in a semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The results of univariate analysis show that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow the mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of the logarithm-transformed rainfall and runoff, and the covariance between rainfall and runoff. The results of the entropy-based joint distribution indicate: (1) the joint distribution derived successfully preserves the dependence between rainfall and runoff, and (2) the K-S goodness-of-fit statistical tests confirm that the re-derived marginal distributions reveal the underlying univariate probability densities
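
    Step (i) of the procedure, fitting a parametric marginal and measuring its Shannon entropy, can be sketched in a few lines of Python with SciPy (the synthetic data and parameter values are illustrative assumptions, not the Riesel measurements).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        rainfall = rng.gamma(shape=2.0, scale=10.0, size=500)   # synthetic stand-in

        # Fit a gamma density to the marginal sample (location pinned at zero).
        a, loc, scale = stats.gamma.fit(rainfall, floc=0.0)

        # Differential Shannon entropy of the fitted marginal, in nats.
        h = stats.gamma.entropy(a, loc=loc, scale=scale)
        print(f"shape={a:.2f}, scale={scale:.2f}, entropy={h:.3f} nats")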

  14. FREEDOM OF CONTRACT AND ITS LIMITATIONS IN THE ROMANIAN CIVIL CODE

    Directory of Open Access Journals (Sweden)

    EUGENIA VOICHECI

    2013-05-01

    Full Text Available This study aims to present the vision of the Romanian Civil Code regarding the freedom of contracting. The Romanian legislator has restated in terminis that the principle of contractual freedom is a fundament of conventions but has also established its restraints: the law, the public order and moral values. In order to attain the stated goal of this research, the effort was directed toward: presenting the freedom to contract as a principle of private law, evoking the autonomy of the will theory as a fundament for the freedom to contract, and systematically enunciating the competing theories and the decline of the actual autonomy of the will theory. The effort was also directed toward presenting the restraints of the freedom to contract, as they are stated in the Civil Code, and the different categories of contracts which are the consequence of those restraints.

  15. A highly efficient pricing method for European-style options based on Shannon wavelets

    NARCIS (Netherlands)

    L. Ortiz Gracia (Luis); C.W. Oosterlee (Cornelis)

    2017-01-01

    In the search for robust, accurate and highly efficient financial option valuation techniques, we present here the SWIFT method (Shannon Wavelets Inverse Fourier Technique), based on Shannon wavelets. SWIFT comes with control over approximation errors made by means of sharp quantitative error bounds.

  16. A Highly Efficient Shannon Wavelet Inverse Fourier Technique for Pricing European Options

    NARCIS (Netherlands)

    Ortiz-Gracia, Luis; Oosterlee, C.W.

    2016-01-01

    In the search for robust, accurate, and highly efficient financial option valuation techniques, we here present the SWIFT method (Shannon wavelets inverse Fourier technique), based on Shannon wavelets. SWIFT comes with control over approximation errors made by means of sharp quantitative error bounds.

  17. A highly efficient Shannon wavelet inverse Fourier technique for pricing European options

    NARCIS (Netherlands)

    L. Ortiz Gracia (Luis); C.W. Oosterlee (Cornelis)

    2016-01-01

    In the search for robust, accurate, and highly efficient financial option valuation techniques, we here present the SWIFT method (Shannon wavelets inverse Fourier technique), based on Shannon wavelets. SWIFT comes with control over approximation errors made by means of sharp quantitative error bounds.

  18. Pricing early-exercise and discrete barrier options by Shannon wavelet expansions

    NARCIS (Netherlands)

    Maree, S. C.; Ortiz-Gracia, L.; Oosterlee, C. W.

    2017-01-01

    We present a pricing method based on Shannon wavelet expansions for early-exercise and discretely-monitored barrier options under exponential Lévy asset dynamics. Shannon wavelets are smooth, and thus approximate the densities that occur in finance well, resulting in exponential convergence.
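
    The reason Shannon wavelets approximate smooth densities so well is the classical sinc expansion: a well-behaved function is recovered from its samples on a dyadic grid as f(x) ≈ Σ_k f(k/2^m) sinc(2^m x − k). The Python sketch below demonstrates the idea on a Gaussian, a toy stand-in for a transition density; the scale and truncation range are illustrative choices.

        import numpy as np

        def shannon_expansion(f, m, k_range, x):
            """Approximate f(x) by sum_k f(k / 2**m) * sinc(2**m * x - k)."""
            x = np.asarray(x, dtype=float)
            approx = np.zeros_like(x)
            for k in k_range:
                approx += f(k / 2**m) * np.sinc(2**m * x - k)  # np.sinc(t) = sin(pi t)/(pi t)
            return approx

        f = lambda x: np.exp(-x**2)                  # smooth, rapidly decaying target
        x = np.linspace(-3, 3, 7)
        err = np.abs(shannon_expansion(f, 3, range(-40, 41), x) - f(x))
        print(err.max())                             # tiny: convergence is exponential in m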

  19. Application of gel dosimetry - A preliminary study on verification of uniformity of activity and length of source used in Beta-Cath system

    International Nuclear Information System (INIS)

    Subramaniam, S.; Rabi Raja Singh, I.; Visalatchi, S.; Paul Ravindran, B.

    2002-01-01

    Recently the intraluminal irradiation of coronary arteries following balloon angioplasty has been found to reduce proliferation of smooth muscle cells and restenosis. Among the isotopes used for intracoronary irradiation, 90Sr/Y appears to be ideal (H I Almos et al, 1996). In 1984 Gore et al proposed that radiation-induced changes in the well-established Fricke solution could be probed with Nuclear Magnetic Resonance (NMR) relaxation measurements rather than with conventional spectrophotometry measurements. This was a major step in the development of gel dosimetry, and since then gel dosimetry has been one of the major advances in the dosimetry of complex radiation fields. In this preliminary work on gel dosimetry we present the verification of the uniformity of activity along the length of the source train and the verification of the length of the source used in the Beta-Cath system for intracoronary brachytherapy, with a ferrous gel dosimeter. The Beta-Cath system obtained from Novoste, Norcross, GA was used in this study. It consists of a source train of 16 90Sr/Y sources, each of length 2.5 mm. The total length of the source train is 40 mm. For preparation of the ferrous-gelatin gel, the recipe provided by the London Regional Cancer Center, London, Ontario, Canada was used. Stock solutions of 50 mM H2SO4, 0.3 mM ferrous ammonium sulphate and 0.05 mM xylenol orange were first prepared. The gel was prepared by mixing 4% gelatin with distilled water while stirring in a water bath at 40-42 deg. C. The acid solution, ferrous ammonium sulphate solution and xylenol orange were added and stirred in the water bath for about an hour to allow aeration. The mixture was poured into three 20 ml syringes to form the gel and stored in the refrigerator at 5 deg. C. For irradiation with Beta-Cath, the gel was prepared in three cylindrical 20 ml syringes. A nylon tube having the same dimensions as the delivery catheter used in intracoronary brachytherapy was placed

  20. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    Science.gov (United States)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects.

  1. CodeRAnts: A recommendation method based on collaborative searching and ant colonies, applied to reusing of open source code

    Directory of Open Access Journals (Sweden)

    Isaac Caicedo-Castro

    2014-01-01

    Full Text Available This paper presents CodeRAnts, a new recommendation method based on a collaborative searching technique and inspired by the ant colony metaphor. This method aims to fill the gap in the current state of the matter regarding recommender systems for software reuse, in which prior works present two problems. The first is that recommender systems based on these works cannot learn from the collaboration of programmers; the second is that outcomes of assessments carried out on these systems present low precision and recall measures, and in some of these systems these metrics have not been evaluated. The work presented in this paper contributes a recommendation method which solves these problems.

  2. Jensen–Shannon information of the coherent system lifetime

    International Nuclear Information System (INIS)

    Asadi, Majid; Ebrahimi, Nader; Soofi, Ehsan S.; Zohrevand, Younes

    2016-01-01

    The signature of a coherent system with n components is an n-dimensional vector whose ith element is the probability that the ith failure of the components is fatal to the system. The signature depends only on the system design and provides useful tools for comparison of systems. We propose the Jensen–Shannon information (JS) criterion for comparison of systems, which is a scalar function of the signature and ranks systems based on their designs. The JS of a system is interpreted in terms of the remaining uncertainty about the system lifetime, the utility of dependence between the lifetime and the number of failures of components fatal to the system, and the Bayesian decision theory. The JS is non-negative and its minimum is attained by k-out-of-n systems, which are the least complex systems. This property offers JS as a measure of complexity of a system. Effects of expansion of a system on JS are studied. Application examples include comparisons of various sets of new systems and used but still working systems discussed in the literature. We also give an upper bound for the JS at the general level and compare it with a known upper bound. - Highlights: • Information criteria for comparing systems based on their structures are proposed. • The criteria only depend on the number of failures of components fatal to the system. • The criteria rank systems based on the complexity of predicting their lifetimes. • The criteria apply to new systems and systems operating with failed components.
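
    For orientation, a generic Jensen–Shannon computation between two signature vectors can be done directly with SciPy (a sketch of the underlying divergence only; the paper's JS criterion is a specific functional of the signature, not simply a pairwise divergence).

        import numpy as np
        from scipy.spatial.distance import jensenshannon

        # Signatures: probability that the i-th component failure is fatal.
        series   = np.array([1.0, 0.0, 0.0])   # series system: first failure is fatal
        two_of_3 = np.array([0.0, 1.0, 0.0])   # 2-out-of-3: second failure is fatal

        # SciPy returns the JS distance; square it for the divergence (base 2: bits).
        print(jensenshannon(series, two_of_3, base=2) ** 2)   # 1.0 bit (disjoint supports)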

  3. Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    Science.gov (United States)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by Neutron Activation Analysis (NAA) of 197Au foils. Also, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. The theoretical simulation, together with our experimental work, is a new experience for Iranian researchers, establishing confidence in the code for further research. In our theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The variations of the fast and thermal neutron fluence rates, obtained by the NAA method and by the MCNP code, are compared.

  4. Calculation Of Fuel Burnup And Radionuclide Inventory In The Syrian Miniature Neutron Source Reactor Using The GETERA Code

    International Nuclear Information System (INIS)

    Khattab, K.; Dawahra, S.

    2011-01-01

    Calculations of the fuel burnup and radionuclide inventory in the Syrian Miniature Neutron Source Reactor (MNSR) after 10 years (the expected life of the reactor core) of reactor operation are presented in this paper using the GETERA code. The code is used to calculate the fuel group constants and the infinite multiplication factor versus the reactor operating time for 10, 20, and 30 kW operating power levels. The amounts of uranium burnt and plutonium produced in the reactor core, the concentrations of the most important fission-product and actinide radionuclides accumulated in the reactor core, and the total radioactivity of the reactor core were calculated using the GETERA code as well. It is found that the GETERA code is better suited than the WIMSD4 code for the fuel burnup calculation in the MNSR since it is newer, has a larger isotope library, and is more accurate. (author)

  5. Soft-Decision-Data Reshuffle to Mitigate Pulsed Radio Frequency Interference Impact on Low-Density-Parity-Check Code Performance

    Science.gov (United States)

    Ni, Jianjun David

    2011-01-01

    This presentation briefly discusses a research effort on techniques for mitigating the impact of pulsed radio frequency interference (RFI) on a Low-Density-Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to a space vehicle which might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is one of the modern forward-error-correction (FEC) codes whose decoding performance approaches the Shannon limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS), which has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code for the additive white Gaussian noise channel, simulation data and test results show that the performance of the LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance in codeword error rate (CWER) under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor in LDPC decoding performance appears around CWER = 1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.
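
    One plausible form of such a reshuffle, sketched below in Python, is to erase the soft decisions received while an RFI pulse is flagged, so the decoder treats those samples as uninformative rather than as confident wrong bits (an illustrative scheme with assumed names, not necessarily the scheme in the presentation).

        import numpy as np

        def erase_rfi_llrs(llrs: np.ndarray, rfi_mask: np.ndarray) -> np.ndarray:
            """Zero the log-likelihood ratios of samples flagged as RFI-corrupted."""
            out = llrs.copy()
            out[rfi_mask] = 0.0        # LLR = 0 means 'no information' to the decoder
            return out

        rng = np.random.default_rng(2)
        llrs = rng.normal(loc=4.0, scale=2.0, size=16)   # channel LLRs for one block
        mask = np.zeros(16, dtype=bool)
        mask[5:9] = True                                 # samples hit by a radar pulse
        print(erase_rfi_llrs(llrs, mask))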

  6. Design and Optimisation Strategies of Nonlinear Dynamics for Diffraction Limited Synchrotron Light Source

    CERN Document Server

    Bartolini, R.

    2016-01-01

    This paper introduces the most recent achievements in the control of nonlinear dynamics in electron synchrotron light sources, with special attention to diffraction limited storage rings. Guidelines for the design and optimization of the magnetic lattice are reviewed and discussed.

  7. A proposed metamodel for the implementation of object oriented software through the automatic generation of source code

    Directory of Open Access Journals (Sweden)

    CARVALHO, J. S. C.

    2008-12-01

    Full Text Available During the development of software, one of the most visible risks and perhaps the biggest implementation obstacle relates to time management. All delivery deadlines for software versions must be met, but this is not always possible, sometimes due to delays in coding. This paper presents a metamodel for software implementation, which will give rise to a development tool for automatic generation of source code, in order to make any development pattern transparent to the programmer, significantly reducing the time spent coding the artifacts that make up the software.

  8. Limits of Brazil's Forest Code as a means to end illegal deforestation.

    Science.gov (United States)

    Azevedo, Andrea A; Rajão, Raoni; Costa, Marcelo A; Stabile, Marcelo C C; Macedo, Marcia N; Dos Reis, Tiago N P; Alencar, Ane; Soares-Filho, Britaldo S; Pacheco, Rayane

    2017-07-18

    The 2012 Brazilian Forest Code governs the fate of forests and savannas on Brazil's 394 Mha of privately owned lands. The government claims that a new national land registry (SICAR), introduced under the revised law, could end illegal deforestation by greatly reducing the cost of monitoring, enforcement, and compliance. This study evaluates that potential, using data from state-level land registries (CAR) in Pará and Mato Grosso that were precursors of SICAR. Using geospatial analyses and stakeholder interviews, we quantify the impact of CAR on deforestation and forest restoration, investigating how landowners adjust their behaviors over time. Our results indicate rapid adoption of CAR, with registered properties covering a total of 57 Mha by 2013. This suggests that the financial incentives to join CAR currently exceed the costs. Registered properties initially showed lower deforestation rates than unregistered ones, but these differences varied by property size and diminished over time. Moreover, only 6% of registered producers reported taking steps to restore illegally cleared areas on their properties. Our results suggest that, from the landowner's perspective, full compliance with the Forest Code offers few economic benefits. Achieving zero illegal deforestation in this context would require the private sector to include full compliance as a market criterion, while state and federal governments develop SICAR as a de facto enforcement mechanism. These results are relevant to other tropical countries and underscore the importance of developing a policy mix that creates lasting incentives for sustainable land-use practices.

  9. Are gas exchange responses to resource limitation and defoliation linked to source:sink relationships?

    Science.gov (United States)

    Pinkard, E A; Eyles, A; O'Grady, A P

    2011-10-01

    Productivity of trees can be affected by limitations in resources such as water and nutrients, and herbivory. However, there is little understanding of their interactive effects on carbon uptake and growth. We hypothesized that: (1) in the absence of defoliation, photosynthetic rate and leaf respiration would be governed by limiting resource(s) and their impact on sink limitation; (2) photosynthetic responses to defoliation would be a consequence of changing source:sink relationships and increased availability of limiting resources; and (3) photosynthesis and leaf respiration would be adjusted in response to limiting resources and defoliation so that growth could be maintained. We tested these hypotheses by examining how leaf photosynthetic processes, respiration, carbohydrate concentrations and growth rates of Eucalyptus globulus were influenced by high or low water and nitrogen (N) availability, and/or defoliation. Photosynthesis of saplings grown with low water was primarily sink limited, whereas photosynthetic responses of saplings grown with low N were suggestive of source limitation. Defoliation resulted in source limitation. Net photosynthetic responses to defoliation were linked to the degree of resource availability, with the largest responses measured in treatments where saplings were ultimately source rather than sink limited. There was good evidence of acclimation to stress, enabling higher rates of C uptake than might otherwise have occurred. © 2011 Blackwell Publishing Ltd.

  10. Achievable Rates of Cognitive Radio Networks Using Multi-Layer Coding with Limited CSI

    KAUST Repository

    Sboui, Lokman; Rezki, Zouheir; Alouini, Mohamed-Slim

    2016-01-01

    In a Cognitive Radio (CR) framework, the channel state information (CSI) feedback to the secondary transmitter (SU Tx) can be limited or unavailable. Thus, a statistical model is adopted in order to determine the system performance.

  11. Confusion-limited extragalactic source survey at 4.755 GHz. I. Source list and areal distributions

    International Nuclear Information System (INIS)

    Ledden, J.E.; Broderick, J.J.; Condon, J.J.; Brown, R.L.

    1980-01-01

    A confusion-limited 4.755-GHz survey covering 0.00956 sr between right ascensions 07h05m and 18h near declination +35° has been made with the NRAO 91-m telescope. The survey found 237 sources and is complete above 15 mJy. Source counts between 15 and 100 mJy were obtained directly. The P(D) distribution was used to determine the number counts between 0.5 and 13.2 mJy, to search for anisotropy in the density of faint extragalactic sources, and to set a 99%-confidence upper limit of 1.83 mK to the rms temperature fluctuation of the 2.7-K cosmic microwave background on angular scales smaller than 7.3 arcmin. The discrete-source density, normalized to the static Euclidean slope, falls off sufficiently rapidly below 100 mJy that no new population of faint flat-spectrum sources is required to explain the 4.755-GHz source counts

  12. Uncertainty analysis methods for quantification of source terms using a large computer code

    International Nuclear Information System (INIS)

    Han, Seok Jung

    1997-02-01

    Quantification of uncertainties in source term estimations by a large computer code, such as MELCOR or MAAP, is an essential process in current probabilistic safety assessments (PSAs). The main objectives of the present study are (1) to investigate the applicability of a combined procedure of the response surface method (RSM), based on input determined from a statistical design, and the Latin hypercube sampling (LHS) technique for the uncertainty analysis of CsI release fractions under a hypothetical severe accident sequence of a station blackout at the Young-Gwang nuclear power plant, using the MAAP3.0B code as a benchmark problem; and (2) to propose a new measure of uncertainty importance based on distributional sensitivity analysis. On the basis of the results obtained in the present work, the RSM is recommended as a principal tool for an overall uncertainty analysis in source term quantifications, while the LHS is used in the calculations of standardized regression coefficients (SRC) and standardized rank regression coefficients (SRRC) to determine the subset of the most important input parameters in the final screening step and to check the cumulative distribution functions (cdfs) obtained by the RSM. Verification of the response surface model for sufficient accuracy is a prerequisite for the reliability of the final results obtained by the combined procedure proposed in the present work. In the present study a new measure has been developed that utilizes the metric distance obtained from cumulative distribution functions (cdfs). The measure has been evaluated for three different cases of distributions in order to assess its characteristics: in the first two cases the distributions are known analytical distributions, while in the third the distribution is unknown. The first case is given by symmetric analytical distributions; the second consists of two asymmetric distributions whose skewness is nonzero
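
    The LHS step is easy to reproduce with modern SciPy (a minimal sketch; the dimension count and parameter ranges below are placeholders, not the MAAP study's inputs).

        import numpy as np
        from scipy.stats import qmc

        sampler = qmc.LatinHypercube(d=3, seed=0)
        unit = sampler.random(n=50)                   # stratified points in [0, 1)^3
        lower, upper = [0.1, 300.0, 1.0], [0.9, 600.0, 5.0]
        design = qmc.scale(unit, lower, upper)        # map to physical input ranges
        print(design[:3])                             # first three code runs to launch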

  13. Dual solutions for unsteady mixed convection flow of MHD micropolar fluid over a stretching/shrinking sheet with non-uniform heat source/sink

    Directory of Open Access Journals (Sweden)

    N. Sandeep

    2015-12-01

    Full Text Available The aim of the present study is to investigate the influence of a non-uniform heat source/sink, mass transfer and chemical reaction on an unsteady mixed convection boundary layer flow of a magneto-micropolar fluid past a stretching/shrinking sheet in the presence of viscous dissipation and suction/injection. The governing equations of the flow, heat and mass transfer are transformed into a system of nonlinear ordinary differential equations by using a similarity transformation and then solved numerically using a shooting technique with the Matlab package. The influence of the non-dimensional governing parameters on the velocity, microrotation, temperature and concentration profiles is discussed and presented with the help of graphical representations. Also, the friction factor and the heat and mass transfer rates have been computed and presented in tables. Under some special conditions, the present results are compared with existing results to check the accuracy and validity of the present study. An excellent agreement with the existing results is observed.

  14. Radiation effects on the mixed convection flow induced by an inclined stretching cylinder with non-uniform heat source/sink.

    Science.gov (United States)

    Hayat, Tasawar; Qayyum, Sajid; Alsaedi, Ahmed; Asghar, Saleem

    2017-01-01

    This study investigates the mixed convection flow of a Jeffrey liquid by an impermeable inclined stretching cylinder. Thermal radiation and a non-uniform heat source/sink are considered. Convective boundary conditions at the surface are imposed. The nonlinear expressions for momentum, energy and concentration are transformed into dimensionless systems. Convergent homotopic solutions of the governing systems are worked out by employing the homotopic procedure. The impact of physical variables on the velocity, temperature and concentration distributions is sketched and discussed. Numerical computations for the skin friction coefficient and the local Nusselt and Sherwood numbers are carried out. It is concluded that the velocity field is enhanced for larger Deborah numbers, while the reverse holds for the ratio of relaxation to retardation times. Temperature and heat transfer rate are enhanced via a larger thermal Biot number. The effect of the Schmidt number on the concentration and the local Sherwood number is quite the reverse.

  15. 40 CFR Table 1 to Subpart Xxxx of... - Emission Limits for Tire Production Affected Sources

    Science.gov (United States)

    2010-07-01

    40 CFR Part 63, Subpart XXXX, Table 1—Emission Limits for Tire Production Affected Sources (Protection of Environment, 2010-07-01).

  16. 40 CFR Table 3 to Subpart Xxxx of... - Emission Limits for Puncture Sealant Application Affected Sources

    Science.gov (United States)

    2010-07-01

    40 CFR Part 63, Subpart XXXX, Table 3—Emission Limits for Puncture Sealant Application Affected Sources (Protection of Environment, 2010-07-01).

  17. 40 CFR Table 2 to Subpart Xxxx of... - Emission Limits for Tire Cord Production Affected Sources

    Science.gov (United States)

    2010-07-01

    40 CFR Part 63, Subpart XXXX, Table 2—Emission Limits for Tire Cord Production Affected Sources (Protection of Environment, 2010-07-01).

  18. Limitations of Phased Array Beamforming in Open Rotor Noise Source Imaging

    Science.gov (United States)

    Horvath, Csaba; Envia, Edmane; Podboy, Gary G.

    2013-01-01

    Phased array beamforming results of the F31/A31 historical baseline counter-rotating open rotor blade set were investigated for measurement data taken on the NASA Counter-Rotating Open Rotor Propulsion Rig in the 9- by 15-Foot Low-Speed Wind Tunnel of NASA Glenn Research Center as well as data produced using the LINPROP open rotor tone noise code. The planar microphone array was positioned broadside and parallel to the axis of the open rotor, roughly 2.3 rotor diameters away. The results provide insight as to why the apparent noise sources of the blade passing frequency tones and interaction tones appear at their nominal Mach radii instead of at the actual noise sources, even if those locations are not on the blades. Contour maps corresponding to the sound fields produced by the radiating sound waves, taken from the simulations, are used to illustrate how the interaction patterns of circumferential spinning modes of rotating coherent noise sources interact with the phased array, often giving misleading results, as the apparent sources do not always show where the actual noise sources are located. This suggests that a more sophisticated source model would be required to accurately locate the sources of each tone. The results of this study also have implications with regard to the shielding of open rotor sources by airframe empennages.

  19. Commencement of the Couette flow in the Oldroyd liquid with heat sources and in the presence of a uniform transverse magnetic field

    International Nuclear Information System (INIS)

    Biswal, S.; Pattnaik, B.K.

    1996-01-01

    The commencement of Couette flow in an Oldroyd liquid has been studied in the presence of a uniform transverse magnetic field with heat sources/sinks. Constitutive equations of motion and energy have been formulated and solved with the aid of the Galerkin technique. Expressions for velocity, temperature, skin friction and rates of heat transfer are obtained. The values of velocity, temperature, shear stresses at the lower and upper plates, and the rates of heat transfer at the plates have been evaluated numerically in Fortran. The results are shown in graphs and tables for different values of the various parameters R, Rc, Pm, t, n, Pr, E and S. Velocity and temperature distributions are shown in graphs, while the values of shear stresses and Nusselt numbers at the plates are entered in tables. It is observed that the flow is sensitive to the interactions of the heat source/sink, the elasticity of the fluid and the imposed magnetic field strength. The amount of heat energy propagated during this process of non-Newtonian flow varies appreciably with R, S and Pr. The heat-absorbing sink or the heat-generating source influences the temperature field to a great extent. (author)

  20. Comparison of TG‐43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes

    Science.gov (United States)

    Zaker, Neda; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S.

    2016-01-01

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code — MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes. PACS number(s): 87.56.bg PMID:27074460

  1. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    Science.gov (United States)

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-08

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.
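
    For readers reproducing such comparisons, the TG-43 radial dose function can be evaluated from simulated dose rates as sketched below (the line-source length and dose values are placeholders, not results from either paper).

        import numpy as np

        def g_line(r, L):
            """Line-source geometry function G_L(r, theta0) at theta0 = 90 deg."""
            return 2.0 * np.arctan(L / (2.0 * r)) / (L * r)

        def radial_dose_function(dose_rate, r, L, r0=1.0):
            """g_L(r) = [D(r) / D(r0)] * [G_L(r0) / G_L(r)], equal to 1 at r0."""
            d0 = np.interp(r0, r, dose_rate)
            return (dose_rate / d0) * (g_line(r0, L) / g_line(r, L))

        r = np.array([0.5, 1.0, 2.0, 6.0])            # radii in cm
        d_a = np.array([4.1, 1.0, 0.24, 0.025])       # code A relative dose rates
        d_b = np.array([4.0, 1.0, 0.25, 0.030])       # code B relative dose rates
        g_a = radial_dose_function(d_a, r, L=0.35)    # L = 0.35 cm: assumed seed length
        g_b = radial_dose_function(d_b, r, L=0.35)
        print(100 * np.abs(g_a - g_b) / g_b)          # percent discrepancy in g_L(r)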

  2. HELIOS: An Open-source, GPU-accelerated Radiative Transfer Code for Self-consistent Exoplanetary Atmospheres

    Science.gov (United States)

    Malik, Matej; Grosheintz, Luc; Mendonça, João M.; Grimm, Simon L.; Lavie, Baptiste; Kitzmann, Daniel; Tsai, Shang-Min; Burrows, Adam; Kreidberg, Laura; Bedell, Megan; Bean, Jacob L.; Stevenson, Kevin B.; Heng, Kevin

    2017-02-01

    We present the open-source radiative transfer code named HELIOS, which is constructed for studying exoplanetary atmospheres. In its initial version, the model atmospheres of HELIOS are one-dimensional and plane-parallel, and the equation of radiative transfer is solved in the two-stream approximation with nonisotropic scattering. A small set of the main infrared absorbers is employed, computed with the opacity calculator HELIOS-K and combined using a correlated-k approximation. The molecular abundances originate from validated analytical formulae for equilibrium chemistry. We compare HELIOS with the work of Miller-Ricci & Fortney using a model of GJ 1214b, and perform several tests, where we find: model atmospheres with single-temperature layers struggle to converge to radiative equilibrium; k-distribution tables constructed with ≳ 0.01 cm⁻¹ resolution in the opacity function (≲ 10³ points per wavenumber bin) may result in errors ≳ 1%-10% in the synthetic spectra; and a diffusivity factor of 2 approximates well the exact radiative transfer solution in the limit of pure absorption. We construct “null-hypothesis” models (chemical equilibrium, radiative equilibrium, and solar elemental abundances) for six hot Jupiters. We find that the dayside emission spectra of HD 189733b and WASP-43b are consistent with the null hypothesis, while the null-hypothesis models consistently underpredict the observed fluxes of WASP-8b, WASP-12b, WASP-14b, and WASP-33b. We demonstrate that our results are somewhat insensitive to the choice of stellar models (blackbody, Kurucz, or PHOENIX) and metallicity, but are strongly affected by higher carbon-to-oxygen ratios. The code is publicly available as part of the Exoclimes Simulation Platform (exoclime.net).

  3. Proof of Concept Coded Aperture Miniature Mass Spectrometer Using a Cycloidal Sector Mass Analyzer, a Carbon Nanotube (CNT) Field Emission Electron Ionization Source, and an Array Detector

    Science.gov (United States)

    Amsden, Jason J.; Herr, Philip J.; Landry, David M. W.; Kim, William; Vyas, Raul; Parker, Charles B.; Kirley, Matthew P.; Keil, Adam D.; Gilchrist, Kristin H.; Radauscher, Erich J.; Hall, Stephen D.; Carlson, James B.; Baldasaro, Nicholas; Stokes, David; Di Dona, Shane T.; Russell, Zachary E.; Grego, Sonia; Edwards, Steven J.; Sperline, Roger P.; Denton, M. Bonner; Stoner, Brian R.; Gehm, Michael E.; Glass, Jeffrey T.

    2018-02-01

    Despite many potential applications, miniature mass spectrometers have had limited adoption in the field due to the tradeoff between throughput and resolution that limits their performance relative to laboratory instruments. Recently, a solution to this tradeoff has been demonstrated by using spatially coded apertures in magnetic sector mass spectrometers, enabling throughput and signal-to-background improvements of greater than an order of magnitude with no loss of resolution. This paper describes a proof of concept demonstration of a cycloidal coded aperture miniature mass spectrometer (C-CAMMS) demonstrating use of spatially coded apertures in a cycloidal sector mass analyzer for the first time. C-CAMMS also incorporates a miniature carbon nanotube (CNT) field emission electron ionization source and a capacitive transimpedance amplifier (CTIA) ion array detector. Results confirm the cycloidal mass analyzer's compatibility with aperture coding. A >10× increase in throughput was achieved without loss of resolution compared with a single slit instrument. Several areas where additional improvement can be realized are identified.

  4. ARC: An open-source library for calculating properties of alkali Rydberg atoms

    Science.gov (United States)

    Šibalić, N.; Pritchard, J. D.; Adams, C. S.; Weatherill, K. J.

    2017-11-01

    We present an object-oriented Python library for the computation of properties of highly-excited Rydberg states of alkali atoms. These include single-body effects such as dipole matrix elements, excited-state lifetimes (radiative and black-body limited) and Stark maps of atoms in external electric fields, as well as two-atom interaction potentials accounting for dipole and quadrupole coupling effects valid at both long and short range for arbitrary placement of the atomic dipoles. The package is cross-referenced to precise measurements of atomic energy levels and features extensive documentation to facilitate rapid upgrade or expansion by users. This library has direct application in the field of quantum information and quantum optics which exploit the strong Rydberg dipolar interactions for two-qubit gates, robust atom-light interfaces and simulating quantum many-body physics, as well as the field of metrology using Rydberg atoms as precise microwave electrometers.
    Program Files doi: http://dx.doi.org/10.17632/hm5n8w628c.1
    Licensing provisions: BSD-3-Clause
    Programming language: Python 2.7 or 3.5, with C extension
    External routines: NumPy [1], SciPy [1], Matplotlib [2]
    Nature of problem: Calculating atomic properties of alkali atoms including lifetimes, energies, Stark shifts and dipole-dipole interaction strengths using matrix elements evaluated from radial wavefunctions.
    Solution method: Numerical integration of the radial Schrödinger equation to obtain atomic wavefunctions, which are then used to evaluate dipole matrix elements. Properties are calculated using second-order perturbation theory or exact diagonalisation of the interaction Hamiltonian, yielding results valid even at large external fields or small interatomic separation.
    Restrictions: External electric field fixed to be parallel to quantisation axis.
    Supplementary material: Detailed documentation (.html), and Jupyter notebook with examples and benchmarking runs (.html and .ipynb).
    [1] T.E. Oliphant
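
    The solution method named above (integrate the radial equation, then use the wavefunctions) can be sketched with a Numerov integration of a hydrogenic potential; this illustrates the technique only, not ARC's model potential or API:

        import numpy as np

        # Inward Numerov integration of u'' = f(r) u for a hydrogenic state,
        # with f = l*(l+1)/r^2 + 2*(V - E), V = -1/r (atomic units).
        n_state, l = 10, 0
        E = -0.5 / n_state**2
        r = np.linspace(1e-3, 400.0, 40000)
        h = r[1] - r[0]
        f = l * (l + 1) / r**2 + 2.0 * (-1.0 / r - E)
        w = 1.0 - h**2 * f / 12.0

        u = np.zeros_like(r)
        u[-2] = 1e-10                        # tiny tail starts the inward sweep
        for i in range(len(r) - 2, 0, -1):
            u[i - 1] = ((12.0 - 10.0 * w[i]) * u[i] - w[i + 1] * u[i + 1]) / w[i - 1]

        u /= np.sqrt(h * np.sum(u**2))       # normalize on the grid
        print(h * np.sum(r * u**2))          # <r> ~ 1.5 n^2 = 150 a.u. for l = 0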

  5. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ - supplementary report

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, Jr, D E; Pleasant, J C; Killough, G G

    1980-05-01

    The purpose of this report is to describe revisions in the SFACTOR computer code and to provide useful documentation for that program. The SFACTOR computer code has been developed to implement current methodologies for computing the average dose equivalent rate S(X ← Y) to specified target organs in man due to 1 μCi of a given radionuclide uniformly distributed in designated source organs. The SFACTOR methodology is largely based upon that of Snyder; however, it has been expanded to include components of S from alpha and spontaneous fission decay, in addition to electron and photon radiations. With this methodology, S-factors can be computed for any radionuclide for which decay data are available. The tabulations in Appendix II provide a reference compilation of S-factors for several dosimetrically important radionuclides which are not available elsewhere in the literature. These S-factors are calculated for an adult with characteristics similar to those of the International Commission on Radiological Protection's Reference Man. Corrections to tabulations from Dunning are presented in Appendix III, based upon the methods described in Section 2.3. 10 refs.
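
    The quantity being tabulated follows the familiar MIRD-style bookkeeping: S(X ← Y) sums, over radiation types, the mean energy emitted per decay times the fraction of that energy absorbed in the target, divided by the target mass. A minimal sketch (the numbers below are placeholders for illustration, not values from the report):

        # S(X <- Y) = sum_i Delta_i * phi_i(X <- Y) / m_X
        #   Delta_i : mean energy emitted per decay for radiation type i
        #             (g-rad per uCi-h in the classical MIRD units)
        #   phi_i   : fraction of that energy absorbed in target organ X
        #   m_X     : target organ mass in grams
        emissions = [           # (Delta_i, phi_i) -- placeholder values
            (0.30, 1.00),       # beta: absorbed locally (source = target)
            (0.25, 0.12),       # gamma: only partly absorbed in the target
        ]
        target_mass_g = 310.0   # illustrative Reference Man organ mass
        s = sum(delta * phi for delta, phi in emissions) / target_mass_g
        print(f"S = {s:.2e} rad/(uCi-h)")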

  6. Beat Noise Cancellation in 2-D Optical Code-Division Multiple-Access Systems Using Optical Hard-Limiter Array

    Science.gov (United States)

    Dang, Ngoc T.; Pham, Anh T.; Cheng, Zixue

    We analyze beat noise cancellation in two-dimensional optical code-division multiple-access (2-D OCDMA) systems using an optical hard-limiter (OHL) array. A Gaussian optical pulse shape is assumed and the impact of pulse propagation is considered. We also take into account the receiver noise and multiple access interference (MAI) in the analysis. The numerical results show that, when the OHL array is employed, the system performance is greatly improved compared with the case without the OHL array. Also, parameters needed for practical system design are comprehensively analyzed.

  7. Source coherence impairments in a direct detection direct sequence optical code-division multiple-access system.

    Science.gov (United States)

    Fsaifes, Ihsan; Lepers, Catherine; Lourdiane, Mounia; Gallion, Philippe; Beugin, Vincent; Guignard, Philippe

    2007-02-01

    We demonstrate that direct sequence optical code-division multiple-access (DS-OCDMA) encoders and decoders using sampled fiber Bragg gratings (S-FBGs) behave as multipath interferometers. In that case, chip pulses of the prime sequence codes, generated by spreading coherent data pulses in time, can result from multiple reflections in the interferometers that can superimpose within a chip time duration. We show that the autocorrelation function has to be considered as the sum of complex amplitudes of the combined chip when the laser source coherence time is much greater than the integration time of the photodetector. To reduce the sensitivity of the DS-OCDMA system to the coherence time of the laser source, we analyze the use of sparse and nonperiodic quadratic congruence and extended quadratic congruence codes.

  9. Gaze strategies can reveal the impact of source code features on the cognitive load of novice programmers

    DEFF Research Database (Denmark)

    Wulff-Jensen, Andreas; Ruder, Kevin Vignola; Triantafyllou, Evangelia

    2018-01-01

    As shown by several studies, programmers’ readability of source code is influenced by its structural and textual features. In order to assess the importance of these features, we conducted an eye-tracking experiment with programming students. To assess the readability and comprehensibility of...

  10. Use of WIMS-E lattice code for prediction of the transuranic source term for spent fuel dose estimation

    International Nuclear Information System (INIS)

    Schwinkendorf, K.N.

    1996-01-01

    A recent source term analysis has shown a discrepancy between ORIGEN2 transuranic isotopic production estimates and those produced with the WIMS-E lattice physics code. Excellent agreement between relevant experimental measurements and WIMS-E was shown, thus exposing an error in the cross section library used by ORIGEN2

  11. Effect of ceramic membrane channel geometry and uniform transmembrane pressure on limiting flux and serum protein removal during skim milk microfiltration.

    Science.gov (United States)

    Adams, Michael C; Hurt, Emily E; Barbano, David M

    2015-11-01

    Our objectives were to determine the effects of a ceramic microfiltration (MF) membrane's retentate flow channel geometry (round or diamond-shaped) and uniform transmembrane pressure (UTP) on limiting flux (LF) and serum protein (SP) removal during skim milk MF at a temperature of 50°C, a retentate protein concentration of 8.5%, and an average cross-flow velocity of 7 m·s⁻¹. Performance of membranes with round and diamond flow channels was compared in UTP mode. Performance of the membrane with round flow channels was compared with and without UTP. Using UTP with round flow channel MF membranes increased the LF by 5% when compared with not using UTP, but SP removal was not affected by the use of UTP. Using membranes with round channels instead of diamond-shaped channels in UTP mode increased the LF by 24%. This increase was associated with a 25% increase in Reynolds number and can be explained by lower shear at the vertices of the diamond-shaped channel's surface. The SP removal factor of the diamond channel system was higher than the SP removal factor of the round channel system below the LF. However, the diamond channel system passed more casein into the MF permeate than the round channel system. Because only one batch of each membrane was tested in our study, it was not possible to determine if the differences in protein rejection between channel geometries were due to the membrane design or random manufacturing variation. Despite the lower LF of the diamond channel system, the 47% increase in membrane module surface area of the diamond channel system produced a modular permeate removal rate that was at least 19% higher than the round channel system. Consequently, using diamond channel membranes instead of round channel membranes could reduce some of the costs associated with ceramic MF of skim milk if fewer membrane modules could be used to attain the required membrane area.

  12. Effect of heat transfer on unsteady MHD flow of blood in a permeable vessel in the presence of non-uniform heat source

    Directory of Open Access Journals (Sweden)

    A. Sinha

    2016-09-01

    This paper presents a theoretical analysis of blood flow and heat transfer in a permeable vessel in the presence of an external magnetic field. The unsteadiness in the coupled flow and temperature fields is considered to be caused by the time-dependent stretching velocity and the surface temperature of the vessel. The non-uniform heat source/sink effect on blood flow and heat transfer is taken into account. This study is of potential value in the clinical treatment of cardiovascular disorders accompanied by accelerated circulation. The problem is treated mathematically by reducing it to a system of coupled nonlinear differential equations, which have been solved by using a similarity transformation and the boundary layer approximation. The resulting nonlinear coupled ordinary differential equations are solved numerically by using an implicit finite difference scheme. Computational results are obtained for the velocity, temperature, the skin-friction coefficient and the rate of heat transfer in the vessel. The estimated results are compared with another analytical study reported earlier in the scientific literature. The present study reveals that the heat transfer rate is enhanced as the value of the unsteadiness parameter increases, but it reduces as the space-dependence parameter for the heat source/sink increases.
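
    The similarity-equation route described above can be sketched on the classical Crane stretching-sheet problem (the paper's magnetic, permeability, and heat-source terms are omitted here, and scipy's collocation solver stands in for the implicit finite difference scheme):

        import numpy as np
        from scipy.integrate import solve_bvp

        # Crane's problem: f''' + f f'' - f'^2 = 0 with f(0) = 0, f'(0) = 1,
        # f'(inf) = 0; the known analytic solution is f(eta) = 1 - exp(-eta).
        def rhs(eta, y):                 # y = [f, f', f'']
            f, fp, fpp = y
            return np.vstack([fp, fpp, fp**2 - f * fpp])

        def bc(y0, yinf):
            return np.array([y0[0], y0[1] - 1.0, yinf[1]])

        eta = np.linspace(0.0, 10.0, 101)
        guess = np.vstack([1 - np.exp(-eta), np.exp(-eta), -np.exp(-eta)])
        sol = solve_bvp(rhs, bc, eta, guess)
        print(sol.status, abs(sol.y[0, -1] - (1 - np.exp(-10.0))))  # ~0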

  13. Nonlinear pre-coding apparatus of multi-antenna system, has pre-coding unit that extends original constellation points of modulated symbols to several constellation points by using limited perturbation vector

    DEFF Research Database (Denmark)

    2008-01-01

    Coding/modulating units (200-1-200-N) output modulated symbols by modulating coded bit streams based on a certain modulation scheme. The limited perturbation vector is calculated by using the distribution of perturbation vectors. The original constellation points of the modulated symbols are extended to several constellation points by using the limited perturbation vector.

  14. Delaunay Tetrahedralization of the Heart Based on Integration of Open Source Codes

    International Nuclear Information System (INIS)

    Pavarino, E; Neves, L A; Machado, J M; Momente, J C; Zafalon, G F D; Pinto, A R; Valêncio, C R; Godoy, M F de; Shiyou, Y; Nascimento, M Z do

    2014-01-01

    The Finite Element Method (FEM) is a numerical solution technique applied in many areas, such as the simulations used in studies to improve cardiac ablation procedures. For this purpose, the meshes should have the same size and histological features as the structures of interest. Some methods and tools used to generate tetrahedral meshes are limited mainly by their conditions of use. In this paper, the integration of open-source software is presented as an alternative for solid modeling and automatic mesh generation. To demonstrate its efficiency, cardiac structures were considered as a first application context: atria, ventricles, valves, arteries and pericardium. The proposed method is feasible for obtaining refined meshes in an acceptable time and with the required quality for simulations using FEM

  15. A Novel Code System for Revealing Sources of Students' Difficulties with Stoichiometry

    Science.gov (United States)

    Gulacar, Ozcan; Overton, Tina L.; Bowman, Charles R.; Fynewever, Herb

    2013-01-01

    A coding scheme is presented and used to evaluate the solutions of seventeen students working on twenty-five stoichiometry problems in a think-aloud protocol. The stoichiometry problems are evaluated as a series of sub-problems (e.g., empirical formulas, mass percent, or balancing chemical equations), and the coding scheme was used to categorize each…

  16. VULCAN: An Open-source, Validated Chemical Kinetics Python Code for Exoplanetary Atmospheres

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Shang-Min; Grosheintz, Luc; Kitzmann, Daniel; Heng, Kevin [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Lyons, James R. [Arizona State University, School of Earth and Space Exploration, Bateman Physical Sciences, Tempe, AZ 85287-1404 (United States); Rimmer, Paul B., E-mail: shang-min.tsai@space.unibe.ch, E-mail: kevin.heng@csh.unibe.ch, E-mail: jimlyons@asu.edu [University of St. Andrews, School of Physics and Astronomy, St. Andrews, KY16 9SS (United Kingdom)

    2017-02-01

    We present an open-source and validated chemical kinetics code for studying hot exoplanetary atmospheres, which we name VULCAN. It is constructed for gaseous chemistry from 500 to 2500 K, using a reduced C–H–O chemical network with about 300 reactions. It uses eddy diffusion to mimic atmospheric dynamics and excludes photochemistry. We have provided a full description of the rate coefficients and thermodynamic data used. We validate VULCAN by reproducing chemical equilibrium and by comparing its output versus the disequilibrium-chemistry calculations of Moses et al. and Rimmer and Helling. It reproduces the models of HD 189733b and HD 209458b by Moses et al., which employ a network with nearly 1600 reactions. We also use VULCAN to examine the theoretical trends produced when the temperature–pressure profile and carbon-to-oxygen ratio are varied. Assisted by a sensitivity test designed to identify the key reactions responsible for producing a specific molecule, we revisit the quenching approximation and find that it is accurate for methane but breaks down for acetylene, because the disequilibrium abundance of acetylene is not directly determined by transport-induced quenching, but is rather indirectly controlled by the disequilibrium abundance of methane. Therefore we suggest that the quenching approximation should be used with caution and must always be checked against a chemical kinetics calculation. A one-dimensional model atmosphere with 100 layers, computed using VULCAN, typically takes several minutes to complete. VULCAN is part of the Exoclimes Simulation Platform (ESP; exoclime.net) and publicly available at https://github.com/exoclime/VULCAN.
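
    The disequilibrium chemistry VULCAN integrates is a stiff system of rate equations; the quenching behaviour discussed above emerges when chemical relaxation becomes slower than mixing. A toy sketch of the stiff-kinetics core (a single reversible reaction with an implicit integrator; the rate constants are illustrative, not entries from VULCAN's C–H–O network):

        import numpy as np
        from scipy.integrate import solve_ivp

        kf, kr = 1.0e3, 1.0          # forward/reverse rate constants (illustrative)

        def rhs(t, y):               # A <-> B under mass-action kinetics
            a, b = y
            r = kf * a - kr * b      # net forward rate
            return [-r, r]

        sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], method="BDF", rtol=1e-8)
        a_eq = kr / (kf + kr)        # analytic chemical-equilibrium abundance of A
        print(sol.y[0, -1], a_eq)    # the stiff integrator relaxes to equilibrium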

  17. Monitoring of zebra mussels in the Shannon-Boyle navigation, other

    OpenAIRE

    Minchin, D.; Lucy, F.; Sullivan, M.

    2002-01-01

    The zebra mussel (Dreissena polymorpha) population has been closely monitored in Ireland following its discovery in 1997. The species has spread from lower Lough Derg, where it was first introduced, to most of the navigable areas of the Shannon and other interconnected navigable waters. This study took place in the summers of 2000 and 2001 and investigated the relative abundance and biomass of zebra mussels found in the main navigations of the Shannon and elsewhere in rivers, canals and lakes...

  18. Limits on the space density of gamma-ray burst sources

    International Nuclear Information System (INIS)

    Epstein, R.I.

    1985-01-01

    Gamma-ray burst spectra which extend to several MeV without significant steepening indicate that there is negligible degradation due to two-photon pair production. The inferred low rate of photon-photon reactions is used to give upper limits to the distances to the sources and to the intensity of the radiation from the sources. These limits are calculated under the assumptions that the bursters are neutron stars which emit uncollimated gamma rays. The principal results are that the space density of the gamma-ray burst sources exceeds ≈10⁻⁶ pc⁻³ if the entire surface of the neutron star radiates and exceeds ≈10⁻³ pc⁻³ if only a small cap or thin strip in the stellar surface radiates. In the former case the density of gamma-ray bursters is ≈1% of the inferred density of extinct pulsars, and in the latter case the mean mass density of burster sources is a few percent of the density of unidentified dark matter in the solar neighborhood. In both cases the X-ray intensity of the sources is far below the Rayleigh-Jeans limit, and the total flux is at most comparable to the Eddington limit. This implies that low-energy self-absorption near 10 keV is entirely negligible and that radiation-driven explosions are just barely possible

  19. Code of Conduct on the Safety and Security of Radioactive Sources and the Supplementary Guidance on the Import and Export of Radioactive Sources

    International Nuclear Information System (INIS)

    2005-01-01

    In operative paragraph 4 of its resolution GC(47)/RES/7.B, the General Conference, having welcomed the approval by the Board of Governors of the revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources (GC(47)/9), and while recognizing that the Code is not a legally binding instrument, urged each State to write to the Director General that it fully supports and endorses the IAEA's efforts to enhance the safety and security of radioactive sources and is working toward following the guidance contained in the IAEA Code of Conduct. In operative paragraph 5, the Director General was requested to compile, maintain and publish a list of States that have made such a political commitment. The General Conference, in operative paragraph 6, recognized that this procedure 'is an exceptional one, having no legal force and only intended for information, and therefore does not constitute a precedent applicable to other Codes of Conduct of the Agency or of other bodies belonging to the United Nations system'. In operative paragraph 7 of resolution GC(48)/RES/10.D, the General Conference welcomed the fact that more than 60 States had made political commitments with respect to the Code in line with resolution GC(47)/RES/7.B and encouraged other States to do so. In operative paragraph 8 of resolution GC(48)/RES/10.D, the General Conference further welcomed the approval by the Board of Governors of the Supplementary Guidance on the Import and Export of Radioactive Sources (GC(48)/13), endorsed this Guidance while recognizing that it is not legally binding, noted that more than 30 countries had made clear their intention to work towards effective import and export controls by 31 December 2005, and encouraged States to act in accordance with the Guidance on a harmonized basis and to notify the Director General of their intention to do so as supplementary information to the Code of Conduct, recalling operative paragraph 6 of resolution GC(47)/RES/7.B.

  20. Limit of detection of a fiber optics gyroscope using a super luminescent radiation source

    International Nuclear Information System (INIS)

    Sandoval R, G.E.; Nikolaev, V.A.

    2003-01-01

    The main objective of this work is to establish the dependence of the characteristics of the fiber optics gyroscope (FOG) on the parameters of a superluminescent emission source based on optical fiber doped with rare earth elements (Superluminescent Fiber Source, SFS), and to justify the choice of the SFS pumping rate that yields the limiting sensitivity characteristics of the FOG. When this type of emission source is used in the FOG, it is recommended to operate in the regime where the direction of the pumping signal coincides with that of the superluminescent signal. The main results are the proposal and justification of the SFS as the emission source for a phase-type FOG. This choice improves the sensitivity characteristics of the FOG in comparison with the semiconductor luminescent sources that are widely used at present. An SFS-type emission source allows the sensitivity threshold (detection limit), which is determined by the shot noise, to be approached. (Author)

  2. Far-Field Superresolution of Thermal Electromagnetic Sources at the Quantum Limit.

    Science.gov (United States)

    Nair, Ranjith; Tsang, Mankei

    2016-11-04

    We obtain the ultimate quantum limit for estimating the transverse separation of two thermal point sources using a given imaging system with limited spatial bandwidth. We show via the quantum Cramér-Rao bound that, contrary to the Rayleigh limit in conventional direct imaging, quantum mechanics does not mandate any loss of precision in estimating even deep sub-Rayleigh separations. We propose two coherent measurement techniques, easily implementable using current linear-optics technology, that approach the quantum limit over an arbitrarily large range of separations. Our bound is valid for arbitrary source strengths, all regions of the electromagnetic spectrum, and for any imaging system with an inversion-symmetric point-spread function. The measurement schemes can be applied to microscopy, optical sensing, and astrometry at all wavelengths.

  3. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    International Nuclear Information System (INIS)

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C
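
    The core of the model named above is the straight-line Gaussian plume, whose concentration field is analytic; deposition, plume rise, daughter in-growth, and sector averaging are layered on top. A minimal sketch of the plume kernel (the dispersion parameters here are illustrative constants; in practice σy and σz grow with downwind distance and stability class):

        import numpy as np

        def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
            """Gaussian plume from a continuous point source.

            q: emission rate, u: wind speed at release height,
            h: effective release height; the (z + h) image term
            represents total reflection of the plume at the ground.
            """
            lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
            vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                        + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
            return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # Ground-level centerline value with illustrative sigmas for ~1 km downwind.
        print(plume_concentration(q=1.0, u=5.0, y=0.0, z=0.0,
                                  h=50.0, sigma_y=80.0, sigma_z=40.0))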

  5. Calibrate the aerial surveying instrument by the limited surface source and the single point source that replace the unlimited surface source

    CERN Document Server

    Lu Cun Heng

    1999-01-01

    It is described how the calculation formula and survey results are obtained on the basis of the superposition principle of gamma rays and the features of a hexagonal surface source, when a limited surface source replaces the unlimited surface source to calibrate the aerial survey instrument on the ground, and how, in the light of the reciprocity principle of gamma rays, a single point source replaces the unlimited surface source to calibrate the aerial surveying instrument in the air. Meanwhile, through theoretical analysis, the receiving rate of the crystal bottom and side surfaces is calculated for the case when the aerial surveying instrument receives gamma rays. The mathematical expression of the gamma-ray decay with height, following the Jinge function regularity, is obtained. According to this regularity, the absorption coefficient of air for gamma rays and the detection efficiency coefficient of the crystal are calculated based on the ground and air measured values of the bottom surface receiving cou...

  6. 40 CFR 63.5985 - What are my alternatives for meeting the emission limits for tire production affected sources?

    Science.gov (United States)

    2010-07-01

    ... the emission limits for tire production affected sources? 63.5985 Section 63.5985 Protection of... Pollutants: Rubber Tire Manufacturing Emission Limits for Tire Production Affected Sources § 63.5985 What are my alternatives for meeting the emission limits for tire production affected sources? You must use...

  7. Sand Fly Fauna (Diptera, Psychodidae, Phlebotominae) in Different Leishmaniasis-Endemic Areas of Ecuador, Surveyed Using a Newly Named Mini-Shannon Trap

    Science.gov (United States)

    Hashiguchi, Kazue; Velez N., Lenin; Kato, Hirotomo; Criollo F., Hipatia; Romero A., Daniel; Gomez L., Eduardo; Martini R., Luiggi; Zambrano C., Flavio; Calvopina H., Manuel; Caceres G., Abraham; Hashiguchi, Yoshihisa

    2014-01-01

    To study the sand fly fauna, surveys were performed at four different leishmaniasis-endemic sites in Ecuador from February 2013 to April 2014. A modified and simplified version of the conventional Shannon trap was named “mini-Shannon trap” and put to multiple uses at the different study sites in limited, forested and narrow spaces. The mini-Shannon, CDC light trap and protected human landing method were employed for sand fly collection. The species identification of sand flies was performed mainly based on the morphology of spermathecae and cibarium, after dissection of fresh samples. In this study, therefore, only female samples were used for analysis. A total of 1,480 female sand flies belonging to 25 Lutzomyia species were collected. The number of female sand flies collected was 417 (28.2%) using the mini-Shannon trap, 259 (17.5%) using the CDC light trap and 804 (54.3%) by human landing. The total number of sand flies per trap collected by the different methods was markedly affected by the study site, probably because of the various composition of species at each locality. Furthermore, as an additional study, the attraction of sand flies to mini-Shannon traps powered with LED white-light and LED black-light was investigated preliminarily, together with the CDC light trap and human landing. As a result, a total of 426 sand flies of nine Lutzomyia species, including seven man-biting and two non-biting species, were collected during three capture trials in May and June 2014 in an area endemic for leishmaniasis (La Ventura). The black-light proved relatively superior to the white-light with regard to capture numbers, but no significant statistical difference was observed between the two traps. PMID:25589880

  8. SU-D-210-03: Limited-View Multi-Source Quantitative Photoacoustic Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Feng, J; Gao, H [Shanghai Jiao Tong University, Shanghai, Shanghai (China)

    2015-06-15

    Purpose: This work investigates a novel limited-view multi-source acquisition scheme for the direct and simultaneous reconstruction of optical coefficients in quantitative photoacoustic tomography (QPAT), with potentially improved signal-to-noise ratio and reduced data acquisition time. Methods: Conventional QPAT is often considered in two steps: first reconstruct the initial acoustic pressure from the full-view ultrasonic data after each optical illumination, and then quantitatively reconstruct optical coefficients (e.g., absorption and scattering coefficients) from the initial acoustic pressure, using a multi-source or multi-wavelength scheme. Based on the novel limited-view multi-source scheme proposed here, we have to consider the direct reconstruction of optical coefficients from the ultrasonic data, since the initial acoustic pressure can no longer be reconstructed as an intermediate variable due to the incomplete acoustic data in the proposed limited-view scheme. In this work, based on a coupled photoacoustic forward model combining the diffusion approximation and the wave equation, we develop a limited-memory quasi-Newton method (L-BFGS) for image reconstruction that utilizes the adjoint forward problem for fast computation of gradients. Furthermore, tensor framelet sparsity is utilized to improve the image reconstruction, which is solved by the Alternating Direction Method of Multipliers (ADMM). Results: The simulation was performed on a modified Shepp-Logan phantom to validate the feasibility of the proposed limited-view scheme and its corresponding image reconstruction algorithms. Conclusion: A limited-view multi-source QPAT scheme is proposed, i.e., partial-view acoustic data acquisition accompanying each optical illumination, followed by simultaneous rotations of both optical sources and ultrasonic detectors for the next optical illumination. Moreover, L-BFGS and ADMM algorithms are developed for the direct reconstruction of optical coefficients from the
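
    The reconstruction machinery named in the conclusion (a quasi-Newton solver fed by adjoint gradients) can be sketched on a toy linear inverse problem; scipy's L-BFGS-B stands in for the authors' solver, and the random matrix below is a stand-in for the coupled photoacoustic forward model:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        A = rng.standard_normal((40, 20))       # stand-in forward operator
        x_true = np.zeros(20)
        x_true[[3, 11]] = 1.0
        d = A @ x_true                          # synthetic, noise-free data

        def objective(x):
            """Return 0.5*||Ax - d||^2 and its adjoint gradient A^T (Ax - d)."""
            r = A @ x - d
            return 0.5 * r @ r, A.T @ r

        res = minimize(objective, np.zeros(20), jac=True, method="L-BFGS-B")
        print(np.round(res.x[[3, 11]], 3))      # recovers the two unit entries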

  9. A new qualitative acoustic emission parameter based on Shannon's entropy for damage monitoring

    Science.gov (United States)

    Chai, Mengyu; Zhang, Zaoxiao; Duan, Quan

    2018-02-01

    An important objective of acoustic emission (AE) non-destructive monitoring is to accurately identify approaching critical damage and to avoid premature failure by means of the evolution of AE parameters. One major drawback of most parameters, such as count and rise time, is that they are strongly dependent on the threshold and other settings employed in the AE data acquisition system. This may hinder the correct reflection of the original waveform generated from AE sources and consequently make it difficult to accurately identify critical damage and early failure. In this investigation, a new qualitative AE parameter based on Shannon's entropy, i.e. AE entropy, is proposed for damage monitoring. Since it derives from the uncertainty of the amplitude distribution of each AE waveform, it is independent of the threshold and other time-driven parameters and can characterize the original micro-structural deformations. A fatigue crack growth test on CrMoV steel and a three-point bending test on a ductile material are conducted to validate the feasibility and effectiveness of the proposed parameter. The results show that the new parameter, compared to AE amplitude, is more effective in discriminating the different damage stages and identifying the critical damage.
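
    The parameter itself is the Shannon entropy of each waveform's amplitude distribution. A minimal sketch of that computation (the bin count, normalization, and synthetic signals are choices assumed here, not the paper's exact settings):

        import numpy as np

        def ae_entropy(waveform, bins=64):
            """Shannon entropy (bits) of a waveform's amplitude distribution."""
            counts, _ = np.histogram(waveform, bins=bins)
            p = counts / counts.sum()
            p = p[p > 0]                    # empty bins contribute 0 * log 0 = 0
            return -np.sum(p * np.log2(p))

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 2048)
        background = rng.normal(0.0, 0.05, t.size)
        hit = background + np.exp(-6.0 * t) * np.sin(2 * np.pi * 40.0 * t)
        print(ae_entropy(background), ae_entropy(hit))  # the hit shifts the entropy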

  10. The development of criteria for limiting the non-uniform irradiation of skin: the rationale for a study of non-stochastic effects

    International Nuclear Information System (INIS)

    Wells, J.; Charles, M.W.

    1979-06-01

    Recent recommendations of the ICRP (1977) provide little guidance for the treatment of non-uniform skin exposures such as those which may occur as the result of contamination with radioactive particulates. This lack of guidance is due to a paucity of data regarding biological effects in this area. A rationale is presented for the study of the early (non-stochastic) effects of non-uniform skin irradiation. As a basis for the presentation of this rationale a framework is provided by a resume of basic biology of the skin and a review of previous experimental work in this field. Animal experiments, which are being carried out in collaboration with specialist university groups, are described both in terms of broad concept and experimental detail. The aim is to provide biological data which can provide guidance in radiological protection situations. (author)

  11. Evaluation of the methodology for dose calculation in microdosimetry with electron sources using the MCNP5 code

    International Nuclear Information System (INIS)

    Cintra, Felipe Belonsi de

    2010-01-01

    This study made a comparison between some of the major transport codes that employ the Monte Carlo stochastic approach in dosimetric calculations in nuclear medicine. We analyzed in detail the various physical and numerical models used by the MCNP5 code in relation to codes like EGS and Penelope. Its potential and limitations for solving microdosimetry problems were highlighted. The condensed history methodology used by MCNP resulted in lower values for the energy deposition calculation. This showed a known feature of condensed histories: they underestimate both the number of collisions along the trajectory of the electron and the number of secondary particles created. The use of transport codes like MCNP and Penelope at micrometer scales received special attention in this work. Class I and class II codes were studied and their main resources were exploited in order to transport electrons, which have particular importance in dosimetry. It is expected that the evaluation of the methodologies mentioned here contributes to a better understanding of the behavior of these codes, especially for this class of problems, common in microdosimetry. (author)

  12. Universal restrictions to the conversion of heat into work derived from the analysis of the Nernst theorem as a uniform limit

    International Nuclear Information System (INIS)

    Martin-Olalla, Jose Maria; Luna, Alfredo Rey de

    2003-01-01

    We revisit the relationship between the Nernst theorem and the Kelvin-Planck statement of the second law. We propose that the exchange of entropy uniformly vanishes as the temperature goes to zero. The analysis of this assumption shows that it is equivalent to the fact that the compensation of a Carnot engine scales with the absorbed heat, so that the Nernst theorem should be embedded in the statement of the second law
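
    The scaling statement can be written in one line: for a Carnot cycle between reservoirs at T_h and T_c, the compensation (the heat rejected to the cold reservoir) is proportional to the absorbed heat, a standard ideal-cycle identity:

        Q_c \;=\; \frac{T_c}{T_h}\, Q_h,
        \qquad\text{so}\qquad
        \frac{Q_c}{Q_h} \;=\; \frac{T_c}{T_h} \;\longrightarrow\; 0
        \quad (T_c \to 0)\ \text{uniformly in } Q_h.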

  13. ''Anomalous'' air showers from point sources: Mass limits and light curves

    International Nuclear Information System (INIS)

    Domokos, G.; Elliott, B.; Kovesi-Domokos, S.

    1993-01-01

    We describe a method to obtain upper limits on the mass of the primaries of air showers associated with point sources. One also obtains the UHE pulse shape of a pulsar if its period is observed in the signal. As an example, we analyze the data obtained during a recent burst of Hercules-X1

  14. A Novel Approach of Using Ground CNTs as the Carbon Source to Fabricate Uniformly Distributed Nano-Sized TiCx/2009Al Composites.

    Science.gov (United States)

    Wang, Lei; Qiu, Feng; Ouyang, Licheng; Wang, Huiyuan; Zha, Min; Shu, Shili; Zhao, Qinglong; Jiang, Qichuan

    2015-12-17

    Nano-sized TiCx/2009Al composites (with 5, 7, and 9 vol% TiCx) were fabricated via the combustion synthesis of the 2009Al-Ti-CNTs system combined with vacuum hot pressing followed by hot extrusion. In the present study, CNTs were used as the carbon source to synthesize nano-sized TiCx particles. An attempt was made to correlate the effect of grinding CNTs by milling with the distribution of the synthesized nano-sized TiCx particles in 2009Al, as well as with the tensile properties of the nano-sized TiCx/2009Al composites. Microstructure analysis showed that when ground CNTs were used, the synthesized nano-sized TiCx particles dispersed more uniformly in the 2009Al matrix. Moreover, when 2 h-milled CNTs were used, the 5, 7, and 9 vol% nano-sized TiCx/2009Al composites had the highest tensile properties, especially the 9 vol% composites. The results offer a new approach to improve the distribution of in situ nano-sized TiCx particles and the tensile properties of the composites.

  15. Analysis of source term aspects in the experiment Phebus FPT1 with the MELCOR and CFX codes

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Fuertes, F. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)]. E-mail: francisco.martinfuertes@upm.es; Barbero, R. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Martin-Valdepenas, J.M. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Jimenez, M.A. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)

    2007-03-15

    Several aspects related to the source term in the Phebus FPT1 experiment have been analyzed with the help of MELCOR 1.8.5 and CFX 5.7 codes. Integral aspects covering circuit thermalhydraulics, fission product and structural material release, vapours and aerosol retention in the circuit and containment were studied with MELCOR, and the strong and weak points after comparison to experimental results are stated. Then, sensitivity calculations dealing with chemical speciation upon release, vertical line aerosol deposition and steam generator aerosol deposition were performed. Finally, detailed calculations concerning aerosol deposition in the steam generator tube are presented. They were obtained by means of an in-house code application, named COCOA, as well as with CFX computational fluid dynamics code, in which several models for aerosol deposition were implemented and tested, while the models themselves are discussed.

  16. BLT [Breach, Leach, and Transport]: A source term computer code for low-level waste shallow land burial

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1990-01-01

    This paper discusses the development of a source term model for low-level waste shallow land burial facilities and separates the problem into four individual compartments. These are water flow, corrosion and subsequent breaching of containers, leaching of the waste forms, and solute transport. For the first and the last compartments, we adopted the existing codes, FEMWATER and FEMWASTE, respectively. We wrote two new modules for the other two compartments in the form of two separate Fortran subroutines -- BREACH and LEACH. They were incorporated into a modified version of the transport code FEMWASTE. The resultant code, which contains all three modules of container breaching, waste form leaching, and solute transport, was renamed BLT (for Breach, Leach, and Transport). This paper summarizes the overall program structure and logistics, and presents two examples from the results of verification and sensitivity tests. 6 refs., 7 figs., 1 tab

  17. The use of CFD code for numerical simulation study on the air/water countercurrent flow limitation in nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Morghi, Youssef; Mesquita, Amir Zacarias; Santos, Andre Augusto Campagnole dos; Vasconcelos, Victor, E-mail: ymo@cdtn.br, E-mail: amir@cdtn.br, E-mail: aacs@cdtn.br, E-mail: vitors@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2015-07-01

    For the experimental study of air/water countercurrent flow limitation (CCFL) in nuclear reactors, acrylic test sections with the same geometric shape as the 'hot leg' of a Pressurized Water Reactor (PWR) were built at CDTN. The hydraulic circuit is designed to be used with air and water at pressures near atmospheric and at ambient temperature. Due to the complexity of the CCFL experiments, numerical simulation has been used; the aim of the numerical simulations is the validation of the experimental data. It is a global trend to use computational fluid dynamics (CFD) for the modeling and prediction of physical phenomena related to heat transfer in nuclear reactors. The most used CFD codes are FLUENT®, STAR-CD®, OpenFOAM® and CFX®. In CFD, closure models are required that must be validated, especially if they are to be applied to nuclear reactor safety. The Thermal-Hydraulics Laboratory of CDTN offers the computing infrastructure and license to use the commercial code CFX®. This article reviews CCFL and the use of CFD for the numerical simulation of this phenomenon in nuclear reactors. (author)

  19. Spatial resolution limits for the localization of noise sources using direct sound mapping

    DEFF Research Database (Denmark)

    Comesana, D. Fernandez; Holland, K. R.; Fernandez Grande, Efren

    2016-01-01

    One of the main challenges arising from noise and vibration problems is how to identify the areas of a device, machine or structure that produce significant acoustic excitation, i.e. the localization of main noise sources. The direct visualization of sound, in particular sound intensity, has extensively been used for many years to locate sound sources. However, it is not yet well defined when two sources should be regarded as resolved by means of direct sound mapping. This paper derives the limits of the direct representation of sound pressure, particle velocity and sound intensity by exploring the relationship between spatial resolution, noise level and geometry. The proposed expressions are validated via simulations and experiments. It is shown that particle velocity mapping yields better results for identifying closely spaced sound sources than sound pressure or sound intensity, especially...

  20. Limits to source counts and cosmic microwave background fluctuations at 10.6 GHz

    International Nuclear Information System (INIS)

    Seielstad, G.A.; Masson, C.R.; Berge, G.L.

    1981-01-01

    We have determined the distribution of deflections due to sky temperature fluctuations at 10.6 GHz. If all the deflections are due to fine structure in the cosmic microwave background, we limit these fluctuations to ΔT/T ≲ 10⁻⁴ on an angular scale of 11 arcmin. If, on the other hand, all the deflections are due to confusion among discrete radio sources, the areal density of these sources is calculated for various slopes of the differential source count relationship and for various cutoff flux densities. If, for example, the slope is 2.1 and the cutoff is 10 mJy, we find (0.25-3.3)×10⁶ sources sr⁻¹ Jy⁻¹

  1. Synthesis of Directional Sources Using Wave Field Synthesis, Possibilities, and Limitations

    Directory of Open Access Journals (Sweden)

    Corteel E

    2007-01-01

    The synthesis of directional sources using wave field synthesis (WFS) is described. The proposed formulation relies on an ensemble of elementary directivity functions based on a subset of spherical harmonics. These can be combined to create and manipulate the directivity characteristics of the synthesized virtual sources. The WFS formulation introduces artifacts in the synthesized sound field for both ideal and real loudspeakers. These artifacts can be partly compensated for using dedicated equalization techniques. A multichannel equalization technique is shown to provide accurate results, thus enabling the manipulation of directional sources with limited reconstruction artifacts. Applications of directional sources to the control of the direct sound field and the interaction with the listening room are discussed.
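
    The combination of elementary directivity functions can be sketched in 2-D with circular harmonics: a weighted sum of a monopole and first-order terms already gives a steerable, cardioid-like virtual source (the weights below are illustrative, not the paper's formulation):

        import numpy as np

        theta = np.linspace(0.0, 2.0 * np.pi, 361)
        # Elementary directivities: zeroth- and first-order circular harmonics.
        monopole = np.ones_like(theta)
        dipole_x, dipole_y = np.cos(theta), np.sin(theta)

        # Weighted combination -> cardioid-like pattern aimed along +x.
        pattern = 0.5 * monopole + 0.5 * dipole_x + 0.0 * dipole_y
        main_lobe = np.degrees(theta[np.argmax(np.abs(pattern))])
        print(f"main lobe at {main_lobe:.0f} degrees")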

  2. Use of CITATION code for flux calculation in neutron activation analysis with voluminous sample using an Am-Be source

    International Nuclear Information System (INIS)

    Khelifi, R.; Idiri, Z.; Bode, P.

    2002-01-01

    The CITATION code, based on neutron diffusion theory, was used for flux calculations inside voluminous samples in prompt gamma activation analysis with an isotopic neutron source (Am-Be). The code uses specific parameters related to the source energy spectrum and to the irradiation system materials (shielding, reflector). The flux distribution (thermal and fast) was calculated in three-dimensional geometry for the system: air, polyethylene and a cuboidal water sample (50x50x50 cm). The thermal flux was calculated at a series of points inside the sample. The results agreed reasonably well with observed values: the maximum thermal flux was observed at a depth of 3.2 cm, while CITATION gave 3.7 cm. Beyond a depth of 7.2 cm, the thermal-to-fast flux ratio increases by up to a factor of two, which allows us to optimise the position of the detection system in the scope of in-situ PGAA

  3. Recycling source terms for edge plasma fluid models and impact on convergence behaviour of the BRAAMS 'B2' code

    International Nuclear Information System (INIS)

    Maddison, G.P.; Reiter, D.

    1994-02-01

    Predictive simulations of tokamak edge plasmas require the most authentic description of neutral particle recycling sources, not merely the most expedient numerically. Employing a prototypical ITER divertor arrangement under conditions of high recycling, trial calculations with the 'B2' steady-state edge plasma transport code, plus varying approximations of recycling, reveal marked sensitivity of both results and convergence behaviour to the details of the sources incorporated. Comprehensive EIRENE Monte Carlo resolution of recycling is implemented by full and so-called 'shot' intermediate cycles between the plasma fluid and statistical neutral particle models. As generally for coupled differencing and stochastic procedures, though, overall convergence properties become more difficult to assess. A pragmatic criterion for the 'B2'/EIRENE code system is proposed to determine its success, proceeding from a stricter condition previously identified for one particular analytic approximation of recycling in 'B2'. Certain procedures are also inferred potentially to improve convergence further. (orig.)

  4. Transmission from theory to practice: Experiences using open-source code development and a virtual short course to increase the adoption of new theoretical approaches

    Science.gov (United States)

    Harman, C. J.

    2015-12-01

    Even amongst the academic community, new theoretical tools can remain underutilized due to the investment of time and resources required to understand and implement them. This surely limits the frequency that new theory is rigorously tested against data by scientists outside the group that developed it, and limits the impact that new tools could have on the advancement of science. Reducing the barriers to adoption through online education and open-source code can bridge the gap between theory and data, forging new collaborations, and advancing science. A pilot venture aimed at increasing the adoption of a new theory of time-variable transit time distributions was begun in July 2015 as a collaboration between Johns Hopkins University and The Consortium of Universities for the Advancement of Hydrologic Science (CUAHSI). There were four main components to the venture: a public online seminar covering the theory, an open source code repository, a virtual short course designed to help participants apply the theory to their data, and an online forum to maintain discussion and build a community of users. 18 participants were selected for the non-public components based on their responses in an application, and were asked to fill out a course evaluation at the end of the short course, and again several months later. These evaluations, along with participation in the forum and on-going contact with the organizer suggest strengths and weaknesses in this combination of components to assist participants in adopting new tools.

  5. Introduction to quantum groups

    International Nuclear Information System (INIS)

    Sudbery, A.

    1996-01-01

    These pedagogical lectures contain some motivation for the study of quantum groups; a definition of ''quasi triangular Hopf algebra'' with explanations of all the concepts required to build it up; descriptions of quantised universal enveloping algebras and the quantum double; and an account of quantised function algebras and the action of quantum groups on quantum spaces. (author)

  6. African Journal of Science and Technology (AJST) SUPERVISED ...

    African Journals Online (AJOL)

    NORBERT OPIYO AKECH

    Keywords: color image, Kohonen, LVQ, classification, K-means. INTRODUCTION. In this paper the problem of color image quantisation is discussed. Color quantisation consists of two steps: template design, in which a reduced number of template colors (typically 8-256) is specified, and pixel mapping in which each color ...
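
    The two steps named in the excerpt map directly onto K-means: the cluster centroids are the template colors, and assigning each pixel to its nearest centroid is the pixel mapping. A minimal NumPy sketch (random stand-in pixels; the palette size and iteration count are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)
        pixels = rng.integers(0, 256, (4096, 3)).astype(float)  # stand-in image
        k = 16                                                  # template size

        # Template design: Lloyd/K-means iterations refine the template colors.
        centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
        for _ in range(20):
            dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)       # pixel mapping: nearest template
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = pixels[labels == j].mean(axis=0)

        quantised = centers[labels]             # image re-expressed in 16 colors
        print(quantised.shape, len(np.unique(labels)))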

  7. Quantum X waves with orbital angular momentum in nonlinear dispersive media

    Science.gov (United States)

    Ornigotti, Marco; Conti, Claudio; Szameit, Alexander

    2018-06-01

    We present a complete and consistent quantum theory of generalised X waves with orbital angular momentum in dispersive media. We show that the resulting quantised light pulses are affected by neither dispersion nor diffraction and are therefore resilient against external perturbations. The nonlinear interaction of quantised X waves in quadratic and Kerr nonlinear media is also presented and studied in detail.

  8. EchoSeed Model 6733 Iodine-125 brachytherapy source: Improved dosimetric characterization using the MCNP5 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)

    2012-08-15

    This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 ¹²⁵I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte-Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published ¹²⁵I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water material in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGy h⁻¹ U⁻¹ (±1.73%) and 0.965 cGy h⁻¹ U⁻¹ (±1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within ±4%) than MCNP4c2 and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.
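
    The reported parameters combine in the TG-43 formalism; on the transverse axis with a point-source geometry function, the dose rate reduces to a one-line product of air-kerma strength, dose rate constant, an inverse-square factor and the radial dose function. A sketch with placeholder data (the g(r) samples below are illustrative, not the paper's tabulated values; Λ is the MCNP5 value quoted above):

        import numpy as np

        S_k = 1.0                 # air-kerma strength (U)
        LAMBDA = 0.965            # dose rate constant from the MCNP5 run (cGy/h/U)
        r0 = 1.0                  # TG-43 reference distance (cm)
        r = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
        g = np.array([1.05, 1.00, 0.84, 0.67, 0.40])   # placeholder g(r) samples

        # Transverse-axis dose rate with a point-source geometry function (1/r^2):
        dose_rate = S_k * LAMBDA * (r0 / r) ** 2 * g
        for ri, di in zip(r, dose_rate):
            print(f"r = {ri:.1f} cm: {di:.3f} cGy/h")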

  9. Beamspace fast fully adaptive brain source localization for limited data sequences

    International Nuclear Information System (INIS)

    Ravan, Maryam

    2017-01-01

    In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization methods that rely on estimating second order statistics often fail when the observations are taken over a short time interval, especially when the number of electrodes is large. To address this issue, in a previous study we developed a multistage adaptive processing scheme called the fast fully adaptive (FFA) approach, which can significantly reduce the required sample support while still processing all available degrees of freedom (DOFs). This approach processes the observed data in stages through a decimation procedure. In this study, we introduce a new form of the FFA approach called beamspace FFA. We first divide the brain into smaller regions and transform the measured data from the source space to the beamspace in each region. The FFA approach is then applied to the beamspaced data of each region. The goal of this modification is to benefit from the reduced correlation sensitivity between sources in different brain regions. To demonstrate the performance of the beamspace FFA approach in the limited data scenario, simulation results with multiple deep and cortical sources as well as experimental results are compared with the regular FFA and the widely used FINE approaches. Both simulation and experimental results demonstrate that the beamspace FFA method can localize different types of multiple correlated brain sources more accurately with limited data at low signal-to-noise ratios. (paper)

  10. Methodology and Toolset for Model Verification, Hardware/Software co-simulation, Performance Optimisation and Customisable Source-code generation

    DEFF Research Database (Denmark)

    Berger, Michael Stübert; Soler, José; Yu, Hao

    2013-01-01

    The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and Hardware/Software co-simulation tool (TRIAL) and a performance...... optimisation and customisable source-code generation tool (TUNE). The concept is depicted in automated modelling and optimisation of embedded-systems development. The tool will enable model verification by guiding the selection of existing open-source model verification engines, based on the automated analysis...

  11. Study of the source term of radiation of the CDTN GE-PET trace 8 cyclotron with the MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Benavente C, J. A.; Lacerda, M. A. S.; Fonseca, T. C. F.; Da Silva, T. A. [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Vega C, H. R., E-mail: jhonnybenavente@gmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2015-10-15

    Knowledge of the neutron spectra in a PET cyclotron is important for optimizing the radiation protection of workers and members of the public. The main objective of this work is to study the source term of radiation of the GE-PET trace 8 cyclotron of the Development Center of Nuclear Technology (CDTN/CNEN) using computer simulation by the Monte Carlo method. The MCNPX version 2.7 code was used to calculate the flux of neutrons produced from the interaction of the primary proton beam with the target body and other cyclotron components, during ¹⁸F production. The source term and the corresponding radiation field were estimated for the bombardment of an H₂¹⁸O target with a 75 μA, 16.5 MeV proton beam. The values of the simulated fluxes were compared with those reported by the accelerator manufacturer (GE Healthcare). Results showed that the fluxes estimated with the MCNPX code were about 70% lower than those reported by the manufacturer. The mean energies of the neutrons were also different from those reported by GE Healthcare. It is recommended to investigate other cross-section data and the use of the code's own physical models for a complete characterization of the radiation source term. (Author)

  12. Supporting the Cybercrime Investigation Process: Effective Discrimination of Source Code Authors Based on Byte-Level Information

    Science.gov (United States)

    Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos

    Source code authorship analysis is the particular field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity. This is usually based on other undisputed program samples from the same author. There are several cases where the application of such a method could be of major benefit, such as tracing the source of code left in the system after a cyber attack, authorship disputes, proof of authorship in court, etc. In this paper, we present our approach, which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure which is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification, since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language with no additional cost. The presented accuracy rates are much better than the best reported results for the same data sets.
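
    A minimal sketch of the idea, assuming the simplified profile is the set of the most frequent byte-level n-grams of a program and the similarity is the size of the intersection of two profiles; the names, n and the profile size are illustrative, and the paper's exact measure may differ:

      from collections import Counter

      def profile(source_bytes, n=6, top=2000):
          """Byte-level n-gram profile: the `top` most frequent n-grams."""
          grams = Counter(source_bytes[i:i + n]
                          for i in range(len(source_bytes) - n + 1))
          return {g for g, _ in grams.most_common(top)}

      def attribute(disputed, samples_by_author, n=6, top=2000):
          """Attribute a disputed program to the author whose profile shares
          the most n-grams with the disputed program's profile."""
          p = profile(disputed, n, top)
          return max(samples_by_author,
                     key=lambda a: len(p & profile(samples_by_author[a], n, top)))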

  13. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnosis-related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
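
    For illustration, a small hierarchical ICD-10 fragment could be marked up and serialised as below; the tag and attribute names are hypothetical, not the CEN/TC 251 structure or the authors' document type:

      import xml.etree.ElementTree as ET

      # Hypothetical hierarchical markup of one ICD-10 category.
      chapter = ET.Element("chapter", code="IX",
                           title="Diseases of the circulatory system")
      block = ET.SubElement(chapter, "block", code="I10-I15")
      category = ET.SubElement(block, "category", code="I10")
      ET.SubElement(category, "rubric",
                    kind="preferred").text = "Essential (primary) hypertension"

      print(ET.tostring(chapter, encoding="unicode"))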

  14. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources

    Energy Technology Data Exchange (ETDEWEB)

    Gaufridy de Dortan, F. de

    2006-07-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling, or even supra-configuration modelling, for mid-Z plasmas. But in some cases configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow for a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on precise HULLAC energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data. It is therefore possible to use complex hydrodynamic files even on personal computers in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)

  15. Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

    The minimum energy (ME) coding scheme combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) as the number of users (receivers) varies. Minimum energy coding exploits redundant bits to save power over the RF link with On-Off Keying modulation. The relations are presented and discussed for several levels of errors expected in the employed channel, in terms of the bit error rate and the SNR as a function of the number of users (receivers).
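
    The saving arises because On-Off Keying spends transmit energy only on 'on' bits, so longer, redundant codewords with few 1s are assigned to the most probable source symbols. A minimal sketch under that assumption; the toy alphabet and probabilities are illustrative:

      from itertools import product

      def me_codebook(symbol_probs, codeword_len):
          """Minimum energy coding sketch: sort the redundant codewords by
          number of 'on' bits and give the cheapest ones to the most
          probable symbols."""
          words = sorted(product([0, 1], repeat=codeword_len), key=sum)
          symbols = sorted(symbol_probs, key=symbol_probs.get, reverse=True)
          return {s: words[i] for i, s in enumerate(symbols)}

      book = me_codebook({'a': 0.5, 'b': 0.3, 'c': 0.15, 'd': 0.05}, 3)
      # 'a' -> (0, 0, 0); 'b', 'c', 'd' -> the three single-1 codewords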

  16. Numerical modeling of the Linac4 negative ion source extraction region by 3D PIC-MCC code ONIX

    CERN Document Server

    Mochalskyy, S; Minea, T; Lifschitz, AF; Schmitzer, C; Midttun, O; Steyaert, D

    2013-01-01

    At CERN, a high performance negative ion (NI) source is required for the 160 MeV H- linear accelerator Linac4. The source is planned to produce 80 mA of H- with an emittance of 0.25 mm·mrad N-RMS, which is technically and scientifically very challenging. The optimization of the NI source requires a deep understanding of the underlying physics concerning the production and extraction of the negative ions. The extraction mechanism from the negative ion source is complex, involving a magnetic filter in order to cool down the electron temperature. The ONIX (Orsay Negative Ion eXtraction) code is used to address this problem. ONIX is a self-consistent 3D electrostatic code using the Particle-in-Cell Monte Carlo Collisions (PIC-MCC) approach. It was written to handle the complex boundary conditions between plasma, source walls, and beam formation at the extraction hole. Both the positive extraction potential (25 kV) and the magnetic field map are taken from the experimental set-up, under construction at CERN. This contrib...

  17. Mass Transfer Limited Enhanced Bioremediation at Dnapl Source Zones: a Numerical Study

    Science.gov (United States)

    Kokkinaki, A.; Sleep, B. E.

    2011-12-01

    The success of enhanced bioremediation of dense non-aqueous phase liquids (DNAPLs) relies on accelerating contaminant mass transfer from the organic to the aqueous phase, thus enhancing the depletion of DNAPL source zones compared to natural dissolution. This is achieved by promoting biological activity that reduces the contaminant's aqueous phase concentration. Although laboratory studies have demonstrated that high reaction rates are attainable by specialized microbial cultures in DNAPL source zones, field applications of the technology report lower reaction rates and prolonged remediation times. One possible explanation for this phenomenon is that the reaction rates are limited by the rate at which the contaminant partitions from the DNAPL to the aqueous phase. In such cases, slow mass transfer to the aqueous phase reduces the bioavailability of the contaminant and consequently decreases the potential source zone depletion enhancement. In this work, the effect of rate limited mass transfer on bio-enhanced dissolution of DNAPL chlorinated ethenes is investigated through a numerical study. A multi-phase, multi-component groundwater transport model is employed to simulate DNAPL mass depletion for a range of source zone scenarios. Rate limited mass transfer is modeled by a linear driving force model, employing a thermodynamic approach for the calculation of the DNAPL - water interfacial area. Metabolic reductive dechlorination is modeled by Monod kinetics, considering microbial growth and self-inhibition. The model was utilized to identify conditions in which mass transfer, rather than reaction, is the limiting process, as indicated by the bioavailability number. In such cases, reaction is slower than expected, and further increase in the reaction rate does not enhance mass depletion. Mass transfer rate limitations were shown to affect both dechlorination and microbial growth kinetics. The complex dynamics between mass transfer, DNAPL transport and distribution, and

  18. Large-eddy simulation of convective boundary layer generated by highly heated source with open source code, OpenFOAM

    International Nuclear Information System (INIS)

    Hattori, Yasuo; Suto, Hitoshi; Eguchi, Yuzuru; Sano, Tadashi; Shirai, Koji; Ishihara, Shuji

    2011-01-01

    Spatial and temporal characteristics of turbulence structures in the close vicinity of a heat source, which is a horizontal upward-facing round plate heated at high temperature, are examined by using well resolved large-eddy simulations. The verification is carried out through comparison with experiments: the predicted statistics, including the PDF distribution of temperature fluctuations, agree well with measurements, indicating that the present simulations are capable of appropriately reproducing the turbulence structures near the heat source. The reproduced three-dimensional thermal and fluid fields in the close vicinity of the heat source reveal the development of coherent structures along the surface: stationary, streaky flow patterns appear near the edge, and such patterns randomly shift to cell-like patterns with incursion into the center region, resulting in thermal-plume meandering. Both patterns have very thin structures, but the depth of the streaky structures is considerably smaller than that of the cell-like patterns; this discrepancy causes the layered structures. These structures are the source of peculiar turbulence characteristics whose prediction is quite difficult with RANS-type turbulence models. The understanding of such structures obtained in the present study should help improve the turbulence models used in nuclear engineering. (author)

  19. Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code

    Science.gov (United States)

    Taherkhani, Ahmad; Malmi, Lauri

    2013-01-01

    In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…

  20. THE LOWER LIMIT OF THE RIGHT TO LIFE OF THE PERSON IN THE LIGHT OF THE NEW CRIMINAL CODE

    Directory of Open Access Journals (Sweden)

    MIHAELA (ROTARU MOISE

    2012-05-01

    The right to life is one of the fundamental human rights both nationally and internationally, being provided for and guaranteed by a series of legal acts. In Romanian law, the right to life of the person, guaranteed by article 22 paragraph (1) of the Constitution, is also protected by criminal law. In judicial practice, an important and interesting issue in this matter has been the limits of the right to life, that is, accurately determining the beginning and the end of a person's life. On this aspect there have been several points of view in the legal literature, each grounded in its own way. In current criminal law, the two aspects mentioned above are linked, on the one hand, to the legal classification of fetal injury during birth, up to the cutting of the umbilical cord, and, on the other hand, to the question of tissues and organs for transplantation. The new Criminal Code is inspired by European legislation. In relation to criminal offences against the person, it introduces a new offence, namely fetal injury, referred to in article 202. Analyzing the legal content of this offence inevitably raises the question of the lower limit of the right to life of the person.

  1. SPIDERMAN: an open-source code to model phase curves and secondary eclipses

    Science.gov (United States)

    Louden, Tom; Kreidberg, Laura

    2018-03-01

    We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly the area of sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimised to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the dataset. As a test case we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model we find that the best fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.
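
    For a sense of the kind of signal being modelled, a toy phase curve with a sinusoidal brightness modulation and a boxcar secondary eclipse can be written as below. This is purely illustrative and is not the SPIDERMAN API, which instead solves the occulted area of an arbitrary 2D brightness map exactly:

      import numpy as np

      def toy_phase_curve(phase, amplitude, peak=0.45, half_width=0.05):
          """Toy model: stellar flux normalised to 1; the planet adds a
          sinusoidal contribution peaking near secondary eclipse (phase 0.5),
          where the star occults it and the flux drops to the stellar level."""
          planet = 0.5 * amplitude * (1.0 + np.cos(2.0 * np.pi * (phase - peak)))
          planet[np.abs(phase - 0.5) < half_width] = 0.0  # planet hidden by star
          return 1.0 + planet

      phase = np.linspace(0.0, 1.0, 500)
      model = toy_phase_curve(phase, amplitude=1e-3)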

  3. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    Science.gov (United States)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
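
    A minimal sketch of the double-difference idea for two correlated data sets (e.g. adjacent spectral bands) and of the inverse post-decoding; the function names are illustrative and the subsequent entropy or lossy coding stage is omitted:

      import numpy as np

      def double_difference(band_a, band_b):
          """Cross-delta between the two correlated sets, then an
          adjacent-delta along the result."""
          cross = band_b - band_a
          return np.diff(cross, prepend=0)

      def post_decode(band_a, dd):
          """Inverse: undo the adjacent-delta (cumulative sum), then the
          cross-delta, recovering the second original data set exactly."""
          return band_a + np.cumsum(dd)

      a = np.array([10, 12, 15, 15, 14])
      b = np.array([11, 14, 18, 19, 17])
      assert np.array_equal(post_decode(a, double_difference(a, b)), b)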

  4. Detection limits of pollutants in water for PGNAA using Am-Be source

    International Nuclear Information System (INIS)

    Khelifi, R.; Amokrane, A.; Bode, P.

    2007-01-01

    A basic PGNAA facility with an Am-Be neutron source is described for analyzing pollutants in water. The properties of the neutron flux were determined by MCNP calculations. In order to determine the efficiency curve of an HPGe detector, the prompt-gamma rays from chlorine were used and an exponential curve was fitted. The detection limits for a typical water sample are also estimated using the statistical fluctuations of the background level in the relevant regions of the recorded prompt-gamma spectrum.
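
    Detection limits derived from background fluctuations are commonly expressed with Currie's formulas; a minimal sketch assuming Poisson counting statistics and the usual 95% confidence coefficients (the paper's exact convention is an assumption here):

      import math

      def currie_limits(background_counts):
          """Currie's critical level and detection limit (in counts) from
          the background counts B under a prompt-gamma peak region."""
          critical_level = 2.33 * math.sqrt(background_counts)
          detection_limit = 2.71 + 4.65 * math.sqrt(background_counts)
          return critical_level, detection_limit

      # e.g. B = 400 counts -> L_C ≈ 46.6 counts, L_D ≈ 95.7 counts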

  5. Comparison of open source database systems (characteristics, limits of usage)

    OpenAIRE

    Husárik, Braňko

    2008-01-01

    The goal of this work is to compare selected open source database systems (Ingres, PostgreSQL, Firebird, MySQL). The first part of the work focuses on the history and present situation of the companies developing these products. The second part compares a specific group of features and limits. A benchmark of selected operations forms a separate part. Possible uses of the mentioned database systems are summarized at the end of the work.

  6. Pre-Test Analysis of the MEGAPIE Spallation Source Target Cooling Loop Using the TRAC/AAA Code

    International Nuclear Information System (INIS)

    Bubelis, Evaldas; Coddington, Paul; Leung, Waihung

    2006-01-01

    A pilot project is being undertaken at the Paul Scherrer Institute in Switzerland to test the feasibility of installing a Lead-Bismuth Eutectic (LBE) spallation target in the SINQ facility. Efforts are coordinated under the MEGAPIE project, the main objectives of which are to design, build, operate and decommission a 1 MW spallation neutron source. The technology and experience of building and operating a high power spallation target are of general interest in the design of an Accelerator Driven System (ADS) and in this context MEGAPIE is one of the key experiments. The target cooling is one of the important aspects of the target system design that needs to be studied in detail. Calculations were performed previously using the RELAP5/Mod 3.2.2 and ATHLET codes, but in order to verify the previous code results and to provide another capability to model LBE systems, a similar study of the MEGAPIE target cooling system has been conducted with the TRAC/AAA code. In this paper a comparison is presented for the steady-state results obtained using the above codes. Analysis of transients, such as unregulated cooling of the target, loss of heat sink, the main electro-magnetic pump trip of the LBE loop and unprotected proton beam trip, were studied with TRAC/AAA and compared to those obtained earlier using RELAP5/Mod 3.2.2. This work extends the existing validation data-base of TRAC/AAA to heavy liquid metal systems and comprises the first part of the TRAC/AAA code validation study for LBE systems based on data from the MEGAPIE test facility and corresponding inter-code comparisons. (authors)

  7. Information theory and coding solved problems

    CERN Document Server

    Ivaniš, Predrag

    2017-01-01

    This book offers a comprehensive overview of information theory and error control coding, using a different approach from the existing literature. The chapters are organized according to the Shannon system model, where one block affects the others. A relatively brief theoretical introduction is provided at the beginning of every chapter, including a few additional examples and explanations, but without any proofs, and a short overview of some aspects of abstract algebra is given at the end of the corresponding chapters. Characteristic complex examples, with many illustrations and tables, are chosen to provide detailed insight into the nature of the problem. Some limiting cases are presented to illustrate the connections with the theoretical bounds. The numerical values are carefully selected to provide in-depth explanations of the described algorithms. Although the examples in the different chapters can be considered separately, they are mutually connected and the conclusions for one considered proble...

  8. Image Vector Quantization codec indexes filtering

    Directory of Open Access Journals (Sweden)

    Lakhdar Moulay Abdelmounaim

    2012-01-01

    Vector Quantisation (VQ) is an efficient coding algorithm that has been widely used in the field of video and image coding, due to its fast decoding efficiency. However, the indexes of VQ are sometimes lost because of signal interference during transmission. In this paper, we propose an efficient estimation method to conceal and recover the lost indexes on the decoder side, to avoid re-transmitting the whole image again. If the image or video has a limited period of validity, re-transmitting the data wastes time and network bandwidth. Therefore, using the originally received correct data to estimate and recover the lost data is efficient in time-constrained situations, such as network conferencing or mobile transmissions. In natural images, pixels are correlated with their neighbours; VQ partitions the image into sub-blocks and quantises them to the indexes that are transmitted, so the correlation between adjacent indexes is very strong. There are two parts to the proposed method. The first is pre-processing and the second is an estimation process. In pre-processing, we modify the order of codevectors in the VQ codebook to increase the correlation among the neighbouring vectors. We then use a special filtering method in the estimation process. Using conventional VQ to compress the Lena image and transmit it without any loss of index can achieve a PSNR of 30.429 dB at the decoder. The simulation results demonstrate that our method can estimate the indexes to achieve PSNR values of 29.084 and 28.327 dB when the loss rate is 0.5% and 1%, respectively.
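
    A minimal sketch of the estimation step, assuming the codebook has already been reordered so that similar codevectors carry numerically close indexes; a median of the surviving 4-neighbours stands in for the paper's special filtering method:

      import numpy as np

      def conceal_lost_indexes(index_map, lost_mask):
          """Estimate lost VQ indexes (lost_mask == True) from their
          correctly received 4-neighbours in the block-index map."""
          est = index_map.copy()
          h, w = index_map.shape
          for r, c in zip(*np.nonzero(lost_mask)):
              neigh = [index_map[rr, cc]
                       for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                       if 0 <= rr < h and 0 <= cc < w and not lost_mask[rr, cc]]
              if neigh:                       # leave isolated losses untouched
                  est[r, c] = int(np.median(neigh))
          return est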

  9. Carbon source from the toroidal pumped limiter during long discharge operation in Tore Supra

    International Nuclear Information System (INIS)

    Dufour, E.; Brosset, C.; Lowry, C.; Bucalossi, J.; Chappuis, P.; Corre, Y.; Desgranges, C.; Guirlet, R.; Gunn, J.; Loarer, T.; Mitteau, R.; Monier-Garbet, P.; Pegourie, B.; Reichle, R.; Thomas, P.; Tsitrone, E.; Hogan, J.; Roubin, P.; Martin, C.; Arnas, C.

    2005-01-01

    A better understanding of deuterium retention mechanisms requires knowledge of the carbon sources in Tore Supra. The main source of carbon in the vacuum vessel during long discharges is the toroidal pumped limiter (TPL). This work is devoted to the experimental characterisation of the carbon source from the TPL surface during long discharges using a visible spectroscopy diagnostic. Moreover, we present an attempt to perform a carbon balance over a typical campaign and discuss it with regard to the deuterium in-vessel inventory deduced from particle balance and the deuterium content of the deposited layers. The study shows that only a third of the estimated deuterium trapped in the vessel is trapped in the carbon deposits. Thus, in the present state of our knowledge and characterisation of the permanent retention, one has to search for mechanisms other than co-deposition to explain the deuterium retention in Tore Supra. (A.C.)

  10. Operational limit of a planar DC magnetron cluster source due to target erosion

    International Nuclear Information System (INIS)

    Rai, A.; Mutzke, A.; Bandelow, G.; Schneider, R.; Ganeva, M.; Pipa, A.V.; Hippler, R.

    2013-01-01

    The binary collision-based two-dimensional SDTrimSP-2D model has been used to simulate the erosion process of a Cu target and its influence on the operational limit of a planar DC magnetron nanocluster source. The density of free metal atoms in the aggregation region influences the cluster formation and cluster intensity during the target lifetime. The density of the free metal atoms in the aggregation region can only be predicted by taking into account (i) the angular distribution of the sputtered flux from the primary target source and (ii) the relative downward shift of the primary source of sputtered atoms during the erosion process. It is shown that the flux of sputtered atoms decreases smoothly as the target erodes.

  11. Interpretation, with respect to ASME code Case N-318, of limit moment and fatigue tests of lugs welded to pipe

    International Nuclear Information System (INIS)

    Foster, D.C.; Van Duyne, D.A.; Budlong, L.A.; Muffett, J.W.; Wais, E.A.; Streck, G.; Rodabaugh, E.C.

    1990-01-01

    Two nonmandatory ASME code cases have been used often in the evaluation of lugs on nuclear-power- plant piping systems. ASME Code Case N-318 provides guidance for evaluation of the design of rectangular cross-section attachments on Class 2 or 3 piping, and ASME Code Case N-122 provides guidance for evaluation of lugs on Class 1 piping. These code cases have been reviewed and evaluated based on available test data. The results indicate that the Code cases are overly conservative. Recommendations for revisions to the cases are presented which, if adopted, will reduce the overconservatism

  12. Experimental investigation of thermal limits in parallel plate configuration for the Advanced Neutron Source Reactor

    International Nuclear Information System (INIS)

    Siman-Tov, M.; Felde, D.K.; Kaminaga, M.; Yoder, G.L.

    1993-01-01

    The Advanced Neutron Source Reactor (ANSR) is currently being designed to become the world's highest-flux, steady-state, thermal neutron source for scientific experiments. Highly subcooled, heavy-water coolant flows vertically upward at a very high velocity of 25 m/s through parallel aluminum fuel plates. The core has average and peak heat fluxes of 5.9 and 12 MW/m², respectively. In this configuration, both flow excursion (FE) and true critical heat flux (CHF) represent potential thermal limitations. The availability of experimental data for both FE and true CHF at conditions applicable to the ANSR is very limited. A Thermal Hydraulic Test Loop (THTL) facility was designed and built to simulate a full-length coolant subchannel of the core, allowing experimental determination of both thermal limits under the expected ANSR T/H conditions. A series of FE tests with water flowing vertically upward was completed over a nominal heat flux range of 6 to 14 MW/m² and a corresponding velocity range of 8 to 21 m/s. Both the exit pressure (1.7 MPa) and inlet temperature (45 degrees C) were maintained constant for these tests, while the loop was operated in a ''stiff'' (constant flow) mode. Limited experiments were also conducted at 12 MW/m² using a ''soft'' mode (near constant pressure-drop) for actual FE burnout tests and using a ''stiff'' mode for true CHF tests, to compare with the original FE experiments.

  13. Radiation Shielding Information Center: a source of computer codes and data for fusion neutronics studies

    International Nuclear Information System (INIS)

    McGill, B.L.; Roussin, R.W.; Trubey, D.K.; Maskewitz, B.F.

    1980-01-01

    The Radiation Shielding Information Center (RSIC), established in 1962 to collect, package, analyze, and disseminate information, computer codes, and data in the area of radiation transport related to fission, is now being utilized to support fusion neutronics technology. The major activities include: (1) answering technical inquiries on radiation transport problems, (2) collecting, packaging, testing, and disseminating computing technology and data libraries, and (3) reviewing literature and operating a computer-based information retrieval system containing material pertinent to radiation transport analysis. The computer codes emphasize methods for solving the Boltzmann equation such as the discrete ordinates and Monte Carlo techniques, both of which are widely used in fusion neutronics. The data packages include multigroup coupled neutron-gamma-ray cross sections and kerma coefficients, other nuclear data, and radiation transport benchmark problem results

  14. kspectrum: an open-source code for high-resolution molecular absorption spectra production

    International Nuclear Information System (INIS)

    Eymet, V.; Coustet, C.; Piaud, B.

    2016-01-01

    We present kspectrum, a scientific code that produces high-resolution synthetic absorption spectra from public molecular transition parameter databases. This code was originally required by the atmospheric and astrophysics communities, and its evolution is now driven by new scientific projects among the user community. Since it was designed without any optimization specific to a particular application field, its use can also be extended to other domains. kspectrum produces spectral data that can subsequently be used either for high-resolution radiative transfer simulations, or for producing statistical spectral model parameters using additional tools. This is an open project that aims at providing an up-to-date tool that takes advantage of modern computational hardware and recent parallelization libraries. It is currently provided by Méso-Star (http://www.meso-star.com) under the CeCILL license, and benefits from regular updates and improvements. (paper)
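
    The core of any line-by-line spectrum production is a sum of broadened line profiles over a transition list; a minimal sketch with Lorentzian shapes only, and with illustrative values (a production code such as kspectrum also handles Doppler/Voigt profiles, temperature scaling of intensities and line-wing treatment):

      import numpy as np

      def absorption_coefficient(nu, lines, gamma):
          """Absorption coefficient on a wavenumber grid `nu` (cm^-1) as a
          sum of Lorentzian profiles; `lines` holds (center, intensity)
          pairs and `gamma` is the pressure-broadened half-width (cm^-1)."""
          k = np.zeros_like(nu)
          for nu0, intensity in lines:
              k += intensity * (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)
          return k

      nu = np.linspace(2000.0, 2100.0, 200000)        # high-resolution grid
      k = absorption_coefficient(nu, [(2030.0, 1.0), (2055.5, 0.4)], gamma=0.07)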

  15. 49 CFR Appendix B to Part 564 - Information To Be Submitted for Long Life Replaceable Light Sources of Limited Definition

    Science.gov (United States)

    2010-10-01

    ...—Information To Be Submitted for Long Life Replaceable Light Sources of Limited Definition I. Filament or... Source that Operates With a Ballast and Rated Life of the Light Source/Ballast Combination. A. Maximum power (in watts). B. Luminous Flux (in lumens). C. Rated laboratory life of the light source/ballast...

  16. Calibrate the aerial surveying instrument by the limited surface source and the single point source that replace the unlimited surface source

    International Nuclear Information System (INIS)

    Lu Cunheng

    1999-01-01

    The calculation formula and the survey results are derived on the basis of the superposition principle of gamma rays and the geometry of a hexagonal surface source when a limited surface source replaces the unlimited surface source to calibrate the aerial survey instrument on the ground, and on the basis of the reciprocity principle of gamma rays when a single point source replaces the unlimited surface source to calibrate the aerial survey instrument in the air. Meanwhile, through theoretical analysis, the reception rates of the crystal bottom and side surfaces are calculated for the gamma rays received by the aerial survey instrument. A mathematical expression is obtained for the decay of gamma rays with height following the Jinge function regularity. According to this regularity, the absorption coefficient of air for gamma rays and the detection efficiency coefficient of the crystal are calculated from the ground and airborne measured values of the bottom-surface count rate (derived from the total count rate of the bottom and side surfaces). Finally, the measured values prove that it is feasible to model the variation of the total gamma-ray exposure rate received by the bottom and side surfaces with this regularity over a certain altitude range.

  17. Monoparametric family of metrics derived from classical Jensen-Shannon divergence

    Science.gov (United States)

    Osán, Tristán M.; Bussandri, Diego G.; Lamberti, Pedro W.

    2018-04-01

    Jensen-Shannon divergence is a well known multi-purpose measure of dissimilarity between probability distributions. It has been proven that the square root of this quantity is a true metric in the sense that, in addition to the basic properties of a distance, it also satisfies the triangle inequality. In this work we extend this last result to prove that in fact it is possible to derive a monoparametric family of metrics from the classical Jensen-Shannon divergence. Motivated by our results, an application into the field of symbolic sequences segmentation is explored. Additionally, we analyze the possibility to extend this result into the quantum realm.
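
    In standard notation, with H the Shannon entropy and (P+Q)/2 the mixture distribution, the divergence in question reads:

      \mathrm{JSD}(P,Q) \;=\; H\!\left(\frac{P+Q}{2}\right) \;-\; \frac{H(P)+H(Q)}{2}, \qquad H(P) = -\sum_i p_i \log p_i ,

    and the previously known result extended by the paper is that d(P,Q) = \sqrt{\mathrm{JSD}(P,Q)} satisfies the triangle inequality and is therefore a true metric.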

  18. Obstacles to Love and Marriage in Shannon's Way

    OpenAIRE

    Nakamura, Takeshi; Department of English Language and Communication, Showa Women's University

    2016-01-01

    The theme of this study is Shannon's Way as a love story: the hero and the heroine's love and the obstacles to their marriage. The novel was written by a Scottish writer and physician, A. J. Cronin (in full Archibald Joseph Cronin, 1896-1981) and published in 1948. The setting of the story is chiefly Winton, a fictitious city based on Glasgow. The hero is Robert Shannon, a twenty-four-year-old poor but excellent researcher and doctor whose ambition is to be successful in medical science by a great ...

  19. Improvements in data display

    International Nuclear Information System (INIS)

    Ellis, G.W.

    1979-01-01

    An analog signal processor is described in this patent for connecting a source of analog signals to a cathode ray tube display in order to extend the dynamic range of the display. This has important applications in the field of computerised X-ray tomography since significant medical information, such as tumours in soft tissue, is often represented by minimal level changes in image density. Cathode ray tube displays are limited to approximately 15 intensity levels. Thus if both strong and weak absorption of the X-rays occurs, the dynamic range of the transmitted signals will be too large to permit small variations to be examined directly on a cathode ray display. Present tomographic image reconstruction methods are capable of quantising X-ray absorption density measurements into 256 or more distinct levels and a description is given of the electronics which enables the upper and lower range of intensity levels to be independently set and continuously varied. (UK)
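
    In digital terms the patent's idea corresponds to an adjustable window mapping: independently chosen lower and upper density bounds are stretched across the few intensity levels the display can show. A minimal sketch; the function and the 15-level default merely mirror the figures quoted above:

      import numpy as np

      def window_display(densities, lower, upper, display_levels=15):
          """Map quantised density values onto a limited number of display
          levels; values outside [lower, upper] clip to the window edges,
          so a narrow window reveals minimal density changes (e.g. a tumour
          in soft tissue)."""
          x = np.clip(densities, lower, upper).astype(float)
          scaled = (x - lower) / (upper - lower) * (display_levels - 1)
          return np.rint(scaled).astype(int)

      # A narrow window spreads subtle soft-tissue contrast over all levels:
      levels = window_display(np.array([118, 120, 122, 131]), lower=115, upper=135)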

  20. Four energy group neutron flux distribution in the Syrian miniature neutron source reactor using the WIMSD4 and CITATION code

    International Nuclear Information System (INIS)

    Khattab, K.; Omar, H.; Ghazi, N.

    2009-01-01

    A 3-D (R, θ, Z) neutronic model for the Miniature Neutron Source Reactor (MNSR) was developed earlier to conduct the reactor neutronic analysis. The group constants for all the reactor components were generated using the WIMSD4 code. The reactor excess reactivity and the four-group neutron flux distributions were calculated using the CITATION code. This model is used in this paper to calculate the point-wise four-group neutron flux distributions in the MNSR versus radius, angle and axial position. Good agreement is observed between the measured and calculated thermal neutron flux in the inner and outer irradiation sites, with relative differences of less than 7% and 5%, respectively. (author)

  1. Investigation of Anisotropy Caused by Cylinder Applicator on Dose Distribution around Cs-137 Brachytherapy Source using MCNP4C Code

    Directory of Open Access Journals (Sweden)

    Sedigheh Sina

    2011-06-01

    Introduction: Brachytherapy is a type of radiotherapy in which radioactive sources are placed in the proximity of tumors, normally for the treatment of malignancies of the head, prostate and cervix. Materials and Methods: The Cs-137 Selectron source is a low-dose-rate (LDR) brachytherapy source used in a remote afterloading system for the treatment of different cancers. This system uses active and inactive spherical sources of 2.5 mm diameter, which can be arranged in different configurations inside the applicator to obtain different dose distributions. In this study, the dose distribution at different distances from the source was first obtained around a single pellet inside the applicator in a water phantom using the MCNP4C Monte Carlo code. The simulations were then repeated for six active pellets in the applicator and for six point sources. Results: The anisotropy of the dose distribution due to the presence of the applicator was obtained by dividing the dose at each distance and angle by the dose at the same distance at an angle of 90 degrees. According to the results, the dose decreases towards the applicator tips. For example, for points at distances of 5 and 7 cm from the source and an angle of 165 degrees, the discrepancies reached 5.8% and 5.1%, respectively. Increasing the number of pellets to six raised these values to 30% at an angle of 5 degrees. Discussion and Conclusion: The results indicate that the presence of the applicator causes a significant dose decrease at the tip of the applicator compared with the dose in the transverse plane. However, treatment planning systems assume an isotropic dose distribution around the source, which causes non-negligible errors in treatment planning, especially for a large number of sources inside the applicator.
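
    The anisotropy measure used above is simply the dose at each angle normalised by the dose at 90 degrees for the same distance; a minimal sketch of that post-processing step (the array layout is assumed):

      import numpy as np

      def applicator_anisotropy(dose, angles_deg):
          """Dose at each angle divided by the dose at the angle closest
          to 90 degrees, for a fixed distance from the source."""
          dose = np.asarray(dose, dtype=float)
          angles = np.asarray(angles_deg, dtype=float)
          return dose / dose[np.argmin(np.abs(angles - 90.0))]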

  2. Developing open-source codes for electromagnetic geophysics using industry support

    Science.gov (United States)

    Key, K.

    2017-12-01

    Funding for open-source software development in academia often takes the form of grants and fellowships awarded by government bodies and foundations where there is no conflict-of-interest between the funding entity and the free dissemination of the open-source software products. Conversely, funding for open-source projects in the geophysics industry presents challenges to conventional business models where proprietary licensing offers value that is not present in open-source software. Such proprietary constraints make it easier to convince companies to fund academic software development under exclusive software distribution agreements. A major challenge for obtaining commercial funding for open-source projects is to offer a value proposition that overcomes the criticism that such funding is a give-away to the competition. This work draws upon a decade of experience developing open-source electromagnetic geophysics software for the oil, gas and minerals exploration industry, and examines various approaches that have been effective for sustaining industry sponsorship.

  3. Calculation of the effective dose from natural radioactivity sources in soil using MCNP code

    International Nuclear Information System (INIS)

    Krstic, D.; Nikezic, D.

    2008-01-01

    The effective dose delivered by photons emitted from natural radioactivity in soil was calculated in this report. Calculations were done for the most common natural radionuclides in soil: the ²³⁸U and ²³²Th series and ⁴⁰K. An ORNL age-dependent phantom and the Monte Carlo transport code MCNP-4B were employed to calculate the energy deposited in all organs of the phantom. The effective dose was calculated according to ICRP 74 recommendations. Conversion coefficients of effective dose per air kerma were determined. The results obtained here were compared with those of other authors.
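
    The quantity computed follows the usual ICRP weighted-sum definition; in standard notation (w_R = 1 for photons):

      E \;=\; \sum_T w_T \, H_T \;=\; \sum_T w_T \sum_R w_R \, D_{T,R}

    where D_{T,R} is the mean absorbed dose to organ T from radiation R, w_R the radiation weighting factor and w_T the tissue weighting factors; dividing E by the free-in-air kerma gives the conversion coefficients mentioned above.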

  4. In-vessel source term analysis code TRACER version 2.3. User's manual

    International Nuclear Information System (INIS)

    Toyohara, Daisuke; Ohno, Shuji; Hamada, Hirotsugu; Miyahara, Shinya

    2005-01-01

    A computer code TRACER (Transport Phenomena of Radionuclides for Accident Consequence Evaluation of Reactor) version 2.3 has been developed to evaluate the species and quantities of fission products (FPs) released into the cover gas during a fuel pin failure accident in an LMFBR. TRACER version 2.3 includes the new or modified models shown below. a) Booth model: a new model for FP release from fuel. b) Modified model for FP transfer from fuel to bubbles or sodium coolant. c) Modified model for bubble dynamics in the coolant. The computational models, input data and output data of TRACER version 2.3 are described in this user's manual. (author)

  5. Source limitation of carbon gas emissions in high-elevation mountain streams and lakes

    Science.gov (United States)

    Crawford, John T.; Dornblaser, Mark M.; Stanley, Emily H.; Clow, David W.; Striegl, Robert G.

    2015-01-01

    Inland waters are an important component of the global carbon cycle through transport, storage, and direct emissions of CO2 and CH4 to the atmosphere. Despite predictions of high physical gas exchange rates due to turbulent flows and ubiquitous supersaturation of CO2—and perhaps also CH4—patterns of gas emissions are essentially undocumented for high mountain ecosystems. Much like other headwater networks around the globe, we found that high-elevation streams in Rocky Mountain National Park, USA, were supersaturated with CO2 during the growing season and were net sources to the atmosphere. CO2 concentrations in lakes, on the other hand, tended to be less than atmospheric equilibrium during the open water season. CO2 and CH4 emissions from the aquatic conduit were relatively small compared to many parts of the globe. Irrespective of the physical template for high gas exchange (high k), we found evidence of CO2 source limitation to mountain streams during the growing season, which limits overall CO2 emissions. Our results suggest a reduced importance of aquatic ecosystems for carbon cycling in high-elevation landscapes having limited soil development and high CO2 consumption via mineral weathering.

  6. The relation between classical and quantum mechanics

    International Nuclear Information System (INIS)

    Taylor, Peter.

    1984-01-01

    The thesis examines the relationship between classical and quantum mechanics from philosophical, mathematical and physical standpoints. Arguments are presented in favour of 'conjectural realism' in scientific theories, distinguished by explicit contextual structure and empirical testability. The formulations of classical and quantum mechanics, based on a general theory of mechanics is investigated, as well as the mathematical treatments of these subjects. Finally the thesis questions the validity of 'classical limits' and 'quantisations' in intertheoretic reduction. (UK)

  7. CMOS SPAD-based image sensor for single photon counting and time of flight imaging

    OpenAIRE

    Dutton, Neale Arthur William

    2016-01-01

    The facility to capture the arrival of a single photon is the fundamental limit to the detection of quantised electromagnetic radiation. An image sensor capable of capturing a picture with this ultimate optical and temporal precision is the pinnacle of photo-sensing. The creation of high spatial resolution, single photon sensitive, and time-resolved image sensors in complementary metal oxide semiconductor (CMOS) technology offers numerous benefits in a wide field of applications....

  8. The rise in the positron fraction. Distance limits on positron point sources from cosmic ray arrival directions and diffuse gamma-rays

    Energy Technology Data Exchange (ETDEWEB)

    Gebauer, Iris; Bentele, Rosemarie [Karlsruhe Institute of Technology, Karlsruhe (Germany)

    2016-07-01

    The rise in the positron fraction, as observed by AMS and previously by PAMELA, cannot be explained by the standard paradigm of cosmic ray transport in which positrons are produced by cosmic-ray-gas interactions in the interstellar medium. Possible explanations are pulsars, which produce energetic electron-positron pairs in their rotating magnetic fields, or the annihilation of dark matter. Here we assume that these positrons originate from a single nearby point source producing equal amounts of electrons and positrons. The propagation and energy losses of these electrons and positrons are calculated numerically using the DRAGON code, and the source properties are optimized to best describe the AMS data. Using the Fermi-LAT limits on a possible dipole anisotropy in electron and positron arrival directions, we place a limit on the minimum distance of such a point source. The energy losses that these energetic electrons and positrons suffer on their way through the galaxy create gamma-ray photons through bremsstrahlung and inverse Compton scattering. Using the measurement of diffuse gamma rays from Fermi-LAT we place a limit on the maximum distance of such a point source. We find that a single electron-positron point source powerful enough to explain the locally observed positron fraction must reside at a distance between 225 pc and 3.7 kpc from the sun, and we compare this range to known pulsars.

  9. Approximate source conditions for nonlinear ill-posed problems—chances and limitations

    International Nuclear Information System (INIS)

    Hein, Torsten; Hofmann, Bernd

    2009-01-01

    In the recent past the authors, with collaborators, have published convergence rate results for regularized solutions of linear ill-posed operator equations by avoiding the usual assumption that the solutions satisfy prescribed source conditions. Instead, the degree of violation of such source conditions is expressed by distance functions d(R) depending on a radius R ≥ 0 which is an upper bound of the norm of source elements under consideration. If d(R) tends to zero as R → ∞, an appropriate balancing of the occurring regularization error terms yields convergence rate results. This approach was called the method of approximate source conditions, originally developed in a Hilbert space setting. The goal of this paper is to formulate chances and limitations of an application of this method to nonlinear ill-posed problems in reflexive Banach spaces and to complement the field of low order convergence rate results in nonlinear regularization theory. In particular, we are going to establish convergence rates for a variant of Tikhonov regularization. To keep structural nonlinearity conditions simple, we update the concept of degree of nonlinearity in Hilbert spaces to a Bregman distance setting in Banach spaces.
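
    In the linear Hilbert space setting this method comes from, the benchmark source condition and the distance function measuring its violation take the form below (a sketch of the standard definitions, with A the forward operator and x† the exact solution; the paper's Banach space variants generalise this):

      x^\dagger = A^{*} v \qquad\text{versus}\qquad d(R) \;=\; \inf\left\{ \left\| x^\dagger - A^{*} v \right\| \;:\; \|v\| \le R \right\}, \quad R \ge 0 .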

  10. Detection limits for real-time source water monitoring using indigenous freshwater microalgae

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Jr, Miguel [ORNL; Greenbaum, Elias [ORNL

    2009-01-01

    This research identified toxin detection limits using the variable fluorescence of naturally occurring microalgae in source drinking water for five chemical toxins with different molecular structures and modes of toxicity. The five chemicals investigated were atrazine, Diuron, paraquat, methyl parathion, and potassium cyanide. Absolute threshold sensitivities of the algae for detection of the toxins in unmodified source drinking water were measured. Differential kinetics between the rate of action of the toxins and natural changes in algal physiology, such as diurnal photoinhibition, are significant enough that effects of the toxin can be detected and distinguished from the natural variance. This is true even for physiologically impaired algae where diminished photosynthetic capacity may arise from uncontrollable external factors such as nutrient starvation. Photoinhibition induced by high levels of solar radiation is a predictable and reversible phenomenon that can be dealt with using a period of dark adaptation of 30 minutes or more.

  11. Translational invariance in bag model

    International Nuclear Information System (INIS)

    Megahed, F.

    1981-10-01

    In this thesis, the effect of restoring translational invariance to an approximation to the MIT bag model on the calculation of deep inelastic structure functions is investigated. In chapter one, the model and its major problems are reviewed and Dirac's method of quantisation is outlined. This method is used in chapter two to quantise a two-dimensional complex scalar bag, and formal expressions for the form factor and the structure functions are obtained. In chapter three, the expression for the structure function away from the Bjorken limit is studied. The corrections to the L₀ approximation to the structure function are calculated in chapter four and shown to be large. Finally, in chapter five, a bag-like model for kinematic corrections to structure functions is introduced and agreement with data between 2 and 6 (GeV/c)² is obtained. (author)

  12. Preliminary limits on the flux of muon neutrinos from extraterrestrial point sources

    International Nuclear Information System (INIS)

    Bionta, R.M.; Blewitt, G.; Bratton, C.B.

    1985-01-01

    We present the arrival directions of 117 upward-going muon events collected with the IMB proton lifetime detector during 317 days of live detector operation. The rate of upward-going muons observed in our detector was found to be consistent with the rate expected from atmospheric neutrino production. The upper limit on the total flux of extraterrestrial neutrinos >1 GeV is 2 -sec. Using our data and a Monte Carlo simulation of high energy muon production in the earth surrounding the detector, we place limits on the flux of neutrinos from a point source in the Vela X-2 system of 2 -sec with E > 1 GeV. 6 refs., 5 figs

  13. Probabilities and Shannon's Entropy in the Everett Many-Worlds Theory

    Directory of Open Access Journals (Sweden)

    Andreas Wichert

    2016-12-01

    Following a controversial suggestion by David Deutsch that decision theory can solve the problem of probabilities in the Everett many-worlds theory, we suggest that the probabilities are induced by Shannon's entropy, which measures the uncertainty of events. We argue that a rational person prefers certainty to uncertainty due to the fundamental biological principle of homeostasis.
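
    For concreteness, the uncertainty measure invoked is the Shannon entropy of a discrete probability distribution; a minimal sketch (the example weights are illustrative, not from the paper):

      import math

      def shannon_entropy(probs):
          """Shannon entropy in bits of a discrete probability distribution."""
          return -sum(p * math.log2(p) for p in probs if p > 0.0)

      # An uneven branching of worlds carries less uncertainty than a uniform one:
      assert abs(shannon_entropy([0.9, 0.1]) - 0.469) < 1e-3
      assert shannon_entropy([0.5, 0.5]) == 1.0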

  14. Study of counter current flow limitation model of MARS-KS and SPACE codes under Dukler's air/water flooding test conditions

    International Nuclear Information System (INIS)

    Lee, Won Woong; Kim, Min Gil; Lee, Jeong Ik; Bang, Young Seok

    2015-01-01

    In the nuclear reactor system, CCFL (counter-current flow limitation) is an important phenomenon for evaluating the safety of nuclear reactors: it occurs during a LOCA in components where flows in two opposite directions are possible, such as the hot leg, the downcomer annulus and the steam generator inlet plenum, and it can limit the injection of ECCS water. CCFL is therefore one of the thermal-hydraulic models with a significant effect on reactor safety analysis code performance. In this study, the CCFL model is evaluated in MARS-KS, which is based on two-phase two-field governing equations and is used for evaluating the safety of Korean nuclear power plants, and in the SPACE code, which is based on two-phase three-field governing equations and is currently under assessment for safety evaluation of newly designed plants. The liquid upflow and liquid downflow rates computed by the two codes for different gas flow rates are compared with the well-known Dukler air/water CCFL experimental data. This comparison helps in understanding the differences between system analysis codes with different governing equations, models and correlations, and in further improving the accuracy of such codes.
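
    Flooding limits of this kind are commonly condensed into a Wallis-type correlation, \sqrt{j_g^*} + m\sqrt{j_f^*} = C, between the dimensionless gas upflow and liquid downflow. A minimal sketch under that assumption; the constants are illustrative, and the forms built into MARS-KS and SPACE may differ:

      import math

      def wallis_liquid_downflow(jg_star, m=1.0, c=0.725):
          """Maximum dimensionless liquid downflow jf* permitted by the
          Wallis flooding correlation sqrt(jg*) + m*sqrt(jf*) = C for a
          given dimensionless gas upflow jg*; for jg* >= C**2 no liquid
          penetrates downward at all."""
          rest = c - math.sqrt(jg_star)
          return (rest / m) ** 2 if rest > 0.0 else 0.0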

  15. Force Limited Vibration Testing: Computation C2 for Real Load and Probabilistic Source

    Science.gov (United States)

    Wijker, J. J.; de Boer, A.; Ellenbroek, M. H. M.

    2014-06-01

    To prevent over-testing of the test item during random vibration testing, Scharton proposed and discussed force limited random vibration testing (FLVT) in a number of publications, in which the factor C2 is, besides the random vibration specification, the total mass, and the turnover frequency of the load (test item), a very important parameter. A number of computational methods to estimate C2 are described in the literature, i.e. the simple and the complex two-degrees-of-freedom system, STDFS and CTDFS, respectively. Both the STDFS and the CTDFS describe in a very reduced (simplified) manner the load and the source (the adjacent structure transferring the excitation forces to the test item, e.g. a spacecraft supporting an instrument). The motivation of this work is to establish a method for computing a realistic value of C2 to perform a representative force-limited random vibration test when the description of the adjacent structure (source) is more or less unknown. Marchand formulated a conservative estimate of C2 based on the maximum modal effective mass and the damping of the test item (load), for the case when no description of the supporting structure (source) is available [13]. Marchand also discussed the formal derivation of C2 from the maximum PSD of the acceleration and the maximum PSD of the force, both at the interface between load and source, in combination with the apparent mass and total mass of the load. This method is very convenient for computing the factor C2; however, finite element models are needed to compute the PSD spectra of both the acceleration and the force at the interface between load and source. Stevens presented the coupled systems modal approach (CSMA), in which simplified asparagus-patch models (parallel-oscillator representations) of load and source are connected, consisting of modal effective masses and the spring stiffnesses associated with the natural frequencies. When the random acceleration vibration specification is given, the CSMA can then be used to compute C2.
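    Once C2 is known, the force specification is commonly derived from the acceleration specification through the semi-empirical relation S_FF(f) = C2 * M0^2 * S_AA(f) below the turnover frequency f0, rolled off above it. A minimal Python sketch under these assumptions (the roll-off exponent n and all numerical values are illustrative, not values from this work):

        import numpy as np

        def force_limit_psd(freqs, s_aa, c2, m0, f0, n=2.0):
            """Semi-empirical FLVT force spec: S_FF = C2 * M0^2 * S_AA
            below the turnover frequency f0, rolled off as (f0/f)^n
            above it. With s_aa in g^2/Hz and m0 in kg, the result is
            in (kg*g)^2/Hz."""
            freqs = np.asarray(freqs, dtype=float)
            s_aa = np.asarray(s_aa, dtype=float)
            rolloff = np.where(freqs <= f0, 1.0, (f0 / freqs) ** n)
            return c2 * m0**2 * s_aa * rolloff

        # Illustrative: flat 0.04 g^2/Hz acceleration spec, a 50 kg test
        # item, and a turnover frequency of 80 Hz
        f = np.array([20.0, 50.0, 80.0, 160.0, 320.0])
        print(force_limit_psd(f, 0.04 * np.ones_like(f), c2=4.0, m0=50.0, f0=80.0))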

  16. SMILEI: A collaborative, open-source, multi-purpose PIC code for the next generation of super-computers

    Science.gov (United States)

    Grech, Mickael; Derouillat, J.; Beck, A.; Chiaramello, M.; Grassi, A.; Niel, F.; Perez, F.; Vinci, T.; Fle, M.; Aunai, N.; Dargent, J.; Plotnikov, I.; Bouchard, G.; Savoini, P.; Riconda, C.

    2016-10-01

    Over the last decades, Particle-In-Cell (PIC) codes have been central tools for plasma simulations. Today, new trends in High-Performance Computing (HPC) are emerging, dramatically changing HPC-relevant software design and putting some - if not most - legacy codes far beyond the level of performance expected on the new and future massively parallel supercomputers. SMILEI is a new open-source PIC code co-developed by plasma physicists and HPC specialists and applied to a wide range of physics studies, from laser-plasma interaction to astrophysical plasmas. It benefits from an innovative parallelization strategy that relies on a super-domain decomposition allowing for enhanced cache use and efficient dynamic load balancing. Beyond these HPC-related developments, SMILEI also provides additional physics modules for binary collisions, field and collisional ionization, and radiation back-reaction. This poster presents the SMILEI project, its HPC capabilities, and some of the physics problems tackled with SMILEI.
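    At the heart of any PIC code sits the particle push. As a generic textbook illustration (this is the standard Boris rotation, not SMILEI's actual implementation), a minimal Python sketch:

        import numpy as np

        def boris_push(v, e_field, b_field, q_over_m, dt):
            """One Boris step: advance velocity v by dt in fields E and B.
            Half an electric kick, a magnetic rotation, then the second
            half of the electric kick."""
            half = 0.5 * q_over_m * dt
            v_minus = v + half * e_field
            t = half * b_field
            s = 2.0 * t / (1.0 + np.dot(t, t))
            v_prime = v_minus + np.cross(v_minus, t)
            v_plus = v_minus + np.cross(v_prime, s)
            return v_plus + half * e_field

        # Electron gyrating in a 1 T field along z (SI units)
        v = np.array([1.0e5, 0.0, 0.0])
        print(boris_push(v, e_field=np.zeros(3),
                         b_field=np.array([0.0, 0.0, 1.0]),
                         q_over_m=-1.76e11, dt=1.0e-12))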

  17. Simulation of droplet impact onto a deep pool for large Froude numbers in different open-source codes

    Science.gov (United States)

    Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.

    2017-11-01

    A droplet impact on a deep pool can induce macro-scale or micro-scale effects such as a crown splash, a high-speed jet, or the formation of secondary droplets or thin liquid films. The outcome depends on the diameter and velocity of the droplet, the liquid properties, the effects of external forces, and other factors that a set of dimensionless criteria can account for. In the present research, we considered the droplet and the pool to consist of the same viscous incompressible liquid; we took surface tension into account but neglected gravity forces. We used two open-source codes (OpenFOAM and Gerris) for our computations and assessed their suitability for simulating the free-surface flows that may follow a droplet impact on a pool. Both codes simulated several modes of droplet impact. We estimated the effect of the liquid properties in terms of the Reynolds number and the Weber number. Numerical simulation enabled us to find the boundaries between different modes of droplet impact on a deep pool and to plot the corresponding mode maps. The ratio of the liquid density to that of the surrounding gas induces several changes in the mode maps: increasing this density ratio suppresses the crown splash.
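    The mode boundaries in such maps are drawn in terms of the standard dimensionless groups. A minimal Python sketch of their computation (the example droplet parameters are placeholders, not conditions from this study):

        def droplet_numbers(rho, mu, sigma, d, v, g=9.81):
            """Dimensionless groups governing droplet impact on a pool:
            Reynolds Re = rho*v*d/mu, Weber We = rho*v^2*d/sigma,
            Froude Fr = v^2/(g*d)."""
            return {
                "Re": rho * v * d / mu,
                "We": rho * v**2 * d / sigma,
                "Fr": v**2 / (g * d),
            }

        # Water droplet, 2 mm diameter, impacting at 3 m/s
        print(droplet_numbers(rho=998.0, mu=1.0e-3, sigma=0.072, d=2.0e-3, v=3.0))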

  18. Bug-Fixing and Code-Writing: The Private Provision of Open Source Software

    DEFF Research Database (Denmark)

    Bitzer, Jürgen; Schröder, Philipp

    2002-01-01

    Open source software (OSS) is a public good. A self-interested individual would consider providing such software if the benefits he gained from having it justified the cost of programming. Nevertheless, each agent is tempted to free ride and wait for others to develop the software instead...

  19. SETMDC: Preprocessor for CHECKR, FIZCON, INTER, etc. ENDF Utility source codes

    International Nuclear Information System (INIS)

    Dunford, Charles L.

    2002-01-01

    Description of program or function: SETMDC-6.13 is a utility program that adapts the source decks of the following set of programs to different computers: CHECKR-6.13; FIZCON-6.13; GETMAT-6.13; INTER-6.13; LISTEF-6; PLOTEF-6; PSYCHE-6; STANEF-6.13.

  20. ON CODE REFACTORING OF THE DIALOG SUBSYSTEM OF CDSS PLATFORM FOR THE OPEN-SOURCE MIS OPENMRS

    Directory of Open Access Journals (Sweden)

    A. V. Semenets

    2016-08-01

    The developer tools and software API of the open-source MIS OpenMRS are reviewed. The results of refactoring the code of the dialog subsystem of the CDSS platform, implemented as a module for the open-source MIS OpenMRS, are presented. The information model of the database of the CDSS dialog subsystem was updated in accordance with the MIS OpenMRS requirements. The Model-View-Controller (MVC) based architecture of the CDSS dialog subsystem was re-implemented in the Java programming language using the Spring and Hibernate frameworks. The MIS OpenMRS Encounter portlet form is developed as an extension for integrating the CDSS dialog subsystem. The administrative module of the CDSS platform is recreated. The data exchange formats and methods for interaction between the OpenMRS CDSS dialog subsystem module and the DecisionTree GAE service are re-implemented using AJAX via the jQuery library.