WorldWideScience

Sample records for superposition principle measurements

  1. A superposition principle in quantum logics

    International Nuclear Information System (INIS)

    Pulmannova, S.

    1976-01-01

    A new definition of the superposition principle in quantum logics is given which enables us to define the sectors. It is shown that the superposition principle holds only in the irreducible quantum logics. (orig.)

  2. The principle of superposition in human prehension.

    Science.gov (United States)

    Zatsiorsky, Vladimir M; Latash, Mark L; Gao, Fan; Shim, Jae Kun

    2004-03-01

    The experimental evidence supports the validity of the principle of superposition for multi-finger prehension in humans. Forces and moments of individual digits are defined by two independent commands: "Grasp the object stronger/weaker to prevent slipping" and "Maintain the rotational equilibrium of the object". The effects of the two commands are summed up.

  3. On the superposition principle and its physics content

    International Nuclear Information System (INIS)

    Roos, M.

    1984-01-01

    What is commonly denoted the superposition principle is shown to consist of three different physical assumptions: conservation of probability, completeness, and some phase conditions. The phase conditions constitute the essential physical content of the superposition principle and are exemplified by the Kobayashi-Maskawa matrix. Some suggestions for testing the superposition principle are given. (Auth.)

  4. On the superposition principle in interference experiments.

    Science.gov (United States)

    Sinha, Aninda; Vijay, Aravind H; Sinha, Urbasi

    2015-05-14

    The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the application of the principle. We have found excellent agreement between the analytic distribution and those that have been earlier estimated by numerical integration as well as resource-intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations and to the non-relativistic Schrödinger equation.
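    For a triple-slit experiment the Sorkin parameter has the standard combinatorial form ε = P_ABC − P_AB − P_BC − P_AC + P_A + P_B + P_C. As a quick illustration of why it serves as a deviation measure, the Python sketch below (a generic textbook construction, not the paper's analytic formula) shows that ε vanishes identically under the naive superposition assumption ψ_ABC = ψ_A + ψ_B + ψ_C; exact treatments such as FDTD pick up small nonzero corrections. The amplitudes are arbitrary illustrative numbers.

    ```python
    import numpy as np

    # Sorkin parameter for a triple slit, computed with the Born rule from the
    # naive sum of single-slit amplitudes. Under this assumption it is
    # identically zero (up to floating-point noise); nonzero values signal a
    # departure from the naive application of the superposition principle.
    rng = np.random.default_rng(0)
    psi = {s: rng.normal() + 1j * rng.normal() for s in "ABC"}

    def prob(open_slits):
        """Born-rule probability with the naive sum of single-slit amplitudes."""
        return abs(sum(psi[s] for s in open_slits)) ** 2

    epsilon = (prob("ABC") - prob("AB") - prob("BC") - prob("AC")
               + prob("A") + prob("B") + prob("C"))
    print(f"Sorkin parameter (naive superposition): {epsilon:.2e}")  # ~0
    ```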

  5. Testing the quantum superposition principle: matter waves and beyond

    Science.gov (United States)

    Ulbricht, Hendrik

    2015-05-01

    New technological developments allow us to explore the quantum properties of very complex systems, bringing the question of whether macroscopic systems also share such features within experimental reach. Interest in this question is increased by the fact that, on the theory side, many suggest that the quantum superposition principle is not exact, with departures from it growing larger the more macroscopic the system. Testing the superposition principle intrinsically also means testing suggested extensions of quantum theory, so-called collapse models. We will report on three new proposals to experimentally test the superposition principle with nanoparticle interferometry, optomechanical devices, and spectroscopic experiments in the frequency domain. We will also report on the status of optical levitation and cooling experiments with nanoparticles in our labs, working towards an Earth-bound matter-wave interferometer to test the superposition principle for a particle mass of one million amu (atomic mass units).

  6. Superposition Principle in Auger Recombination of Charged and Neutral Multicarrier States in Semiconductor Quantum Dots.

    Science.gov (United States)

    Wu, Kaifeng; Lim, Jaehoon; Klimov, Victor I

    2017-08-22

    Application of colloidal semiconductor quantum dots (QDs) in optical and optoelectronic devices is often complicated by unintentional generation of extra charges, which opens fast nonradiative Auger recombination pathways whereby the recombination energy of an exciton is quickly transferred to the extra carrier(s) and ultimately dissipated as heat. Previous studies of Auger recombination have primarily focused on neutral and, more recently, negatively charged multicarrier states. The Auger dynamics of positively charged species remains poorly explored due to difficulties in creating, stabilizing, and detecting excess holes in the QDs. Here we apply photochemical doping to prepare both negatively and positively charged CdSe/CdS QDs with two distinct core/shell interfacial profiles ("sharp" versus "smooth"). Using neutral and charged QD samples we evaluate Auger lifetimes of biexcitons, negative and positive trions (an exciton with an extra electron or hole, respectively), and multiply negatively charged excitons. Using these measurements, we demonstrate that Auger decay of both neutral and charged multicarrier states can be presented as a superposition of independent elementary three-particle Auger events. As one manifestation of the superposition principle, we observe that the biexciton Auger decay rate can be presented as a sum of the Auger rates for the independent negative and positive trion pathways. By comparing the measurements on the QDs with the "sharp" versus "smooth" interfaces, we also find that while affecting the absolute values of Auger lifetimes, manipulation of the shape of the confinement potential does not lead to violation of the superposition principle, which still allows us to accurately predict the biexciton Auger lifetimes based on the measured negative and positive trion dynamics. These findings indicate considerable robustness of the superposition principle as applied to Auger decay of charged and neutral multicarrier states.

  7. A multidimensional superposition principle and wave switching in integrable and nonintegrable soliton models

    Energy Technology Data Exchange (ETDEWEB)

    Alexeyev, Alexander A [Laboratory of Computer Physics and Mathematical Simulation, Research Division, Room 247, Faculty of Phys.-Math. and Natural Sciences, Peoples' Friendship University of Russia, 6 Miklukho-Maklaya street, Moscow 117198 (Russian Federation) and Department of Mathematics 1, Faculty of Cybernetics, Moscow State Institute of Radio Engineering, Electronics and Automatics, 78 Vernadskogo Avenue, Moscow 117454 (Russian Federation)

    2004-11-26

    In the framework of a multidimensional superposition principle, a series of computer experiments with integrable and nonintegrable models is carried out with the goal of verifying the existence of a switching effect and superposition in soliton-perturbation interactions for a wide class of nonlinear PDEs. (letter to the editor)

  8. Collapsing a perfect superposition to a chosen quantum state without measurement.

    Directory of Open Access Journals (Sweden)

    Ahmed Younes

    Full Text Available Given a perfect superposition of [Formula: see text] states on a quantum system of [Formula: see text] qubits, we propose a fast quantum algorithm for collapsing the perfect superposition to a chosen quantum state [Formula: see text] without applying any measurements. The basic idea is to use a phase-destruction mechanism. Two operators are used: the first applies a phase shift and a temporary entanglement to mark [Formula: see text] in the superposition, and the second applies selective phase shifts on the states in the superposition according to their Hamming distance from [Formula: see text]. The generated state can be used as an excellent input state for testing quantum memories and linear-optics quantum computers. We make no assumptions about the used operators and applied quantum gates, but our result implies that for this purpose the number of qubits in the quantum register offers no advantage, in principle, over the obvious measurement-based feedback protocol.

  9. Two new proofs of the test particle superposition principle of plasma kinetic theory

    International Nuclear Information System (INIS)

    Krommes, J.A.

    1975-12-01

    The test particle superposition principle of plasma kinetic theory is discussed in relation to the recent theory of two-time fluctuations in plasma given by Williams and Oberman. Both a new deductive and a new inductive proof of the principle are presented. The fundamental observation is that two-time expectations of one-body operators are determined completely in terms of the (x,v) phase space density autocorrelation, which to lowest order in the discreteness parameter obeys the linearized Vlasov equation with singular initial condition. For the deductive proof, this equation is solved formally using time-ordered operators, and the solution then rearranged into the superposition principle. The inductive proof is simpler than Rostoker's, although similar in some ways; it differs in that first-order equations for pair correlation functions need not be invoked. It is pointed out that the superposition principle is also applicable to the short-time theory of neutral fluids.

  10. The general use of the time-temperature-pressure superposition principle

    DEFF Research Database (Denmark)

    Rasmussen, Henrik Koblitz

    This note is a supplement to Dynamics of Polymeric Liquids (DPL), section 3.6(a). DPL concerns only material functions and only the effect of temperature on these. This note is a short introduction to the general use of the time-temperature-pressure superposition principle.

  11. Superposition Quantification

    Science.gov (United States)

    Chang, Li-Na; Luo, Shun-Long; Sun, Yuan

    2017-11-01

    The principle of superposition is universal and lies at the heart of quantum theory. Although superposition has occupied a central and pivotal place ever since the inception of quantum mechanics a century ago, rigorous and systematic studies of the quantification issue have attracted significant interest only in recent years, and many related problems remain to be investigated. In this work we introduce a figure of merit which quantifies superposition from an intuitive and direct perspective, investigate its fundamental properties, connect it to some coherence measures, illustrate it through several examples, and apply it to analyze wave-particle duality. Supported by the Science Challenge Project under Grant No. TZ2016002; the Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, Beijing; and the Key Laboratory of Random Complex Structures and Data Science, Chinese Academy of Sciences, under Grant No. 2008DP173182
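    The paper's own figure of merit is not reproduced here, but as a minimal sketch of the kind of coherence measure such superposition quantifiers connect to, the following computes the l1-norm of coherence, C_l1(ρ) = Σ_{i≠j} |ρ_ij|, for a maximally superposed pure state and for the maximally mixed state (which contains no superposition). This standard measure is assumed here purely as an illustration.

    ```python
    import numpy as np

    # l1-norm of coherence in a fixed basis: sum of the moduli of the
    # off-diagonal density-matrix elements. It is 1 for |+><+| (maximal
    # single-qubit superposition) and 0 for the incoherent mixture I/2.
    def l1_coherence(rho):
        return np.abs(rho).sum() - np.trace(np.abs(rho)).real

    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # maximal superposition
    rho_plus = np.outer(plus, plus.conj())
    rho_mixed = np.eye(2) / 2                            # no superposition
    print("C_l1(|+><+|) =", round(l1_coherence(rho_plus), 3))   # 1.0
    print("C_l1(I/2)    =", round(l1_coherence(rho_mixed), 3))  # 0.0
    ```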

  12. Two new proofs of the test particle superposition principle of plasma kinetic theory

    International Nuclear Information System (INIS)

    Krommes, J.A.

    1976-01-01

    The test particle superposition principle of plasma kinetic theory is discussed in relation to the recent theory of two-time fluctuations in plasma given by Williams and Oberman. Both a new deductive and a new inductive proof of the principle are presented; the deductive approach appears here for the first time in the literature. The fundamental observation is that two-time expectations of one-body operators are determined completely in terms of the (x,v) phase space density autocorrelation, which to lowest order in the discreteness parameter obeys the linearized Vlasov equation with singular initial condition. For the deductive proof, this equation is solved formally using time-ordered operators, and the solution is then re-arranged into the superposition principle. The inductive proof is simpler than Rostoker's although similar in some ways; it differs in that first-order equations for pair correlation functions need not be invoked. It is pointed out that the superposition principle is also applicable to the short-time theory of neutral fluids.

  13. Approach to the nonrelativistic scattering theory based on the causality, superposition and unitarity principles

    International Nuclear Information System (INIS)

    Gajnutdinov, R.Kh.

    1983-01-01

    The possibility is studied of building the nonrelativistic scattering theory on the basis of the general physical principles of causality, superposition, and unitarity, making no use of the Schroedinger formalism. The suggested approach is shown to be more general than the nonrelativistic scattering theory based on the Schroedinger equation. The approach is applied to build a model of the scattering theory for a system which consists of heavy nonrelativistic particles and a light relativistic particle.

  14. Projective measurement onto arbitrary superposition of weak coherent state bases

    DEFF Research Database (Denmark)

    Izumi, Shuro; Takeoka, Masahiro; Wakui, Kentaro

    2018-01-01

    One of the peculiar features of quantum mechanics is that a superposition of macroscopically distinct states can exist. In optical systems, this is highlighted by a superposition of coherent states (SCS), i.e. a superposition of classical states. Recently this highly nontrivial quantum state and i...

  15. Decoherence bypass of macroscopic superpositions in quantum measurement

    International Nuclear Information System (INIS)

    Spehner, Dominique; Haake, Fritz

    2008-01-01

    We study a class of quantum measurement models. A microscopic object is entangled with a macroscopic pointer such that a distinct pointer position is tied to each eigenvalue of the measured object observable. Those different pointer positions mutually decohere under the influence of an environment. Overcoming limitations of previous approaches we (i) cope with initial correlations between pointer and environment by considering them initially in a metastable local thermal equilibrium, (ii) allow for object-pointer entanglement and environment-induced decoherence of distinct pointer readouts to proceed simultaneously, such that mixtures of macroscopically distinct object-pointer product states arise without intervening macroscopic superpositions, and (iii) go beyond the Markovian treatment of decoherence. (fast track communication)

  16. Long-term creep modeling of wood using time temperature superposition principle

    OpenAIRE

    Gamalath, Sandhya Samarasinghe

    1991-01-01

    Long-term creep and recovery models (master curves) were developed from short-term data using the time temperature superposition principle (TTSP) for kiln-dried southern pine loaded in compression parallel-to-grain and exposed to constant environmental conditions (~70°F, ~9% EMC). Short-term accelerated creep (17 hour) and recovery (35 hour) data were collected for each specimen over a range of temperatures (70°F-150°F) at a constant moisture condition of 9%. The compressive stra...

  17. Measurement-Induced Macroscopic Superposition States in Cavity Optomechanics

    DEFF Research Database (Denmark)

    Hoff, Ulrich Busk; Kollath-Bönig, Johann; Neergaard-Nielsen, Jonas Schou

    2016-01-01

    A novel protocol for generating quantum superpositions of macroscopically distinct states of a bulk mechanical oscillator is proposed, compatible with existing optomechanical devices operating in the bad-cavity limit. By combining a pulsed optomechanical quantum nondemolition (QND) interaction...

  18. Generalization of Abel's mechanical problem: The extended isochronicity condition and the superposition principle

    Energy Technology Data Exchange (ETDEWEB)

    Kinugawa, Tohru, E-mail: kinugawa@phoenix.kobe-u.ac.jp [Institute for Promotion of Higher Education, Kobe University, Kobe 657-8501 (Japan)

    2014-02-15

    This paper presents a simple but nontrivial generalization of Abel's mechanical problem, based on the extended isochronicity condition and the superposition principle. There are two primary aims. The first is to reveal the linear relation between the transit-time T and the travel-length X hidden behind the isochronicity problem that is usually discussed in terms of the nonlinear equation of motion d²X/dt² + dU/dX = 0 with U(X) being an unknown potential. Second, the isochronicity condition is extended for the possible Abel-transform approach to designing the isochronous trajectories of charged particles in spectrometers and/or accelerators for time-resolving experiments. Our approach is based on the integral formula for the oscillatory motion by Landau and Lifshitz [Mechanics (Pergamon, Oxford, 1976), pp. 27–29]. The same formula is used to treat the non-periodic motion that is driven by U(X). Specifically, this unknown potential is determined by the (linear) Abel transform X(U) ∝ A[T(E)], where X(U) is the inverse function of U(X), A = (1/√π)∫₀^E dU/√(E−U) is the so-called Abel operator, and T(E) is the prescribed transit-time for a particle with energy E to spend in the region of interest. Based on this Abel-transform approach, we have introduced the extended isochronicity condition: typically, τ = T_A(E) + T_N(E), where τ is a constant period, T_A(E) is the transit-time in the Abel type [A-type] region spanning X > 0 and T_N(E) is that in the Non-Abel type [N-type] region covering X < 0. As for the A-type region in X > 0, the unknown inverse function X_A(U) is determined from T_A(E) via the Abel-transform relation X_A(U) ∝ A[T_A(E)]. In contrast, the N-type region in X < 0 does not ensure this linear relation: the region is covered with a predetermined potential U_N(X) of some arbitrary choice, not necessarily obeying the Abel-transform relation.

  19. Superposition and macroscopic observation

    International Nuclear Information System (INIS)

    Cartwright, N.D.

    1976-01-01

    The principle of superposition has long plagued the quantum mechanics of macroscopic bodies. In at least one well-known situation - that of measurement - quantum mechanics predicts a superposition. It is customary to try to reconcile macroscopic reality and quantum mechanics by reducing the superposition to a mixture. To establish consistency with quantum mechanics, values for the apparatus after a measurement are to be distributed in the way predicted by the superposition. The distributions observed, however, are those of the mixture. The statistical predictions of quantum mechanics, it appears, are not borne out by observation in macroscopic situations. It has been shown that, insofar as specific ergodic hypotheses apply to the apparatus after the interaction, the superposition which evolves is experimentally indistinguishable from the corresponding mixture. In this paper an idealized model of the measuring situation is presented in which this consistency can be demonstrated. It includes a simplified version of the measurement solution proposed by Daneri, Loinger, and Prosperi (1962). The model should make clear the kind of statistical evidence required to carry off this approach, and the role of the ergodic hypotheses assumed. (Auth.)

  20. Evaluation of Class II treatment by cephalometric regional superpositions versus conventional measurements.

    Science.gov (United States)

    Efstratiadis, Stella; Baumrind, Sheldon; Shofer, Frances; Jacobsson-Hunt, Ulla; Laster, Larry; Ghafari, Joseph

    2005-11-01

    The aims of this study were (1) to evaluate cephalometric changes in subjects with Class II Division 1 malocclusion who were treated with headgear (HG) or Fränkel function regulator (FR) and (2) to compare findings from regional superpositions of cephalometric structures with those from conventional cephalometric measurements. Cephalographs were taken at baseline, after 1 year, and after 2 years of 65 children enrolled in a prospective randomized clinical trial. The spatial location of the landmarks derived from regional superpositions was evaluated in a coordinate system oriented on natural head position. The superpositions included the best anatomic fit of the anterior cranial base, maxillary base, and mandibular structures. Both the HG and the FR were effective in correcting the distoclusion, and they generated enhanced differential growth between the jaws. Differences between cranial and maxillary superpositions regarding mandibular displacement (Point B, pogonion, gnathion, menton) were noted: the HG had a more horizontal vector on maxillary superposition that was also greater (.0001 < P < .05) than the horizontal displacement observed with the FR. This discrepancy appeared to be related to (1) the clockwise (backward) rotation of the palatal and mandibular planes observed with the HG; the palatal plane's rotation, which was transferred through the occlusion to the mandibular plane, was factored out on maxillary superposition; and (2) the interaction between the inclination of the maxillary incisors and the forward movement of the mandible during growth. Findings from superpositions agreed with conventional angular and linear measurements regarding the basic conclusions for the primary effects of HG and FR. However, the results suggest that inferences of mandibular displacement are more reliable from maxillary than cranial superposition when evaluating occlusal changes during treatment.

  1. Geometric measure of pairwise quantum discord for superpositions of multipartite generalized coherent states

    International Nuclear Information System (INIS)

    Daoud, M.; Ahl Laamara, R.

    2012-01-01

    We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. Special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of the geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl–Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger–Horne–Zeilinger states. -- Highlights: ► Pairwise quantum correlations in multipartite coherent states. ► Explicit expression of geometric quantum discord. ► Entanglement sudden death and quantum discord robustness. ► Generalized coherent states interpolating between Werner and Greenberger–Horne–Zeilinger states.

  2. Geometric measure of pairwise quantum discord for superpositions of multipartite generalized coherent states

    Energy Technology Data Exchange (ETDEWEB)

    Daoud, M., E-mail: m_daoud@hotmail.com [Department of Physics, Faculty of Sciences, University Ibnou Zohr, Agadir (Morocco); Ahl Laamara, R., E-mail: ahllaamara@gmail.com [LPHE-Modeling and Simulation, Faculty of Sciences, University Mohammed V, Rabat (Morocco); Centre of Physics and Mathematics, CPM, CNESTEN, Rabat (Morocco)

    2012-07-16

    We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. Special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of the geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl–Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger–Horne–Zeilinger states. -- Highlights: ► Pairwise quantum correlations in multipartite coherent states. ► Explicit expression of geometric quantum discord. ► Entanglement sudden death and quantum discord robustness. ► Generalized coherent states interpolating between Werner and Greenberger–Horne–Zeilinger states.

  3. Lifetime Prediction of Nano-Silica based Glass Fibre/Epoxy composite by Time Temperature Superposition Principle

    Science.gov (United States)

    Anand, Abhijeet; Banerjee, Poulami; Prusty, Rajesh Kumar; Ray, Bankin Chandra

    2018-03-01

    The incorporation of nano fillers in fibre reinforced polymer (FRP) composites has been a source of experimentation for researchers. Addition of nano fillers has been found to improve the mechanical, thermal, and electrical properties of glass fibre reinforced polymer (GFRP) composites. The in-plane mechanical properties of GFRP composites are mainly controlled by the fibers and therefore exhibit good values. However, the composites exhibit poor through-thickness properties, in which the matrix and interface are the dominant factors. Therefore, it is conducive to modify the matrix through dispersion of nano fillers. Creep is defined as the plastic deformation experienced by a material at a given temperature under constant stress over a prolonged period of time. Determination of the master curve using the time-temperature superposition principle is conducive to predicting the lifetime of materials involved in naval and structural applications. This is because such materials remain in service for a prolonged time period before failure, which is difficult to monitor directly. However, the failure behaviour can be extrapolated from behaviour over a shorter time at an elevated temperature, as is done in master creep analysis. The present research work dealt with time-temperature analysis of 0.1% SiO2-based GFRP composites fabricated through the hand-layup method. A composition of 0.1% SiO2 nano fillers with respect to the weight of the fibers was observed to provide optimized flexural properties. The time and temperature dependence of the flexural properties of GFRP composites with and without nano SiO2 was determined by conducting 3-point bend flexural creep tests over a range of temperatures. Stepwise isothermal creep tests from room temperature (30°C) to the glass transition temperature Tg (120°C) were performed with alternating creep/relaxation periods of 1 hour at each temperature. A constant stress of 40 MPa was applied during the creep tests. The time-temperature superposition principle was
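    As a minimal sketch of how a TTSP master curve is assembled (synthetic data; the Arrhenius activation energy and the compliance curves below are placeholders, not values from this study), short-term creep-compliance curves measured at several temperatures are shifted along the log-time axis by a temperature-dependent shift factor and concatenated into one long-term curve at the reference temperature:

    ```python
    import numpy as np

    R = 8.314          # gas constant, J/(mol K)
    E_A = 120e3        # apparent activation energy, J/mol (hypothetical)
    T_REF = 303.15     # reference temperature, K (30 degrees C)

    def shift_factor(temp_k):
        """Arrhenius horizontal shift factor log10(a_T) relative to T_REF."""
        return (E_A / (2.303 * R)) * (1.0 / temp_k - 1.0 / T_REF)

    def master_curve(curves):
        """curves: {temp_K: (time_s, compliance)} -> sorted (log10 reduced time, J)."""
        pts = []
        for temp_k, (t, j) in curves.items():
            # higher temperature -> negative log10(a_T) -> shift to longer times
            log_t_reduced = np.log10(t) - shift_factor(temp_k)
            pts.extend(zip(log_t_reduced, j))
        return np.array(sorted(pts))

    # Fake 1-hour creep tests at 30, 60 and 90 degrees C
    t = np.logspace(1, np.log10(3600), 50)
    curves = {T: (t, 1e-9 * t**0.1 * (T / T_REF)) for T in (303.15, 333.15, 363.15)}
    mc = master_curve(curves)
    print(f"master curve spans log10(t/s) = {mc[0, 0]:.1f} to {mc[-1, 0]:.1f}")
    ```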

  4. On-line and real-time diagnosis method for proton membrane fuel cell (PEMFC) stack by the superposition principle

    Science.gov (United States)

    Lee, Young-Hyun; Kim, Jonghyeon; Yoo, Seungyeol

    2016-09-01

    A critical cell voltage drop in a stack can be followed by a stack defect. One method of detecting a defective cell is cell voltage monitoring. Other methods are based on the nonlinear frequency response. In this paper, the superposition principle is introduced for the diagnosis of PEMFC stacks. If a critical cell voltage drop exists, the stack behaves as a nonlinear system. This nonlinearity can appear explicitly in the ohmic overpotential region of the voltage-current curve. To detect a critical cell voltage drop, the stack is excited by two direct test currents which have smaller amplitude than the operating stack current and are equally spaced about the operating current. If the difference between the voltage excited by one test current and the voltage excited by the load current is not equal to the difference between the other voltage response and the voltage excited by the load current, the stack acts as a nonlinear system. This means that there is a critical cell voltage drop. The deviation of this difference from zero reflects the degree of the system's nonlinearity. A simulation model for stack diagnosis is developed based on the superposition principle and experimentally validated.
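    A minimal numerical sketch of the diagnosis idea described above (the toy polarization curve and all parameter values are hypothetical, not from the paper): the stack is probed at two test currents placed symmetrically about the operating current, and the two voltage differences relative to the operating point are compared; a nonzero residual flags nonlinearity, i.e. a possible critical cell voltage drop.

    ```python
    def polarization(i, healthy=True):
        """Toy stack voltage-current curve (V); not a real PEMFC model."""
        v = 100.0 - 0.05 * i                      # linear ohmic region
        if not healthy:
            v -= 0.002 * max(0.0, i - 80.0) ** 2  # nonlinear defect term
        return v

    def superposition_residual(measure, i0, di):
        v_minus, v_0, v_plus = measure(i0 - di), measure(i0), measure(i0 + di)
        # zero for a linear system: (V0 - V-) - (V+ - V0)
        return (v_0 - v_minus) - (v_plus - v_0)

    for healthy in (True, False):
        r = superposition_residual(lambda i: polarization(i, healthy),
                                   i0=100.0, di=10.0)
        print(f"healthy={healthy}: residual = {r:.3f} V")  # 0.000 V if linear
    ```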

  5. Long-distance measurement-device-independent quantum key distribution with coherent-state superpositions.

    Science.gov (United States)

    Yin, H-L; Cao, W-F; Fu, Y; Tang, Y-L; Liu, Y; Chen, T-Y; Chen, Z-B

    2014-09-15

    Measurement-device-independent quantum key distribution (MDI-QKD) with the decoy-state method is believed to be securely applicable against various hacking attacks in practical quantum key distribution systems. Recently, coherent-state superpositions (CSS) have emerged as an alternative to single-photon qubits for quantum information processing and metrology. Here, in this Letter, CSS are exploited as the source in MDI-QKD. We present an analytical method that gives two tight formulas to estimate the lower bound of the yield and the upper bound of the bit error rate. We exploit standard statistical analysis and the Chernoff bound to perform the parameter estimation; the Chernoff bound provides good bounds in long-distance MDI-QKD. Our results show that with CSS, both the secure transmission distance and the secure key rate are significantly improved compared with those of weak coherent states in the finite-data case.

  6. Measuring the band structures of periodic beams using the wave superposition method

    Science.gov (United States)

    Junyi, L.; Ruffini, V.; Balint, D.

    2016-11-01

    Phononic crystals and elastic metamaterials are artificially engineered periodic structures that have several interesting properties, such as negative effective stiffness in certain frequency ranges. An interesting property of phononic crystals and elastic metamaterials is the presence of band gaps, which are bands of frequencies where elastic waves cannot propagate. The presence of band gaps gives this class of materials the potential to be used as vibration isolators. In many studies, the band structures were used to evaluate the band gaps. The presence of band gaps in a finite structure is commonly validated by measuring the frequency response, as there are no direct methods of measuring the band structures. In this study, an experiment was conducted to determine the band structure of one-dimensional phononic crystals with two wave modes, such as a bi-material beam, using the frequency response at only 6 points, to validate the wave superposition method (WSM) introduced in a previous study. A bi-material beam and an aluminium beam with varying geometry were studied. The experiment was performed by hanging the beams freely, exciting one end of the beams, and measuring the acceleration at consecutive unit cells. The measured transfer functions of the beams agree with the analytical solutions apart from minor discrepancies. The band structure was then determined using WSM, and the band structure of one set of the waves was found to agree well with the analytical solutions. The measurements taken for the other set of waves, which are the evanescent waves in the bi-material beams, were inaccurate and noisy. The transfer functions at additional points of one of the beams were calculated from the measured band structure using WSM. The calculated transfer functions agree with the measured results except at the frequencies where the band structure was inaccurate. Lastly, a study of the potential sources of errors was also conducted using finite element modelling and the errors in

  7. Principles of electromigration measurements

    International Nuclear Information System (INIS)

    Roesch, F.

    1988-01-01

    Based on experimental applications of a modified version of on-line electromigration measurements of γ-emitting radionuclides in homogeneous aqueous electrolytes free of supporting materials, concepts for the calculation of stoichiometric and thermodynamic stability constants are developed. Normalized ion mobilities are discussed, reflecting changes of the overall ion mobility of the radioelement in its equilibrium reaction with respect to the individual ion mobilities of the central ion at identical electrolyte parameters (temperature, overall ionic strength). With model reactions, as well as with the complex formation of Tl(I) with bromide and sulfate, respectively, examples of practical realizations of these concepts are shown. (author)

  8. Active measurement-based quantum feedback for preparing and stabilizing superpositions of two cavity photon number states

    Science.gov (United States)

    Berube-Lauziere, Yves

    The measurement-based quantum feedback scheme developed and implemented by Haroche and collaborators to actively prepare and stabilize specific photon number states in cavity quantum electrodynamics (CQED) is a milestone achievement in the active protection of quantum states from decoherence. This feat was achieved by injecting, after each weak dispersive measurement of the cavity state via Rydberg atoms serving as cavity sensors, a classical field (coherent state) with a low average photon number to steer the cavity towards the targeted number state. This talk will present the generalization of the theory developed for targeting number states in order to prepare and stabilize desired superpositions of two cavity photon number states. Results from realistic simulations taking into account decoherence and imperfections in a CQED set-up will be presented. These demonstrate the validity of the generalized theory and point to the experimental feasibility of preparing and stabilizing such superpositions. This is a further step towards the active protection of more complex quantum states than number states. This work, cast in the context of CQED, is also almost readily applicable to circuit QED. YBL acknowledges financial support from the Institut Quantique through a Canada First Research Excellence Fund.

  9. Intercomparison of the GOS approach, superposition T-matrix method, and laboratory measurements for black carbon optical properties during aging

    International Nuclear Information System (INIS)

    He, Cenlin; Takano, Yoshi; Liou, Kuo-Nan; Yang, Ping; Li, Qinbin; Mackowski, Daniel W.

    2016-01-01

    We perform a comprehensive intercomparison of the geometric-optics surface-wave (GOS) approach, the superposition T-matrix method, and laboratory measurements for optical properties of fresh and coated/aged black carbon (BC) particles with complex structures. GOS and T-matrix calculations capture the measured optical (i.e., extinction, absorption, and scattering) cross sections of fresh BC aggregates, with 5–20% differences depending on particle size. We find that the T-matrix results tend to be lower than the measurements, due to uncertainty in theoretical approximations of realistic BC structures, particle property measurements, and numerical computations in the method. On the contrary, the GOS results are higher than the measurements (hence the T-matrix results) for BC radii <100 nm, but lower for BC radii >100 nm. We find good agreement (differences <10%) between the two methods for BC radii >100 nm. We find small deviations (≤10%) in asymmetry factors computed from the two methods for most BC coating structures and sizes, but several complex structures have 10–30% differences. This study provides the foundation for downstream application of the GOS approach in radiative transfer and climate studies. - Highlights: • The GOS and T-matrix methods capture laboratory measurements of BC optical properties. • The GOS results are consistent with the T-matrix results for BC optical properties. • BC optical properties vary remarkably with coating structures and sizes during aging.

  10. Linear superposition solutions to nonlinear wave equations

    International Nuclear Information System (INIS)

    Liu Yu

    2012-01-01

    The solutions to a linear wave equation satisfy the principle of superposition, i.e., the linear superposition of two or more known solutions is still a solution of the linear wave equation. We show in this article that many nonlinear wave equations possess exact traveling wave solutions involving hyperbolic, trigonometric, and exponential functions, and that suitable linear combinations of these known solutions can constitute linear superposition solutions to some nonlinear wave equations with special structural characteristics. The linear superposition solutions to the generalized KdV equation K(2,2,1), the Oliver water wave equation, and the K(n,n) equation are given. The structural characteristics of the nonlinear wave equations having linear superposition solutions are analyzed, and the reason why solutions of the form of hyperbolic, trigonometric, and exponential functions can form linear superposition solutions is also discussed.
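    The opening claim is easy to verify symbolically. The sketch below (a generic illustration, not the K(2,2,1) or Oliver-equation solutions of the paper) checks with SymPy that an arbitrary linear combination of two solutions of the linear wave equation is again a solution, while even a simple scaling of a solution of the nonlinear inviscid Burgers equation fails to solve it:

    ```python
    import sympy as sp

    x, t, c, a, b = sp.symbols("x t c a b")

    # Linear wave equation u_tt = c^2 u_xx: superposition holds.
    u1 = sp.sin(x - c * t)        # two known solutions
    u2 = sp.exp(x + c * t)
    u = a * u1 + b * u2           # arbitrary linear combination
    wave_residual = sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2))
    print("linear wave equation residual:", wave_residual)   # 0

    # Nonlinear equation u_t + u*u_x = 0: superposition generally fails.
    v1 = x / (t + 1)              # an exact solution of inviscid Burgers
    v = 2 * v1                    # simple scaling already breaks it
    burgers_residual = sp.simplify(sp.diff(v, t) + v * sp.diff(v, x))
    print("Burgers residual for 2*v1:", burgers_residual)    # nonzero
    ```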

  11. Risk measurement with equivalent utility principles

    NARCIS (Netherlands)

    Denuit, M.; Dhaene, J.; Goovaerts, M.; Kaas, R.; Laeven, R.

    2006-01-01

    Risk measures have been studied for several decades in the actuarial literature, where they appeared under the guise of premium calculation principles. Risk measures and properties that risk measures should satisfy have recently received considerable attention in the financial mathematics

  12. Radioactivity measurements principles and practice

    CERN Document Server

    Mann, W B; Spernol, A

    2012-01-01

    The authors have addressed the basic need for internationally consistent standards and methods demanded by the new and increasing use of radioactive materials, radiopharmaceuticals and labelled compounds. Particular emphasis is given to the basic and practical problems that may be encountered in measuring radioactivity. The text provides information and recommendations in the areas of radiation protection, focusing on quality control and the precautions necessary for the preparation and handling of radioactive substances. New information is also presented on the applications of both traditiona

  13. Decision principles derived from risk measures

    NARCIS (Netherlands)

    Goovaerts, M.J.; Kaas, R.; Laeven, R.J.A.

    2010-01-01

    In this paper, we argue that a distinction exists between risk measures and decision principles. Though both are functionals assigning a real number to a random variable, we think there is a hierarchy between the two concepts. Risk measures operate on the first "level", quantifying the risk in the

  14. Chaos and Complexities Theories. Superposition and Standardized Testing: Are We Coming or Going?

    Science.gov (United States)

    Erwin, Susan

    2005-01-01

    The purpose of this paper is to explore the possibility of using the principle of "superposition of states" (commonly illustrated by Schrodinger's Cat experiment) to understand the process of using standardized testing to measure a student's learning. Comparisons from literature, neuroscience, and Schema Theory will be used to expound upon the…

  15. Network class superposition analyses.

    Directory of Open Access Journals (Sweden)

    Carl A B Pearson

    Full Text Available Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
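    The construction of T is straightforward to mock up. The toy below (three hand-picked 3-node boolean networks standing in for a rule-constrained class; not the Strong Inhibition rule or the yeast model of the paper) averages the deterministic state-transition matrices of the class members into a stochastic T and summarizes the class's dynamical disagreement by the Shannon entropy of its rows:

    ```python
    import numpy as np
    from itertools import product

    N = 3
    STATES = list(product((0, 1), repeat=N))

    def transition_matrix(rules):
        """rules: tuple of per-node update functions state -> {0,1}."""
        T = np.zeros((2**N, 2**N))
        for i, s in enumerate(STATES):
            nxt = tuple(f(s) for f in rules)
            T[i, STATES.index(nxt)] = 1.0
        return T

    # A tiny "class" of networks: AND-, OR- and majority-style updates.
    ensemble = [
        tuple(lambda s, j=j: s[(j + 1) % N] & s[(j + 2) % N] for j in range(N)),
        tuple(lambda s, j=j: s[(j + 1) % N] | s[(j + 2) % N] for j in range(N)),
        tuple(lambda s, j=j: int(s[j] + s[(j + 1) % N] + s[(j + 2) % N] >= 2)
              for j in range(N)),
    ]

    # T is the transition-by-transition superposition (here: average) of the
    # class members' dynamics; each row is a probability distribution.
    T = sum(transition_matrix(r) for r in ensemble) / len(ensemble)
    row_entropy = -np.sum(T * np.log2(np.where(T > 0, T, 1.0)), axis=1)
    print("mean row entropy of T (bits):", row_entropy.mean().round(3))
    ```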

  16. Measurement of the quantum superposition state of an imaging ensemble of photons prepared in orbital angular momentum states using a phase-diversity method

    International Nuclear Information System (INIS)

    Uribe-Patarroyo, Nestor; Alvarez-Herrero, Alberto; Belenguer, Tomas

    2010-01-01

    We propose the use of a phase-diversity technique to estimate the orbital angular momentum (OAM) superposition state of an ensemble of photons that passes through an optical system, proceeding from an extended object. The phase-diversity technique permits the estimation of the optical transfer function (OTF) of an imaging optical system. As the OTF is derived directly from the wave-front characteristics of the observed light, we redefine the phase-diversity technique in terms of a superposition of OAM states. We test this new technique experimentally and find coherent results among different tests, which gives us confidence in the estimation of the photon ensemble state. We find that this technique not only allows us to estimate the square of the amplitude of each OAM state, but also the relative phases among all states, thus providing complete information about the quantum state of the photons. This technique could be used to measure the OAM spectrum of extended objects in astronomy or in an optical communication scheme using OAM states. In this sense, the use of extended images could lead to new techniques in which the communication is further multiplexed along the field.

  17. The measure and significance of Bateman's principles.

    Science.gov (United States)

    Collet, Julie M; Dean, Rebecca F; Worley, Kirsty; Richardson, David S; Pizzari, Tommaso

    2014-05-07

    Bateman's principles explain sex roles and sexual dimorphism through sex-specific variance in mating success, reproductive success and their relationships within sexes (Bateman gradients). Empirical tests of these principles, however, have come under intense scrutiny. Here, we experimentally show that in replicate groups of red junglefowl, Gallus gallus, mating and reproductive successes were more variable in males than in females, resulting in a steeper male Bateman gradient, consistent with Bateman's principles. However, we use novel quantitative techniques to reveal that current methods typically overestimate Bateman's principles because they (i) infer mating success indirectly from offspring parentage, and thus miss matings that fail to result in fertilization, and (ii) measure Bateman gradients through the univariate regression of reproductive over mating success, without considering the substantial influence of other components of male reproductive success, namely female fecundity and paternity share. We also find a significant female Bateman gradient but show that this likely emerges as a spurious consequence of male preference for fecund females, emphasizing the need for experimental approaches to establish the causal relationship between reproductive and mating success. While providing qualitative support for Bateman's principles, our study demonstrates how current approaches can generate a misleading view of sex differences and roles.

  18. Engineering mesoscopic superpositions of superfluid flow

    International Nuclear Information System (INIS)

    Hallwood, D. W.; Brand, J.

    2011-01-01

    Modeling strongly correlated atoms demonstrates the possibility of preparing quantum superpositions that are robust against experimental imperfections and temperature. Such superpositions of vortex states are formed by adiabatic manipulation of interacting ultracold atoms confined to a one-dimensional ring trapping potential when stirred by a barrier. Here, we discuss the influence of nonideal experimental procedures and finite temperature. Adiabaticity conditions for changing the stirring rate reveal that superpositions of many atoms are most easily accessed in the strongly interacting, Tonks-Girardeau, regime, which is also the most robust at finite temperature. NOON-type superpositions of weakly interacting atoms are most easily created by adiabatically decreasing the interaction strength by means of a Feshbach resonance. The quantum dynamics of small numbers of particles is simulated, and the size of the superpositions is calculated based on their ability to make precision measurements. The experimental creation of strongly correlated and NOON-type superpositions with about 100 atoms seems feasible in the near future.

  19. The action uncertainty principle for continuous measurements

    Science.gov (United States)

    Mensky, Michael B.

    1996-02-01

    The action uncertainty principle (AUP) for the specification of the most probable readouts of continuous quantum measurements is proved, formulated in different forms and analyzed (for nonlinear as well as linear systems). Continuous monitoring of an observable A(p,q,t) with resolution Δa(t) is considered. The influence of the measurement process on the evolution of the measured system (quantum measurement noise) is represented by an additional term δF(t)A(p,q,t) in the Hamiltonian, where the function δF (a generalized fictitious force) is restricted by the AUP ∫|δF(t)| Δa(t) dt ≲ ℏ and arbitrary otherwise. Quantum-nondemolition (QND) measurements are analyzed with the help of the AUP. A simple uncertainty relation for continuous quantum measurements is derived. It states that the area of a certain band in the phase space should be of the order of ℏ. The width of the band depends on the measurement resolution while its length is determined by the deviation of the system, due to the measurement, from classical behavior.

  20. The action uncertainty principle for continuous measurements

    International Nuclear Information System (INIS)

    Mensky, M.B.

    1996-01-01

    The action uncertainty principle (AUP) for the specification of the most probable readouts of continuous quantum measurements is proved, formulated in different forms and analyzed (for nonlinear as well as linear systems). Continuous monitoring of an observable A(p,q,t) with resolution Δa(t) is considered. The influence of the measurement process on the evolution of the measured system (quantum measurement noise) is represented by an additional term δF(t)A(p,q,t) in the Hamiltonian, where the function δF (a generalized fictitious force) is restricted by the AUP ∫|δF(t)| Δa(t) dt ≲ ℏ and arbitrary otherwise. Quantum-nondemolition (QND) measurements are analyzed with the help of the AUP. A simple uncertainty relation for continuous quantum measurements is derived. It states that the area of a certain band in the phase space should be of the order of ℏ. The width of the band depends on the measurement resolution while its length is determined by the deviation of the system, due to the measurement, from classical behavior. (orig.)

  1. Effects of Heat-Treated Wood Particles on the Physico-Mechanical Properties and Extended Creep Behavior of Wood/Recycled-HDPE Composites Using the Time–Temperature Superposition Principle

    Directory of Open Access Journals (Sweden)

    Teng-Chun Yang

    2017-03-01

    Full Text Available This study investigated the effectiveness of heat-treated wood particles for improving the physico-mechanical properties and creep performance of wood/recycled-HDPE composites. The results reveal that the composites with heat-treated wood particles had significantly decreased moisture content, water absorption, and thickness swelling, while no improvements of the flexural properties or the wood screw holding strength were observed, except for the internal bond strength. Additionally, creep tests were conducted at a series of elevated temperatures using the time–temperature superposition principle (TTSP), and the TTSP-predicted creep compliance curves fit well with the experimental data. The creep resistance values of composites with heat-treated wood particles were greater than those having untreated wood particles due to the hydrophobic character of the treated wood particles and improved interfacial compatibility between the wood particles and polymer matrix. At a reference temperature of 20 °C, the improvement of creep resistance (ICR) of composites with heat-treated wood particles reached approximately 30% over a 30-year period, and it increased significantly with increasing reference temperature.

  2. Effects of Heat-Treated Wood Particles on the Physico-Mechanical Properties and Extended Creep Behavior of Wood/Recycled-HDPE Composites Using the Time–Temperature Superposition Principle

    Science.gov (United States)

    Yang, Teng-Chun; Chien, Yi-Chi; Wu, Tung-Lin; Hung, Ke-Chang; Wu, Jyh-Horng

    2017-01-01

    This study investigated the effectiveness of heat-treated wood particles for improving the physico-mechanical properties and creep performance of wood/recycled-HDPE composites. The results reveal that the composites with heat-treated wood particles had significantly decreased moisture content, water absorption, and thickness swelling, while no improvements of the flexural properties or the wood screw holding strength were observed, except for the internal bond strength. Additionally, creep tests were conducted at a series of elevated temperatures using the time–temperature superposition principle (TTSP), and the TTSP-predicted creep compliance curves fit well with the experimental data. The creep resistance values of composites with heat-treated wood particles were greater than those having untreated wood particles due to the hydrophobic character of the treated wood particles and improved interfacial compatibility between the wood particles and polymer matrix. At a reference temperature of 20 °C, the improvement of creep resistance (ICR) of composites with heat-treated wood particles reached approximately 30% over a 30-year period, and it increased significantly with increasing reference temperature. PMID:28772726

  3. Superposition Enhanced Nested Sampling

    Directory of Open Access Journals (Sweden)

    Stefano Martiniani

    2014-08-01

    Full Text Available The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.

  4. Principle of coincidence method and application in activity measurement

    International Nuclear Information System (INIS)

    Li Mou; Dai Yihua; Ni Jianzhong

    2008-01-01

    The basic principle of the coincidence method was discussed. The basic principle was generalized by analysing a practical example, and the theoretical conditions of the coincidence method were put forward. The causes of variation of the efficiency curve and the effect of dead time in activity measurement were explained using the above principle and conditions. This principle of the coincidence method provides the theoretical foundation for activity measurement. (authors)

  5. Lasers: principles, applications and energetic measures

    International Nuclear Information System (INIS)

    Subran, C.; Sagaut, J.; Lapointe, S.

    2009-01-01

    After having recalled the principles of a laser and the properties of the laser beam, the authors describe the following different types of lasers: solid state lasers, fiber lasers, semiconductor lasers, dye lasers and gas lasers. Then, their applications are given. Very high energy lasers can reproduce the phenomenon of nuclear fusion of hydrogen atoms. (O.M.)

  6. The principles of measuring forest fire danger

    Science.gov (United States)

    H. T. Gisborne

    1936-01-01

    Research in fire danger measurement was commenced in 1922 at the Northern Rocky Mountain Forest and Range Experiment Station of the U.S. Forest Service, with headquarters at Missoula, Mont. Since then investigations have been made concerning (1) what to measure, (2) how to measure, and (3) field use of these measurements. In all cases the laboratory or restricted...

  7. PRINCIPLES OF THE SUPPLY CHAIN PERFORMANCE MEASUREMENT

    OpenAIRE

    BEATA ŒLUSARCZYK; SEBASTIAN KOT

    2012-01-01

    Measurement of performance is a crucial activity in every business management, allowing for increased effectiveness. The lack of suitable performance measurement is especially noticeable in complex systems such as supply chains. Responsible persons cannot manage effectively without a suitable set of measures that serve as a basis for comparison with previous data or with the performance of other supply chains. The analysis shows that it is very hard to find a balanced set of supply chain performance measures that sh...

  8. Basic principles for measurement of intramuscular pressure

    Science.gov (United States)

    Hargens, A. R.; Ballard, R. E.

    1995-01-01

    We review historical and methodological approaches to measurements of intramuscular pressure (IMP) in humans. These techniques provide valuable measures of muscle tone and activity as well as diagnostic criteria for evaluation of exertional compartment syndrome. Although the wick and catheter techniques provide accurate measurements of IMP at rest, their value for exercise studies and diagnosis of exertional compartment syndrome is limited because of low frequency response and hydrostatic (static and inertial) pressure artifacts. Presently, most information on diagnosis of exertional compartment syndromes during dynamic exercise is available using the Myopress catheter. However, future research and clinical diagnosis using IMP can be optimized by the use of a miniature transducer-tipped catheter such as the Millar Mikro-tip.

  9. Superposition Attacks on Cryptographic Protocols

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Funder, Jakob Løvstad; Nielsen, Jesper Buus

    2011-01-01

    Attacks on classical cryptographic protocols are usually modeled by allowing an adversary to ask queries from an oracle. Security is then defined by requiring that as long as the queries satisfy some constraint, there is some problem the adversary cannot solve, such as compute a certain piece of information. In this paper, we introduce a fundamentally new model of quantum attacks on classical cryptographic protocols, where the adversary is allowed to ask several classical queries in quantum superposition. This is a strictly stronger attack than the standard one, and we consider the security of several primitives in this model. We show that a secret-sharing scheme that is secure with threshold t in the standard model is secure against superposition attacks if and only if the threshold is lowered to t/2. We use this result to give zero-knowledge proofs for all of NP in the common reference

  10. A comparison of two different sound intensity measurement principles

    DEFF Research Database (Denmark)

    Jacobsen, Finn; de Bree, Hans-Elias

    2005-01-01

    This paper compares the two measurement principles with particular regard to the sources of error in sound power determination. It is shown that the phase calibration of intensity probes that combine different transducers is very critical below 500 Hz if the measurement surface is very close to the source under test...

  11. Universal uncertainty principle in the measurement operator formalism

    International Nuclear Information System (INIS)

    Ozawa, Masanao

    2005-01-01

    Heisenberg's uncertainty principle has been understood to set a limitation on measurements; however, the long-standing mathematical formulation established by Heisenberg, Kennard, and Robertson does not allow such an interpretation. Recently, a new relation was found that gives a universally valid relation between noise and disturbance in general quantum measurements, and it has become clear that the new relation plays the role of a first principle for deriving various quantum limits on measurement and information processing in a unified treatment. This paper examines the above development of the noise-disturbance uncertainty principle in the model-independent approach based on the measurement operator formalism, which is widely accepted as describing a class of generalized measurements in the field of quantum information. We obtain explicit formulae for the noise and disturbance of measurements given by measurement operators, and show that projective measurements do not satisfy the Heisenberg-type noise-disturbance relation that is typical of the gamma-ray microscope thought experiments. We also show that the disturbance on a Pauli operator of a projective measurement of another Pauli operator constantly equals √2, and examine how this measurement violates the Heisenberg-type relation but satisfies the new noise-disturbance relation.

  12. Interplay of gravitation and linear superposition of different mass eigenstates

    International Nuclear Information System (INIS)

    Ahluwalia, D.V.

    1998-01-01

    The interplay of gravitation and the quantum-mechanical principle of linear superposition induces a new set of neutrino oscillation phases. These ensure that the flavor-oscillation clocks, inherent in the phenomenon of neutrino oscillations, redshift precisely as required by Einstein's theory of gravitation. The physical observability of these phases in the context of the solar neutrino anomaly, type-II supernova, and certain atomic systems is briefly discussed. copyright 1998 The American Physical Society

  13. Macroscopicity of quantum superpositions on a one-parameter unitary path in Hilbert space

    Science.gov (United States)

    Volkoff, T. J.; Whaley, K. B.

    2014-12-01

    We analyze quantum states formed as superpositions of an initial pure product state and its image under local unitary evolution, using two measurement-based measures of superposition size: one based on the optimal quantum binary distinguishability of the branches of the superposition and another based on the ratio of the maximal quantum Fisher information of the superposition to that of its branches, i.e., the relative metrological usefulness of the superposition. A general formula for the effective sizes of these states according to the branch-distinguishability measure is obtained and applied to superposition states of N quantum harmonic oscillators composed of Gaussian branches. Considering optimal distinguishability of pure states on a time-evolution path leads naturally to a notion of distinguishability time that generalizes the well-known orthogonalization times of Mandelstam and Tamm and Margolus and Levitin. We further show that the distinguishability time provides a compact operational expression for the superposition size measure based on the relative quantum Fisher information. By restricting the maximization procedure in the definition of this measure to an appropriate algebra of observables, we show that the superposition size of, e.g., NOON states and hierarchical cat states, can scale linearly with the number of elementary particles comprising the superposition state, implying precision scaling inversely with the total number of photons when these states are employed as probes in quantum parameter estimation of a 1-local Hamiltonian in this algebra.
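
    As an illustration of the relative-Fisher-information size measure described above, the following minimal Python sketch (not from the paper) computes the pure-state quantum Fisher information F_Q = 4 Var_psi(H) and compares a GHZ-type cat state of N qubits against one of its product-state branches; the GHZ state and the collective 1-local generators are illustrative stand-ins for the paper's oscillator and NOON-state examples.

      import numpy as np
      from functools import reduce

      I2 = np.eye(2)
      sz = np.diag([1.0, -1.0])
      sx = np.array([[0.0, 1.0], [1.0, 0.0]])

      def collective(op, N):
          """Collective 1-local generator: sum over the N qubits of op/2."""
          terms = []
          for i in range(N):
              ops = [I2] * N
              ops[i] = op / 2.0
              terms.append(reduce(np.kron, ops))
          return sum(terms)

      def qfi_pure(psi, H):
          """Quantum Fisher information of a pure state: F_Q = 4 Var_psi(H)."""
          Hpsi = H @ psi
          mean = np.vdot(psi, Hpsi).real
          return 4.0 * (np.vdot(Hpsi, Hpsi).real - mean ** 2)

      N = 4
      up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
      branch = reduce(np.kron, [up] * N)              # |00...0>
      other = reduce(np.kron, [down] * N)             # |11...1>
      cat = (branch + other) / np.sqrt(2)             # GHZ cat state

      F_cat = qfi_pure(cat, collective(sz, N))        # N^2: Heisenberg scaling
      F_branch = qfi_pure(branch, collective(sx, N))  # N: shot-noise scaling
      print(F_cat / F_branch)                         # effective size ~ N (4.0 here)

    Under these assumptions the superposition attains F_Q = N^2 while each branch is limited to F_Q = N, so the ratio-based effective size grows linearly with the number of constituents, matching the scaling stated in the abstract.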

  14. Superposition as a logical glue

    Directory of Open Access Journals (Sweden)

    Andrea Asperti

    2011-03-01

    Full Text Available The typical mathematical language systematically exploits notational and logical abuses whose resolution requires not just knowledge of domain-specific notation and conventions, but also non-trivial skills in the given mathematical discipline. A large part of this background knowledge is expressed in the form of equalities and isomorphisms, allowing mathematicians to move freely between different incarnations of the same entity without even mentioning the transformation. Providing ITP systems with similar capabilities seems to be a major way to improve their intelligence and to ease the communication between the user and the machine. The present paper discusses our experience of integrating a superposition calculus within the Matita interactive prover, providing in particular a very flexible, "smart" application tactic, and a simple, innovative approach to automation.

  15. Principles in selecting human capital measurements and metrics

    Directory of Open Access Journals (Sweden)

    Pharny D. Chrysler-Fox

    2014-09-01

    Research purpose: The study explored principles in selecting human capital measurements, drawing on the views and recommendations of human resource management professionals, all experts in human capital measurement. Motivation for the study: The motivation was to advance the understanding of selecting appropriate and strategically valid measurements, in order for human resource practitioners to contribute to creating value and driving strategic change. Research design, approach and method: A qualitative approach, with purposively selected cases from a selected panel of human capital measurement experts, generated a dataset through unstructured interviews, which were analysed thematically. Main findings: Nineteen themes were found. They represent a process that considers the centrality of the business strategy and a systemic integration across multiple value chains in the organisation through business partnering, in order to select measurements and generate management level-appropriate information. Practical/managerial implications: Measurement practitioners, in partnership with management from other functions, should integrate the business strategy across multiple value chains in order to select measurements. Analytics becomes critical in discovering relationships and formulating hypotheses to understand value creation. Higher education institutions should produce graduates able to deal with systems thinking and to operate within complexity. Contribution: This study identified principles to select measurements and metrics. Noticeable is the move away from the interrelated scorecard perspectives to a systemic view of the organisation in order to understand value creation. In addition, the findings may help to position the human resource management function as a strategic asset.

  16. Continuous quantum measurements and the action uncertainty principle

    Science.gov (United States)

    Mensky, Michael B.

    1992-09-01

    The path-integral approach to the quantum theory of continuous measurements has been developed in preceding works of the author. According to this approach, the measurement amplitude determining the probabilities of different outputs of the measurement can be evaluated in the form of a restricted path integral (a path integral "in finite limits"). With the help of the measurement amplitude, the maximum deviation of measurement outputs from the classical one can be easily determined. The aim of the present paper is to express this deviation in the simpler and more transparent form of a specific uncertainty principle, called the action uncertainty principle (AUP). The simplest (but weakest) form of the AUP is δS ≳ ℏ, where S is the action functional. It can be applied for a simple derivation of the Bohr-Rosenfeld inequality for the measurability of a gravitational field. A stronger form of the AUP (for ideal measurements performed in the quantum regime), with wider application, is |∫_{t'}^{t''} (δS[q]/δq(t)) Δq(t) dt| ≃ ℏ, where the paths [q] and [Δq] stand correspondingly for the measurement output and the measurement error. It can also be presented in the symbolic form Δ(Equation) Δ(Path) ≃ ℏ. This means that the deviation of the observed (measured) motion from motion obeying the classical equation of motion is reciprocally proportional to the uncertainty in the path (the latter uncertainty resulting from the measurement error). A consequence of the AUP is that improving the measurement precision beyond the threshold of the quantum regime leads to decreasing information resulting from the measurement.

  17. The superposition of the states and the logic approach to quantum mechanics

    International Nuclear Information System (INIS)

    Zecca, A.

    1981-01-01

    An axiomatic approach to quantum mechanics is proposed in terms of a 'logic' scheme satisfying a suitable set of axioms. In this context the notions of pure, maximal, and characteristic states, as well as the superposition relation and the superposition principle for the states, are studied. The role the superposition relation plays in the reversible and in the irreversible dynamics is investigated, and its connection with the tensor product is studied. Throughout the paper, the W*-algebra model is used to exemplify results and properties of the general scheme. (author)

  18. Student Ability to Distinguish between Superposition States and Mixed States in Quantum Mechanics

    Science.gov (United States)

    Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.

    2015-01-01

    Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the…

  19. A sensitive dynamic viscometer for measuring the complex shear modulus in a steady shear flow using the method of orthogonal superposition

    NARCIS (Netherlands)

    Zeegers, J.C.H.; Zeegers, Jos; van den Ende, Henricus T.M.; Blom, C.; Altena, E.G.; Beukema, Gerrit J.; Beukema, G.J.; Mellema, J.

    1995-01-01

    A new instrument to carry out complex viscosity measurements in equilibrium and in a steady shear flow has been developed. A small-amplitude harmonic excitation is superimposed orthogonally to the steady shear rate component. It is realized by a thin-walled cylinder, which oscillates in the axial...

  20. Macroscopic superposition states and decoherence by quantum telegraph noise

    Energy Technology Data Exchange (ETDEWEB)

    Abel, Benjamin Simon

    2008-12-19

    In the first part of the present thesis we address the question about the size of superpositions of macroscopically distinct quantum states. We propose a measure for the ''size'' of a Schroedinger cat state, i.e. a quantum superposition of two many-body states with (supposedly) macroscopically distinct properties, by counting how many single-particle operations are needed to map one state onto the other. We apply our measure to a superconducting three-junction flux qubit put into a superposition of clockwise and counterclockwise circulating supercurrent states and find this Schroedinger cat to be surprisingly small. The unavoidable coupling of any quantum system to many environmental degrees of freedom leads to an irreversible loss of information about an initially prepared superposition of quantum states. This phenomenon, commonly referred to as decoherence or dephasing, is the subject of the second part of the thesis. We have studied the time evolution of the reduced density matrix of a two-level system (qubit) subject to quantum telegraph noise which is the major source of decoherence in Josephson charge qubits. We are able to derive an exact expression for the time evolution of the reduced density matrix. (orig.)

  1. Macroscopic superposition states and decoherence by quantum telegraph noise

    International Nuclear Information System (INIS)

    Abel, Benjamin Simon

    2008-01-01

    In the first part of the present thesis we address the question about the size of superpositions of macroscopically distinct quantum states. We propose a measure for the ''size'' of a Schroedinger cat state, i.e. a quantum superposition of two many-body states with (supposedly) macroscopically distinct properties, by counting how many single-particle operations are needed to map one state onto the other. We apply our measure to a superconducting three-junction flux qubit put into a superposition of clockwise and counterclockwise circulating supercurrent states and find this Schroedinger cat to be surprisingly small. The unavoidable coupling of any quantum system to many environmental degrees of freedom leads to an irreversible loss of information about an initially prepared superposition of quantum states. This phenomenon, commonly referred to as decoherence or dephasing, is the subject of the second part of the thesis. We have studied the time evolution of the reduced density matrix of a two-level system (qubit) subject to quantum telegraph noise which is the major source of decoherence in Josephson charge qubits. We are able to derive an exact expression for the time evolution of the reduced density matrix. (orig.)

  2. Structural Technology Evaluation and Analysis Program (STEAP) Delivery Order 0042: Development of the Equivalent Overload Model, Demonstration of the Failure of Superposition, and Relaxation/Redistribution Measurement

    Science.gov (United States)

    2011-09-01

    A simple empirically derived scale factor is proposed to capture the fatigue-life benefits of the cold-working process, with retro-dictions correlated against measured fatigue lives for naturally occurring cracks. Process specifications allow significant variations in the cold-work level, from 3 to 5% for a nominal 4% cold-worked hole. The direct use of the empirically derived scale...

  3. Decoherence of superposition states in trapped ions

    CSIR Research Space (South Africa)

    Uys, H

    2010-09-01

    Full Text Available This paper investigates the decoherence of superpositions of hyperfine states of 9Be+ ions due to spontaneous scattering of off-resonant light. It was found that, contrary to conventional wisdom, elastic Rayleigh scattering can have major...

  4. Reducing Uncertainty: Implementation of Heisenberg Principle to Measure Company Performance

    Directory of Open Access Journals (Sweden)

    Anna Svirina

    2015-08-01

    Full Text Available The paper addresses the problem of reducing uncertainty in the estimation of future company performance, which results from the wide range of probable efficiencies of an enterprise's intangible assets. To reduce this uncertainty, the paper suggests using quantum economy principles, i.e. applying the Heisenberg principle to measure the efficiency and potential of a company's intangible assets. It is proposed that, for intangibles, it is not possible to estimate both potential and efficiency at a given point in time. To support this thesis, data on resource potential and efficiency from mid-Russian companies were evaluated within a deterministic approach, which did not allow the probability of achieving a certain resource efficiency to be evaluated, and within a quantum approach, which allowed estimation of the central point around which the probable efficiency of resources is concentrated. These approaches were visualized by means of LabView software. It was shown that a deterministic approach should be used for estimating the performance of tangible assets, while for intangible assets the quantum approach allows better prediction of future performance. On the basis of these findings, a holistic approach towards estimating company resource efficiency is proposed in order to reduce uncertainty in modelling company performance.

  5. Physical principles of thermoluminescence and recent developments in its measurement

    International Nuclear Information System (INIS)

    Levy, P.W.

    1974-01-01

    The physical principles which are the basis of thermoluminescence techniques for dating and authenticating archaeological and fine art objects are described in non-technical terms. Included is a discussion of the interaction of alpha particles, beta rays, i.e., energetic electrons, and gamma rays with solids, particularly electron-hole ion pair formation, and the trapping of charges by crystal imperfections. Also described is the charge-release process induced by heating and the accompanying emission of luminescence resulting from charge recombination and retrapping. The basic procedure for dating and/or authenticating an artifact is described in a ''how it is done'' manner. Lastly, recently developed apparatus is described for simultaneously measuring luminescent light intensity and wavelength and sample temperature. Examples of studies made with this ''3-D'' apparatus are given and applications to dating and authenticating are described. (U.S.)

  6. Principles of the measurement of residual stress by neutron diffraction

    Energy Technology Data Exchange (ETDEWEB)

    Webster, G A; Ezeilo, A N [Imperial Coll. of Science and Technology, London (United Kingdom). Dept. of Mechanical Engineering

    1996-11-01

    The presence of residual stresses in engineering components can significantly affect their load-carrying capacity and resistance to fracture. In order to quantify their effect it is necessary to know their magnitude and distribution. Neutron diffraction is the most suitable method of obtaining these stresses non-destructively in the interior of components. In this paper the principles of the technique are described. A monochromatic beam of neutrons, or time-of-flight measurements, can be employed. In each case, components of strain are determined directly from changes in the lattice spacings between crystal planes. Residual stresses can then be calculated from these strains. The experimental procedures for making the measurements are described and precautions for achieving reliable results discussed. These include the choice of crystal planes on which to make measurements, the extent of masking needed to identify a suitable sampling volume, the type of detector, and the alignment procedure. Methods of achieving a stress-free reference are also considered. A selection of practical examples is included to demonstrate the success of the technique. (author) 14 figs., 1 tab., 18 refs.
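
    The chain from measured lattice spacings to residual stresses described above can be made concrete in a short sketch. This is a minimal illustration assuming isotropic elasticity; the spacing values and elastic constants below are made up for the example, not taken from the paper.

      import numpy as np

      def lattice_strain(d, d0):
          """Elastic strain from measured spacing d and stress-free spacing d0."""
          return (d - d0) / d0

      # Illustrative measured spacings (angstrom) along three orthogonal
      # directions, plus a stress-free reference from, e.g., a powder sample.
      d0 = 1.1702
      d = np.array([1.1712, 1.1705, 1.1698])           # x, y, z directions
      eps = lattice_strain(d, d0)

      # Generalized Hooke's law for an isotropic solid (constants typical of
      # steel, illustration only): sigma_i = E/(1+nu)*(eps_i + nu/(1-2nu)*tr(eps))
      E, nu = 200e9, 0.28                              # Pa, dimensionless
      sigma = E / (1 + nu) * (eps + nu / (1 - 2 * nu) * eps.sum())
      print(sigma / 1e6)                               # principal stresses in MPa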

  7. Principles of the measurement of residual stress by neutron diffraction

    International Nuclear Information System (INIS)

    Webster, G.A.; Ezeilo, A.N.

    1996-01-01

    The presence of residual stresses in engineering components can significantly affect their load carrying capacity and resistance to fracture. In order to quantify their effect it is necessary to know their magnitude and distribution. Neutron diffraction is the most suitable method of obtaining these stresses non-destructively in the interior of components. In this paper the principles of the technique are described. A monochromatic beam of neutrons, or time of flight measurements, can be employed. In each case, components of strain are determined directly from changes in the lattice spacings between crystals. Residual stresses can then be calculated from these strains. The experimental procedures for making the measurements are described and precautions for achieving reliable results discussed. These include choice of crystal planes on which to make measurements, extent of masking needed to identify a suitable sampling volume, type of detector and alignment procedure. Methods of achieving a stress free reference are also considered. A selection of practical examples is included to demonstrate the success of the technique. (author) 14 figs., 1 tab., 18 refs

  8. Exclusion of identification by negative superposition

    Directory of Open Access Journals (Sweden)

    Takač Šandor

    2012-01-01

    Full Text Available The paper represents the first report of negative superposition in our country. A photograph of a randomly selected young living woman was superimposed on a previously discovered female skull. The computer program Adobe Photoshop 7.0 was used in this work. Digitized photographs of the skull and face, after being uploaded to a computer, were superimposed on each other and displayed on the monitor in order to assess their possible similarities or differences. Special attention was paid to matching the same anthropometrical points of the skull and face, as well as to following their contours. The process of fitting the skull and the photograph usually starts by setting the eyes in the correct position relative to the orbits. In this case, the gonions of the lower jaw extend beyond the contour of the face, and the gnathion is placed too high. Positioning the chin, mouth and nose in their correct anatomical positions cannot be achieved. All the difficulties associated with the superposition were recorded, with special emphasis on critical evaluation of the results of a negative superposition. Negative superposition has greater probative value (exclusion of identification) than positive superposition (possible identification). A 100% negative superposition is easily achieved, but a 100% positive one almost never. 'Each skull is unique and viewed from different perspectives is always a new challenge'. From this point of view, identification can be negative or of high probability.

  9. Environmental policy in brown coal mining in accordance with the precautionary measures principle and polluter pays principle

    International Nuclear Information System (INIS)

    Hamann, R.; Wacker, H.

    1993-01-01

    The precautionary measures principle and the polluter pays principle in brown coal mining are discussed. Ground water subsidence and landscape destruction are local or regional problems and thus easily detectable. If damage cannot be avoided, those who cause it are known and will pay. In spite of all this, the German brown coal industry is well able to compete on the world market with others who do not care about the environmental damage they may cause. (orig./HS) [de

  10. Measurement of carotid bifurcation pressure gradients using the Bernoulli principle.

    Science.gov (United States)

    Illig, K A; Ouriel, K; DeWeese, J A; Holen, J; Green, R M

    1996-04-01

    Current randomized prospective studies suggest that the degree of carotid stenosis is a critical element in deciding whether surgical or medical treatment is appropriate. Of potential interest is the actual pressure drop caused by the blockage, but no direct non-invasive means of quantifying the hemodynamic consequences of carotid artery stenoses currently exists. The present prospective study examined whether preoperative pulsed-Doppler duplex ultrasonographic velocity (v) measurements could be used to predict pressure gradients (ΔP) caused by carotid artery stenoses, and whether such measurements could be used to predict angiographic percent diameter reduction. Preoperative Doppler velocity and intraoperative direct pressure measurements were obtained, and percent diameter angiographic stenosis was measured in 76 consecutive patients who underwent 77 elective carotid endarterectomies. Using the Bernoulli principle (ΔP = 4v²), pressure gradients across the stenoses were calculated. The predicted ΔP, as well as absolute velocities and internal carotid artery/common carotid artery velocity ratios, were compared with the actual ΔP measured intraoperatively and with preoperative angiography and oculopneumoplethysmography (OPG) results. An end-diastolic velocity of ≥ 1 m/s and an end-diastolic internal carotid artery/common carotid artery velocity ratio of ≥ 10 predicted a 50% diameter angiographic stenosis with 100% specificity. Although statistical significance was reached, preoperative pressure gradients derived from the Bernoulli equation could not predict actual individual intraoperative pressure gradients with enough accuracy to allow decision making on an individual basis. Velocity measurements were as specific as and more sensitive than OPG results. ΔP as predicted by the Bernoulli equation is not sufficiently accurate at the carotid bifurcation to be useful for clinical decision making on an individual basis. However, end...
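
    The simplified Bernoulli relation used in the study maps a Doppler jet velocity in m/s directly to a pressure drop in mmHg. A minimal sketch; the velocities below are illustrative, apart from the paper's 1 m/s end-diastolic threshold.

      def bernoulli_gradient(v_ms):
          """Simplified Bernoulli relation: pressure drop in mmHg from a jet
          velocity in m/s, delta_P = 4 * v**2."""
          return 4.0 * v_ms ** 2

      # The paper's end-diastolic velocity threshold for a >= 50% stenosis:
      print(bernoulli_gradient(1.0))   # 4.0 mmHg
      print(bernoulli_gradient(2.5))   # 25.0 mmHg for a faster jet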

  11. Experimental superposition of orders of quantum gates

    Science.gov (United States)

    Procopio, Lorenzo M.; Moqanaki, Amir; Araújo, Mateus; Costa, Fabio; Alonso Calafell, Irati; Dowd, Emma G.; Hamel, Deny R.; Rozema, Lee A.; Brukner, Časlav; Walther, Philip

    2015-01-01

    Quantum computers achieve a speed-up by placing quantum bits (qubits) in superpositions of different states. However, it has recently been appreciated that quantum mechanics also allows one to 'superimpose different operations'. Furthermore, it has been shown that using a qubit to coherently control the gate order allows one to accomplish a task, determining if two gates commute or anti-commute, with fewer gate uses than any known quantum algorithm. Here we experimentally demonstrate this advantage, in a photonic context, using a second qubit to control the order in which two gates are applied to a first qubit. We create the required superposition of gate orders by using additional degrees of freedom of the photons encoding our qubits. The new resource we exploit can be interpreted as a superposition of causal orders, and could allow quantum algorithms to be implemented with an efficiency unlikely to be achieved on a fixed-gate-order quantum computer. PMID:26250107
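
    The commute-versus-anticommute task can be reproduced numerically. The sketch below is a simplified state-vector simulation (not the paper's photonic implementation): the two gate orders are applied on the branches of a control qubit, the branches are interfered as a Hadamard on the control would do, and a single control measurement gives the answer, assuming the promise that the gates either commute or anticommute.

      import numpy as np

      def commute_or_anticommute(U, V, psi):
          """Quantum-switch style test. Control |0>: apply U then V; control
          |1>: apply V then U. After a Hadamard on the control, the
          probability of finding the control in |-> is 0 if UV = VU and
          1 if UV = -VU (given the promise)."""
          branch0 = V @ U @ psi                 # U first, then V
          branch1 = U @ V @ psi                 # V first, then U
          minus = (branch0 - branch1) / 2.0     # |-> component after Hadamard
          return np.vdot(minus, minus).real     # P(control = |->)

      X = np.array([[0, 1], [1, 0]], dtype=complex)
      Z = np.diag([1.0, -1.0]).astype(complex)
      psi = np.array([1, 0], dtype=complex)

      print(commute_or_anticommute(X, X, psi))  # 0.0 -> the gates commute
      print(commute_or_anticommute(X, Z, psi))  # 1.0 -> XZ = -ZX, anticommute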

  12. Generation of picosecond pulsed coherent state superpositions

    DEFF Research Database (Denmark)

    Dong, Ruifang; Tipsmark, Anders; Laghaout, Amine

    2014-01-01

    We present the generation of approximated coherent state superpositions, referred to as Schrodinger cat states, by the process of subtracting single photons from picosecond pulsed squeezed states of light. The squeezed vacuum states are produced by spontaneous parametric down-conversion (SPDC) ... which exhibit non-Gaussian behavior. (C) 2014 Optical Society of America.

  13. Single-Atom Gating of Quantum State Superpositions

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Christopher

    2010-04-28

    The ultimate miniaturization of electronic devices will likely require local and coherent control of single electronic wavefunctions. Wavefunctions exist within both physical real space and an abstract state space with a simple geometric interpretation: this state space - or Hilbert space - is spanned by mutually orthogonal state vectors corresponding to the quantized degrees of freedom of the real-space system. Measurement of superpositions is akin to accessing the direction of a vector in Hilbert space, determining an angle of rotation equivalent to quantum phase. Here we show that an individual atom inside a designed quantum corral can control this angle, producing arbitrary coherent superpositions of spatial quantum states. Using scanning tunnelling microscopy and nanostructures assembled atom-by-atom we demonstrate how single spins and quantum mirages can be harnessed to image the superposition of two electronic states. We also present a straightforward method to determine the atom path enacting phase rotations between any desired state vectors. A single atom thus becomes a real-space handle for an abstract Hilbert space, providing a simple technique for coherent quantum state manipulation at the spatial limit of condensed matter.

  14. Acceleration Measurements Using Smartphone Sensors: Dealing with the Equivalence Principle

    OpenAIRE

    Monteiro, Martín; Cabeza, Cecilia; Martí, Arturo C.

    2014-01-01

    Acceleration sensors built into smartphones, iPads or tablets can conveniently be used in the physics laboratory. By virtue of the equivalence principle, a sensor fixed in a non-inertial reference frame cannot discern between a gravitational field and an accelerated system. Accordingly, acceleration values read by these sensors must be corrected for the gravitational component. A physical pendulum was studied by way of example, and absolute acceleration and rotation angle values were derived...

  15. Toward quantum superposition of living organisms

    International Nuclear Information System (INIS)

    Romero-Isart, Oriol; Cirac, J Ignacio; Juan, Mathieu L; Quidant, Romain

    2010-01-01

    The most striking feature of quantum mechanics is the existence of superposition states, where an object appears to be in different situations at the same time. The existence of such states has been previously tested with small objects, such as atoms, ions, electrons and photons (Zoller et al 2005 Eur. Phys. J. D 36 203-28), and even with molecules (Arndt et al 1999 Nature 401 680-2). More recently, it has been shown that it is possible to create superpositions of collections of photons (Deleglise et al 2008 Nature 455 510-14), atoms (Hammerer et al 2008 arXiv:0807.3358) or Cooper pairs (Friedman et al 2000 Nature 406 43-6). Very recent progress in optomechanical systems may soon allow us to create superpositions of even larger objects, such as micro-sized mirrors or cantilevers (Marshall et al 2003 Phys. Rev. Lett. 91 130401; Kippenberg and Vahala 2008 Science 321 1172-6; Marquardt and Girvin 2009 Physics 2 40; Favero and Karrai 2009 Nature Photon. 3 201-5), and thus to test quantum mechanical phenomena at larger scales. Here we propose a method to cool down and create quantum superpositions of the motion of sub-wavelength, arbitrarily shaped dielectric objects trapped inside a high-finesse cavity at a very low pressure. Our method is ideally suited for the smallest living organisms, such as viruses, which survive under low-vacuum pressures (Rothschild and Mancinelli 2001 Nature 406 1092-101) and optically behave as dielectric objects (Ashkin and Dziedzic 1987 Science 235 1517-20). This opens up the possibility of testing the quantum nature of living organisms by creating quantum superposition states in very much the same spirit as the original Schroedinger's cat 'gedanken' paradigm (Schroedinger 1935 Naturwissenschaften 23 807-12, 823-8, 844-9). We anticipate that our paper will be a starting point for experimentally addressing fundamental questions, such as the role of life and consciousness in quantum mechanics.

  16. Toward quantum superposition of living organisms

    Energy Technology Data Exchange (ETDEWEB)

    Romero-Isart, Oriol; Cirac, J Ignacio [Max-Planck-Institut fuer Quantenoptik, Hans-Kopfermann-Strasse 1, D-85748, Garching (Germany); Juan, Mathieu L; Quidant, Romain [ICFO-Institut de Ciencies Fotoniques, Mediterranean Technology Park, Castelldefels, Barcelona 08860 (Spain)], E-mail: oriol.romero-isart@mpq.mpg.de

    2010-03-15

    The most striking feature of quantum mechanics is the existence of superposition states, where an object appears to be in different situations at the same time. The existence of such states has been previously tested with small objects, such as atoms, ions, electrons and photons (Zoller et al 2005 Eur. Phys. J. D 36 203-28), and even with molecules (Arndt et al 1999 Nature 401 680-2). More recently, it has been shown that it is possible to create superpositions of collections of photons (Deleglise et al 2008 Nature 455 510-14), atoms (Hammerer et al 2008 arXiv:0807.3358) or Cooper pairs (Friedman et al 2000 Nature 406 43-6). Very recent progress in optomechanical systems may soon allow us to create superpositions of even larger objects, such as micro-sized mirrors or cantilevers (Marshall et al 2003 Phys. Rev. Lett. 91 130401; Kippenberg and Vahala 2008 Science 321 1172-6; Marquardt and Girvin 2009 Physics 2 40; Favero and Karrai 2009 Nature Photon. 3 201-5), and thus to test quantum mechanical phenomena at larger scales. Here we propose a method to cool down and create quantum superpositions of the motion of sub-wavelength, arbitrarily shaped dielectric objects trapped inside a high-finesse cavity at a very low pressure. Our method is ideally suited for the smallest living organisms, such as viruses, which survive under low-vacuum pressures (Rothschild and Mancinelli 2001 Nature 406 1092-101) and optically behave as dielectric objects (Ashkin and Dziedzic 1987 Science 235 1517-20). This opens up the possibility of testing the quantum nature of living organisms by creating quantum superposition states in very much the same spirit as the original Schroedinger's cat 'gedanken' paradigm (Schroedinger 1935 Naturwissenschaften 23 807-12, 823-8, 844-9). We anticipate that our paper will be a starting point for experimentally addressing fundamental questions, such as the role of life and consciousness in quantum mechanics.

  17. Thermography. Principles and measurements; Thermographie. Principes et mesure

    Energy Technology Data Exchange (ETDEWEB)

    Pajani, D. [Ecole Centrale de Lyon, 69 - Ecully (France)

    2001-09-01

    Thermography is a technique which provides the thermal image of a given scene over a determined spectral domain. Infrared thermography is the best-known and most widely used thermographic technique, but this article deals with thermographic measurements in general and over a wider part of the radiation spectrum: 1 - general considerations: terminology, measurement of fluxes and temperatures; 2 - radiation (emission and reception) and radiative properties of materials: basic notions, the simplified radiometer, radiative properties of materials; 3 - thermographic measurements: general considerations, calibration, the radiometric measurement situation, from the radiometric measurement to the thermometric and thermographic measurements, measurement uncertainties. (J.S.)

  18. Quantum superposition of massive objects and collapse models

    International Nuclear Information System (INIS)

    Romero-Isart, Oriol

    2011-01-01

    We analyze the requirements to test some of the most paradigmatic collapse models with a protocol that prepares quantum superpositions of massive objects. This consists of coherently expanding the wave function of a ground-state-cooled mechanical resonator, performing a squared position measurement that acts as a double slit, and observing interference after further evolution. The analysis is performed in a general framework and takes into account only unavoidable sources of decoherence: blackbody radiation and scattering of environmental particles. We also discuss the limitations imposed by the experimental implementation of this protocol using cavity quantum optomechanics with levitating dielectric nanospheres.

  19. Quantum superposition of massive objects and collapse models

    Energy Technology Data Exchange (ETDEWEB)

    Romero-Isart, Oriol [Max-Planck-Institut fuer Quantenoptik, Hans-Kopfermann-Str. 1, D-85748 Garching (Germany)

    2011-11-15

    We analyze the requirements to test some of the most paradigmatic collapse models with a protocol that prepares quantum superpositions of massive objects. This consists of coherently expanding the wave function of a ground-state-cooled mechanical resonator, performing a squared position measurement that acts as a double slit, and observing interference after further evolution. The analysis is performed in a general framework and takes into account only unavoidable sources of decoherence: blackbody radiation and scattering of environmental particles. We also discuss the limitations imposed by the experimental implementation of this protocol using cavity quantum optomechanics with levitating dielectric nanospheres.

  20. Principles of planar near-field antenna measurements

    CERN Document Server

    Gregson, Stuart; Parini, Clive

    2007-01-01

    This single volume provides a comprehensive introduction and explanation of both the theory and practice of 'Planar Near-Field Antenna Measurement' from its basic postulates and assumptions, to the intricacies of its deployment in complex and demanding measurement scenarios.

  1. The four principles: can they be measured and do they predict ethical decision making?

    Science.gov (United States)

    Page, Katie

    2012-05-20

    The four principles of Beauchamp and Childress--autonomy, non-maleficence, beneficence and justice--have been extremely influential in the field of medical ethics, and are fundamental for understanding the current approach to ethical assessment in health care. This study tests whether these principles can be quantitatively measured on an individual level, and then subsequently if they are used in the decision making process when individuals are faced with ethical dilemmas. The Analytic Hierarchy Process was used as a tool for the measurement of the principles. Four scenarios, which involved conflicts between the medical ethical principles, were presented to participants who then made judgments about the ethicality of the action in the scenario, and their intentions to act in the same manner if they were in the situation. Individual preferences for these medical ethical principles can be measured using the Analytic Hierarchy Process. This technique provides a useful tool in which to highlight individual medical ethical values. On average, individuals have a significant preference for non-maleficence over the other principles, however, and perhaps counter-intuitively, this preference does not seem to relate to applied ethical judgements in specific ethical dilemmas. People state they value these medical ethical principles but they do not actually seem to use them directly in the decision making process. The reasons for this are explained through the lack of a behavioural model to account for the relevant situational factors not captured by the principles. The limitations of the principles in predicting ethical decision making are discussed.

  2. Measurement Invariance: A Foundational Principle for Quantitative Theory Building

    Science.gov (United States)

    Nimon, Kim; Reio, Thomas G., Jr.

    2011-01-01

    This article describes why measurement invariance is a critical issue to quantitative theory building within the field of human resource development. Readers will learn what measurement invariance is and how to test for its presence using techniques that are accessible to applied researchers. Using data from a LibQUAL+[TM] study of user…

  3. THE DEVELOPMENT OF AN INSTRUMENT FOR MEASURING THE UNDERSTANDING OF PROFIT-MAXIMIZING PRINCIPLES.

    Science.gov (United States)

    MCCORMICK, FLOYD G.

    The purpose of the study was to develop an instrument for measuring the understanding of profit-maximizing principles in farm management, with implications for vocational agriculture. Principles were identified from literature selected by agricultural economists. Forty-five multiple-choice questions were refined on the basis of the results of three pretests and…

  4. First principle active neutron coincidence counting measurements of uranium oxide

    Energy Technology Data Exchange (ETDEWEB)

    Goddard, Braden, E-mail: goddard.braden@gmail.com [Nuclear Security Science and Policy Institute, Texas A and M University, College Station, Texas 77843 (United States); Charlton, William [Nuclear Security Science and Policy Institute, Texas A and M University, College Station, Texas 77843 (United States); Peerani, Paolo [European Commission, EC-JRC-ITU, Ispra (Italy)

    2014-03-01

    Uranium is present in most nuclear fuel cycle facilities, ranging from uranium mines, enrichment plants, fuel fabrication facilities, and nuclear reactors to reprocessing plants. The isotopic, chemical, and geometric composition of uranium can vary significantly between these facilities, depending on the application and type of facility. Examples of this variation are: enrichments varying from depleted (∼0.2 wt% ²³⁵U) to highly enriched (>20 wt% ²³⁵U); compositions consisting of U₃O₈, UO₂, UF₆, metallic, and ceramic forms; geometries ranging from plates to cans and rods; and masses ranging from a 500 kg fuel assembly down to a few-gram fuel pellet. Since ²³⁵U is a fissile material, it is routinely safeguarded in these facilities. Current techniques for quantifying the ²³⁵U mass in a sample include neutron coincidence counting. One of the main disadvantages of this technique is that it requires a known standard of representative geometry and composition for calibration, which opens up a pathway for potentially erroneous declarations by the State and reduces the effectiveness of safeguards. In order to address this weakness, the authors have developed a neutron coincidence counting technique which uses the first-principle point model developed by Boehnel instead of the "known standard" method. This technique was primarily tested through simulations of 1000 g U₃O₈ samples using the Monte Carlo N-Particle eXtended (MCNPX) code. The results of these simulations showed good agreement between the simulated and exact ²³⁵U sample masses.

  5. Deterministic preparation of superpositions of vacuum plus one photon by adaptive homodyne detection: experimental considerations

    International Nuclear Information System (INIS)

    Pozza, Nicola Dalla; Wiseman, Howard M; Huntington, Elanor H

    2015-01-01

    The preparation stage of optical qubits is an essential task in all experimental setups employed for the test and demonstration of quantum optics principles. We consider a deterministic protocol for the preparation of qubits as superpositions of the vacuum and one-photon number states, which has the advantage of reducing the amount of resources required via phase-sensitive measurements using a local oscillator ('dyne detection'). We investigate the performance of the protocol using different phase measurement schemes: homodyne, heterodyne, and adaptive dyne detection (involving a feedback loop). First, we define a suitable figure of merit for the prepared state and obtain an analytical expression for it in terms of the phase measurement considered. Further, we study limitations that the phase measurement can exhibit, such as delay or limited resources in the feedback strategy. Finally, we evaluate the figure of merit of the protocol for different mode shapes readily available in an experimental setup. We show that even in the presence of such limitations simple feedback algorithms can perform surprisingly well, outperforming protocols that employ simple homodyne or heterodyne schemes. (paper)

  6. PRINCIPLE OF SKEW QUADRUPOLE MODULATION TO MEASURE BETATRON COUPLING

    International Nuclear Information System (INIS)

    LUO, Y.; PILAT, F.; ROSER, T.

    2004-01-01

    The measurement of residual betatron coupling via skew quadrupole modulation is a new diagnostic technique that has been developed and tested at the Relativistic Heavy Ion Collider (RHIC) as a very promising method for linear decoupling on the ramp. By modulating the strengths of different skew quadrupole families, the two eigentunes are precisely measured with the phase-lock-loop system. The projections of the residual coupling coefficient onto the skew quadrupole coupling modulation directions are determined, and the residual linear coupling can be corrected according to the measurement. An analytical solution for skew quadrupole modulation based on a Hamiltonian perturbation approximation is given, and a simulation code using a smooth accelerator model has also been developed. Some issues concerning the practical application of this technique are discussed.

  7. Layer thickness measurement using the X-ray fluorescence principle

    International Nuclear Information System (INIS)

    Mengelkamp, B.

    1980-01-01

    Curium-244, with a gamma energy of about 15.5 keV, is used as the excitation emitter for contactless and continuous measurement of the thickness of metallic coatings on iron strip. Soft gamma radiation is absorbed in matter via the photoelectric effect, generating element-specific X-ray fluorescence radiation that is emitted in all directions. For iron it amounts to 6.4 keV and is measured with an ionisation chamber specific to this energy range. With increasing atomic number of the elements, the energy of the fluorescence radiation increases, and hence also the output signal of the detector. The prerequisites for a usable measuring effect are an element separation of at least two atomic numbers and a coating thickness in an optimum range. A signal dependent on the thickness of the coating is produced either by absorption of the iron radiation (absorption method - aluminium and tin) or by build-up radiation from the coating material (emission method - zinc and lead). (orig./GSCH) [de
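
    For the absorption method described above, the coating thickness follows from the attenuation of the 6.4 keV iron fluorescence line. A minimal sketch assuming simple Beer-Lambert attenuation; the attenuation coefficient and intensity ratio are invented for the example.

      import numpy as np

      def thickness_from_absorption(I, I0, mu):
          """Absorption method: the substrate fluorescence is attenuated by
          the coating as I = I0 * exp(-mu * t); solve for the thickness t."""
          return -np.log(I / I0) / mu

      mu = 1.5e5      # effective linear attenuation coefficient at 6.4 keV, 1/m (illustrative)
      ratio = 0.80    # measured coated-to-bare intensity ratio I/I0 (illustrative)
      print(thickness_from_absorption(ratio, 1.0, mu) * 1e6, "micrometres")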

  8. Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle

    Science.gov (United States)

    Ettl, Svenja

    2015-04-01

    'Flying Triangulation' (FlyTri) is a recently developed principle which allows for a motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.

  9. Physical and measuring principles of nuclear well logging techniques

    International Nuclear Information System (INIS)

    Loetzsch, U.; Winkler, R.

    1981-01-01

    Proceeding from the general task of nuclear geophysics as a special discipline of applied geophysics, the essential physical problems of nuclear well logging techniques are considered. In particular, the quantitative relationships between measured values and the geological parameters of interest are discussed, taking into account internal and external perturbation parameters. From this study, the technological requirements for radiation sources and their shielding, for detectors, for the electronic circuits in logging tools, for signal transmission by cable, and for recording equipment are derived and explained using the examples of gamma-gamma and neutron-neutron logging. (author)

  10. Entanglement of arbitrary superpositions of modes within two-dimensional orbital angular momentum state spaces

    International Nuclear Information System (INIS)

    Jack, B.; Leach, J.; Franke-Arnold, S.; Ireland, D. G.; Padgett, M. J.; Yao, A. M.; Barnett, S. M.; Romero, J.

    2010-01-01

    We use spatial light modulators (SLMs) to measure correlations between arbitrary superpositions of orbital angular momentum (OAM) states generated by spontaneous parametric down-conversion. Our technique allows us to fully access a two-dimensional OAM subspace described by a Bloch sphere, within the higher-dimensional OAM Hilbert space. We quantify the entanglement through violations of a Bell-type inequality for pairs of modal superpositions that lie on equatorial, polar, and arbitrary great circles of the Bloch sphere. Our work shows that SLMs can be used to measure arbitrary spatial states with a fidelity sufficient for appropriate quantum information processing systems.

  11. Thermalization as an Invisibility Cloak for Fragile Quantum Superpositions

    OpenAIRE

    Hahn, Walter; Fine, Boris V.

    2017-01-01

    We propose a method for protecting fragile quantum superpositions in many-particle systems from dephasing by external classical noise. We call superpositions "fragile" if dephasing occurs particularly fast, because the noise couples very differently to the superposed states. The method consists of letting a quantum superposition evolve under the internal thermalization dynamics of the system, followed by a time reversal manipulation known as Loschmidt echo. The thermalization dynamics makes t...

  12. Authentication Protocol using Quantum Superposition States

    Energy Technology Data Exchange (ETDEWEB)

    Kanamori, Yoshito [University of Alaska; Yoo, Seong-Moo [University of Alabama, Huntsville; Gregory, Don A. [University of Alabama, Huntsville; Sheldon, Frederick T [ORNL

    2009-01-01

    When it became known that quantum computers could break the RSA (named for its creators - Rivest, Shamir, and Adleman) encryption algorithm in polynomial time, quantum cryptography began to be actively studied. Other classical cryptographic algorithms are only secure when malicious users do not have sufficient computational power to break security within a practical amount of time. Recently, many quantum authentication protocols sharing quantum entangled particles between communicators have been proposed, providing unconditional security. An issue caused by sharing quantum entangled particles is that it may not be simple to apply these protocols to authenticate a specific user in a group of many users. An authentication protocol using quantum superposition states instead of quantum entangled particles is proposed. The random number shared between a sender and a receiver can be used for classical encryption after the authentication has succeeded. The proposed protocol can be implemented with the current technologies we introduce in this paper.

  13. The four principles: Can they be measured and do they predict ethical decision making?

    Directory of Open Access Journals (Sweden)

    Page Katie

    2012-05-01

    Full Text Available Abstract Background The four principles of Beauchamp and Childress - autonomy, non-maleficence, beneficence and justice - have been extremely influential in the field of medical ethics, and are fundamental for understanding the current approach to ethical assessment in health care. This study tests whether these principles can be quantitatively measured on an individual level, and then subsequently if they are used in the decision making process when individuals are faced with ethical dilemmas. Methods The Analytic Hierarchy Process was used as a tool for the measurement of the principles. Four scenarios, which involved conflicts between the medical ethical principles, were presented to participants who then made judgments about the ethicality of the action in the scenario, and their intentions to act in the same manner if they were in the situation. Results Individual preferences for these medical ethical principles can be measured using the Analytic Hierarchy Process. This technique provides a useful tool in which to highlight individual medical ethical values. On average, individuals have a significant preference for non-maleficence over the other principles, however, and perhaps counter-intuitively, this preference does not seem to relate to applied ethical judgements in specific ethical dilemmas. Conclusions People state they value these medical ethical principles but they do not actually seem to use them directly in the decision making process. The reasons for this are explained through the lack of a behavioural model to account for the relevant situational factors not captured by the principles. The limitations of the principles in predicting ethical decision making are discussed.
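
    The measurement step of the Analytic Hierarchy Process used in the study reduces to extracting priority weights from a reciprocal pairwise-comparison matrix, conventionally as its normalized principal eigenvector. A minimal sketch with an invented comparison matrix (not the study's data) in which non-maleficence dominates, consistent with the reported average preference.

      import numpy as np

      # Pairwise comparisons on the Saaty 1-9 scale among the four principles
      # (autonomy, non-maleficence, beneficence, justice). A[i, j] states how
      # much more important principle i is than j; A must be reciprocal.
      A = np.array([
          [1.0, 1/3, 1.0, 2.0],
          [3.0, 1.0, 3.0, 4.0],
          [1.0, 1/3, 1.0, 2.0],
          [1/2, 1/4, 1/2, 1.0],
      ])

      vals, vecs = np.linalg.eig(A)
      principal = vecs[:, np.argmax(vals.real)].real   # Perron eigenvector
      weights = principal / principal.sum()            # AHP priority weights
      print(dict(zip(["autonomy", "non-maleficence", "beneficence", "justice"],
                     weights.round(3))))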

  14. The four principles: Can they be measured and do they predict ethical decision making?

    Science.gov (United States)

    2012-01-01

    Background The four principles of Beauchamp and Childress - autonomy, non-maleficence, beneficence and justice - have been extremely influential in the field of medical ethics, and are fundamental for understanding the current approach to ethical assessment in health care. This study tests whether these principles can be quantitatively measured on an individual level, and then subsequently if they are used in the decision making process when individuals are faced with ethical dilemmas. Methods The Analytic Hierarchy Process was used as a tool for the measurement of the principles. Four scenarios, which involved conflicts between the medical ethical principles, were presented to participants who then made judgments about the ethicality of the action in the scenario, and their intentions to act in the same manner if they were in the situation. Results Individual preferences for these medical ethical principles can be measured using the Analytic Hierarchy Process. This technique provides a useful tool in which to highlight individual medical ethical values. On average, individuals have a significant preference for non-maleficence over the other principles, however, and perhaps counter-intuitively, this preference does not seem to relate to applied ethical judgements in specific ethical dilemmas. Conclusions People state they value these medical ethical principles but they do not actually seem to use them directly in the decision making process. The reasons for this are explained through the lack of a behavioural model to account for the relevant situational factors not captured by the principles. The limitations of the principles in predicting ethical decision making are discussed. PMID:22606995

  15. Generation of optical coherent state superpositions for quantum information processing

    DEFF Research Database (Denmark)

    Tipsmark, Anders

    2012-01-01

    In this project, titled "Generation of optical coherent state superpositions for quantum information processing", the goal has been to generate optical cat states. This is a quantum mechanical superposition of two coherent states with large amplitude. Such a state is...

  16. Teleportation of Unknown Superpositions of Collective Atomic Coherent States

    Institute of Scientific and Technical Information of China (English)

    ZHENG ShiBiao

    2001-01-01

    We propose a scheme to teleport an unknown superposition of two atomic coherent states with different phases. Our scheme is based on resonant and dispersive atom-field interaction. It provides, for the first time, a possibility of teleporting macroscopic superposition states of many atoms.

  17. Adiabatic rotation, quantum search, and preparation of superposition states

    International Nuclear Information System (INIS)

    Siu, M. Stewart

    2007-01-01

    We introduce the idea of using adiabatic rotation to generate superpositions of a large class of quantum states. For quantum computing this is an interesting alternative to the well-studied 'straight line' adiabatic evolution. In ways that complement recent results, we show how to efficiently prepare three types of states: Kitaev's toric code state, the cluster state of the measurement-based computation model, and the history state used in the adiabatic simulation of a quantum circuit. We also show that the method, when adapted for quantum search, provides quadratic speedup as other optimal methods do with the advantages that the problem Hamiltonian is time independent and that the energy gap above the ground state is strictly nondecreasing with time. Likewise the method can be used for optimization as an alternative to the standard adiabatic algorithm

  18. New principle for measuring arterial blood oxygenation, enabling motion-robust remote monitoring

    OpenAIRE

    Mark van Gastel; Sander Stuijk; Gerard de Haan

    2016-01-01

    Finger-oximeters are ubiquitously used for patient monitoring in hospitals worldwide. Recently, remote measurement of arterial blood oxygenation (SpO2) with a camera has been demonstrated. Both contact and remote measurements, however, require the subject to remain static for accurate SpO2 values. This is due to the use of the common ratio-of-ratios measurement principle that measures the relative pulsatility at different wavelengths. Since the amplitudes are small, they are easily corrupted ...

  19. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    Science.gov (United States)

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
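
    For reference, the conventional ordinary least-squares superposition that THESEUS improves upon can be written compactly with the SVD-based Kabsch algorithm. The sketch below uses uniform weighting of all atoms, which is precisely the assumption that THESEUS's likelihood-based down-weighting of variable regions relaxes; the demo coordinates are random.

      import numpy as np

      def lsq_superpose(X, Y):
          """Ordinary least-squares superposition (Kabsch): rotation R and
          translations minimizing sum_i |x_i - R y_i|^2 for paired points."""
          xc, yc = X.mean(axis=0), Y.mean(axis=0)
          X0, Y0 = X - xc, Y - yc
          U, S, Vt = np.linalg.svd(Y0.T @ X0)
          d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
          R = (U * [1.0, 1.0, d]) @ Vt
          Yfit = Y0 @ R + xc                       # Y superposed onto X
          rmsd = np.sqrt(((X - Yfit) ** 2).sum() / len(X))
          return R, rmsd

      # Demo: a rigidly rotated copy superposes back to essentially zero RMSD.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(10, 3))
      t = 0.7
      Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0, 0.0, 1.0]])
      print(lsq_superpose(X, X @ Rz)[1])           # ~0.0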

  20. Electrical and electronic principles

    CERN Document Server

    Knight, S A

    1991-01-01

    Electrical and Electronic Principles, 2, Second Edition covers the syllabus requirements of BTEC Unit U86/329, including the principles of control systems and elements of data transmission. The book first tackles series and parallel circuits, electrical networks, and capacitors and capacitance. Discussions focus on flux density, electric force, permittivity, Kirchhoff's laws, superposition theorem, arrangement of resistors, internal resistance, and powers in a circuit. The text then takes a look at capacitors in circuit, magnetism and magnetization, electromagnetic induction, and alternating v

  1. New principle for measuring arterial blood oxygenation, enabling motion-robust remote monitoring.

    Science.gov (United States)

    van Gastel, Mark; Stuijk, Sander; de Haan, Gerard

    2016-12-07

    Finger-oximeters are ubiquitously used for patient monitoring in hospitals worldwide. Recently, remote measurement of arterial blood oxygenation (SpO2) with a camera has been demonstrated. Both contact and remote measurements, however, require the subject to remain static for accurate SpO2 values. This is due to the use of the common ratio-of-ratios measurement principle that measures the relative pulsatility at different wavelengths. Since the amplitudes are small, they are easily corrupted by motion-induced variations. We introduce a new principle that allows accurate remote measurements even during significant subject motion. We demonstrate the main advantage of the principle, i.e. that the optimal signature remains the same even when the SNR of the PPG signal drops significantly due to motion or limited measurement area. The evaluation uses recordings with breath-holding events, which induce hypoxemia in healthy moving subjects. The events lead to clinically relevant SpO2 levels in the range 80-100%. The new principle is shown to greatly outperform current remote ratio-of-ratios based methods. The mean-absolute SpO2-error (MAE) is about 2 percentage-points during head movements, where the benchmark method shows a MAE of 24 percentage-points. Consequently, we claim ours to be the first method to reliably measure SpO2 remotely during significant subject motion.
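
    As context for the abstract above, here is a minimal sketch of the conventional ratio-of-ratios principle whose motion sensitivity the paper addresses (Python; the linear calibration constants are illustrative placeholders, not values from the paper):

        import numpy as np

        def spo2_ratio_of_ratios(ppg_red, ppg_ir, a=110.0, b=25.0):
            """Classic ratio-of-ratios SpO2 estimate from two PPG waveforms.

            Relative pulsatility (AC/DC) is measured at two wavelengths; motion
            corrupts the small AC amplitudes, which is the weakness the new
            principle avoids. a and b are device-specific calibration constants
            (illustrative values here).
            """
            # std/mean used as a crude AC/DC proxy for each wavelength
            R = (ppg_red.std() / ppg_red.mean()) / (ppg_ir.std() / ppg_ir.mean())
            return a - b * R  # empirical linear calibration, in percent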

  2. Principles of safety measures of sports events organizers without the involvement of police

    OpenAIRE

    Buchalová, Kateřina

    2013-01-01

    Title: Principles of safety measures of sports events organizers without the involvement of police. Objectives: The aim of this thesis is to describe the security measures taken by organizers of sporting events. Methods: The thesis is theoretical in style, based on a survey of the available sources and a comparative summary of the organizers' safety measures. Results: The work describes the activities of the organizers of sports events and the precautions that must be provided...

  3. Optimal simultaneous superpositioning of multiple structures with missing data.

    Science.gov (United States)

    Theobald, Douglas L; Steindel, Phillip A

    2012-08-01

    Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. dtheobald@brandeis.edu Supplementary data are available at Bioinformatics online.
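
    A schematic sketch of the E/M alternation the abstract describes, reusing the kabsch_superpose helper sketched after the THESEUS record earlier in this list (our simplified illustration, not the THESEUS 2.0 implementation):

        import numpy as np

        def em_superpose(coords, observed, n_iter=50):
            """coords: (K, N, 3) stack of K structures; observed: (K, N) mask.

            E-step: impute missing atoms from the current mean structure.
            M-step: superpose each imputed structure onto the mean, update it.
            """
            masked = np.where(observed[..., None], coords, np.nan)
            mean = np.nanmean(masked, axis=0)            # initial mean structure
            filled = np.where(observed[..., None], coords, mean)
            for _ in range(n_iter):
                fitted = np.stack([kabsch_superpose(x, mean) for x in filled])
                mean = fitted.mean(axis=0)               # M-step: update the mean
                filled = np.where(observed[..., None], fitted, mean)  # E-step
            return fitted, mean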

  4. Thermalization as an invisibility cloak for fragile quantum superpositions

    Science.gov (United States)

    Hahn, Walter; Fine, Boris V.

    2017-07-01

    We propose a method for protecting fragile quantum superpositions in many-particle systems from dephasing by external classical noise. We call superpositions "fragile" if dephasing occurs particularly fast, because the noise couples very differently to the superposed states. The method consists of letting a quantum superposition evolve under the internal thermalization dynamics of the system, followed by a time-reversal manipulation known as Loschmidt echo. The thermalization dynamics makes the superposed states almost indistinguishable during most of the above procedure. We validate the method by applying it to a cluster of spins ½.

  5. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau

    2013-03-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However, it has never been practically implemented using a commercial 4G system. This paper demonstrates our prototype in achieving the SCM using a standard 802.16 based testbed for scalable video transmissions. In particular, to implement the superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which aims to mimic the physical-layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.

  6. Field testing, comparison, and discussion of five aeolian sand transport measuring devices operating on different measuring principles

    NARCIS (Netherlands)

    Goossens, Dirk; Nolet, Corjan; Etyemezian, Vicken; Duarte-campos, Leonardo; Bakker, Gerben; Riksen, Michel

    2018-01-01

    Five types of sediment samplers designed to measure aeolian sand transport were tested during a wind erosion event on the Sand Motor, an area on the west coast of the Netherlands prone to severe wind erosion. Each of the samplers operates on a different principle. The MWAC (Modified Wilson And Cooke) ...

  7. Generating superpositions of higher order bessel beams [Conference paper

    CSIR Research Space (South Africa)

    Vasilyeu, R

    2009-10-01

    Full Text Available An experimental setup to generate a superposition of higher-order Bessel beams by means of a spatial light modulator and ring aperture is presented. The experimentally produced fields are in good agreement with those calculated theoretically....

  8. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau; Shihada, Basem; Pin-Han Ho

    2013-01-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However

  9. Quantum State Engineering Via Coherent-State Superpositions

    Science.gov (United States)

    Janszky, Jozsef; Adam, P.; Szabo, S.; Domokos, P.

    1996-01-01

    The quantum interference between the two parts of the optical Schrödinger-cat state makes it possible to construct a wide class of quantum states via discrete superpositions of coherent states. Even a small number of coherent states can approximate the given quantum states to high accuracy when the distance between the coherent states is optimized; e.g., a nearly perfect Fock state can be constructed by discrete superpositions of n + 1 coherent states lying in the vicinity of the vacuum state.

  10. A principle for the noninvasive measurement of steady-state heat transfer parameters in living tissues

    Directory of Open Access Journals (Sweden)

    S. Yu. Makarov

    2014-01-01

    Full Text Available Measuring the parameters of biological tissues (including in vivo) is of great importance for medical diagnostics. For example, the value of the blood perfusion parameter is associated with the state of the blood microcirculation system, whose functioning affects the state of the tissues of almost all organs. This work describes a previously proposed principle [1] in generalized terms. The principle is intended for noninvasive measurement of the parameters of stationary heat transfer in biological tissues. The results of some experiments (physical and numerical) are also presented. For noninvasive measurement of thermophysical parameters, a number of techniques have been developed that use a non-stationary thermal process in biological tissue [2][3]. But these techniques require collecting a lot of data to represent the time-dependent thermal signal, and subsequent processing with specialized algorithms is required for optimal selection of the parameters. The goal of this research is to develop an alternative approach that uses a stationary thermal process for noninvasive measurement of the parameters of stationary heat transfer in living tissues. A general principle can be formulated for measurement methods based on this approach. Namely, the variations (changes) of two physical quantities are measured in the experiment at the transition from one thermal stationary state to another. One of these two quantities unambiguously determines the stationary thermal field in the biological tissue under the specified experimental conditions, while the other is unambiguously determined by the thermal field. The parameters can then be found from the numerical (or analytical) functional dependencies linking the measured variations, because the dependencies contain the unknown parameters. The dependencies are expressed by the formula dq_i = f_i({p_j}, U_i) dU_i, where dq_i is a variation of a physical quantity q which is unambiguously determined from the...

  11. A method to study the characteristics of 3D dose distributions created by superposition of many intensity-modulated beams delivered via a slit aperture with multiple absorbing vanes

    International Nuclear Information System (INIS)

    Webb, S.; Oldham, M.

    1996-01-01

    Highly conformal dose distributions can be created by the superposition of many radiation fields from different directions, each with its intensity spatially modulated by the method known as tomotherapy. At the planning stage, the intensity of radiation of each beam element (or bixel) is determined by working out the effect of superposing the radiation through all bixels with the elemental dose distribution specified as that from a single bixel with all its neighbours closed (the 'independent-vane' (IV) model). However, at treatment-delivery stage, neighbouring bixels may not be closed. Instead the slit beam is delivered with parts of the beam closed for different periods of time to create the intensity modulation. As a result, the 3D dose distribution actually delivered will differ from that determined at the planning stage if the elemental beams do not obey the superposition principle. The purpose of this paper is to present a method to investigate and quantify the relation between planned and delivered 3D dose distributions. Two modes of inverse planning have been performed: (i) with a fit to the measured elemental dose distribution and (ii) with a 'stretched fit' obeying the superposition principle as in the PEACOCK 3D planning system. The actual delivery has been modelled as a series of component deliveries (CDs). The algorithm for determining the component intensities and the appropriate collimation conditions is specified. The elemental beam from the NOMOS MIMiC collimator is too narrow to obey the superposition principle although it can be 'stretched' and fitted to a superposition function. Hence there are differences between the IV plans made using modes (i) and (ii) and the raw and the stretched elemental beam, and also differences with CD delivery. This study shows that the differences between IV and CD dose distributions are smaller for mode (ii) inverse planning than for mode (i), somewhat justifying the way planning is done within PEACOCK. Using a
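
    The question the paper studies, namely how far the delivered slit beam departs from the planned superposition of elemental (bixel) beams, can be phrased compactly; the following is our simplified 1D illustration, not the authors' algorithm:

        import numpy as np

        def superposition_residual(elemental, slit, n_bixels, pitch):
            """RMS mismatch between a measured slit profile and the sum of
            shifted single-bixel profiles on a common 1D dose grid.

            elemental: dose profile of one open bixel (neighbours closed);
            slit: profile with all bixels open; pitch: bixel spacing in grid
            points. A zero residual means superposition holds exactly.
            """
            total = np.zeros_like(slit)
            for k in range(n_bixels):
                s = k * pitch
                total[s:s + elemental.size] += elemental[:slit.size - s]
            return np.sqrt(np.mean((total - slit) ** 2))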

  12. Linear dynamic analysis of arbitrary thin shells modal superposition by using finite element method

    International Nuclear Information System (INIS)

    Goncalves Filho, O.J.A.

    1978-11-01

    The linear dynamic behaviour of arbitrary thin shells is studied by the Finite Element Method. Plane triangular elements with eighteen degrees of freedom each are used. The general equations of motion are obtained from Hamilton's Principle and solved by the Modal Superposition Method. Viscous damping can be taken into account by means of percentages of the critical damping. An automatic computer program was developed to provide the vibratory properties and the dynamic response to several types of deterministic loadings, including temperature effects. The program was written in FORTRAN IV for the Burroughs B-6700 computer. (author)
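
    As an illustration of the modal superposition method named in the abstract, here is a minimal sketch for a suddenly applied constant load with fractional critical damping (Python with NumPy/SciPy; a generic textbook formulation, not the FORTRAN IV program described):

        import numpy as np
        from scipy.linalg import eigh

        def modal_step_response(M, K, F, t, zeta=0.02, n_modes=5):
            """Displacement history under a step load F by superposition of
            the lowest n_modes damped modes of the FE model (M, K).
            """
            w2, phi = eigh(K, M)                 # K phi = w^2 M phi, M-normalized
            w, phi = np.sqrt(w2[:n_modes]), phi[:, :n_modes]
            q = np.empty((t.size, n_modes))      # modal coordinates
            for i in range(n_modes):
                f = phi[:, i] @ F                # modal force
                wd = w[i] * np.sqrt(1.0 - zeta ** 2)
                # step response of a damped SDOF oscillator starting from rest
                q[:, i] = (f / w[i] ** 2) * (1.0 - np.exp(-zeta * w[i] * t)
                          * (np.cos(wd * t) + zeta * w[i] / wd * np.sin(wd * t)))
            return q @ phi.T                     # superpose modal contributions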

  13. Microwave measurement of electrical fields in different media – principles, methods and instrumentation

    International Nuclear Information System (INIS)

    Dankov, Plamen I (Sofia University St. Kliment Ohridski, Faculty of Physics, James Bourchier blvd., Sofia 1164, Bulgaria)

    2014-01-01

    This paper, presented in the frame of the 4th International Workshop and Summer School on Plasma Physics (IWSSPP'2010, Kiten, Bulgaria), is a brief review of the principles, methods and instrumentation of microwave measurements of electrical fields in different media. The main part of the paper describes the basic features of a variety of field sensors and antennas: narrow-band, broadband and ultra-wideband, miniaturized, reconfigurable and active sensors, etc. The main features and the applicability of these sensors for the determination of electric fields in different media are discussed. The last part of the paper presents the basic principles of using electromagnetic 3-D simulators for E-field measurement purposes. Two illustrative examples are given: the determination of the dielectric anisotropy of multi-layer materials and a discussion of the selectivity of the hairpin probe for determining the electron density in dense gaseous plasmas.

  14. First-Principles Definition and Measurement of Planetary Electromagnetic-Energy Budget

    Science.gov (United States)

    Mishchenko, Michael I.; Lock, James A.; Lacis, Andrew A.; Travis, Larry D.; Cairns, Brian

    2016-01-01

    The imperative to quantify the Earth's electromagnetic-energy budget with extremely high accuracy has been widely recognized but has never been formulated in the framework of fundamental physics. In this paper we give a first-principles definition of the planetary electromagnetic-energy budget using the Poynting-vector formalism and discuss how it can, in principle, be measured. Our derivation is based on an absolute minimum of theoretical assumptions, is free of outdated notions of phenomenological radiometry, and naturally leads to the conceptual formulation of an instrument called the double hemispherical cavity radiometer (DHCR). The practical measurement of the planetary energy budget would require flying a constellation of several dozen planet-orbiting satellites hosting identical well-calibrated DHCRs.

  15. Equivalence principle and quantum mechanics: quantum simulation with entangled photons.

    Science.gov (United States)

    Longhi, S

    2018-01-15

    Einstein's equivalence principle (EP) states the complete physical equivalence of a gravitational field and corresponding inertial field in an accelerated reference frame. However, to what extent the EP remains valid in non-relativistic quantum mechanics is a controversial issue. To avoid violation of the EP, Bargmann's superselection rule forbids a coherent superposition of states with different masses. Here we suggest a quantum simulation of non-relativistic Schrödinger particle dynamics in non-inertial reference frames, which is based on the propagation of polarization-entangled photon pairs in curved and birefringent optical waveguides and Hong-Ou-Mandel quantum interference measurement. The photonic simulator can emulate superposition of mass states, which would lead to violation of the EP.

  16. Basing of principles and methods of operation of radiometric control and measurement systems

    International Nuclear Information System (INIS)

    Onishchenko, A.M.

    1995-01-01

    Six basic stages of the optimization of radiometric systems, the methods of defining the preset components of the total error, and the choice of principles and methods of measurement are described in succession. The possibility of optimizing several stages simultaneously, returning to stages already passed, is shown. It is suggested that the methodical, instrumental, random and representativity components of the total error should be preset as equal, and that the largest of the components should be reduced first. A comparative table of 64 radiometric measurement methods rated by 11 quality indices is presented. 2 refs., 1 tab

  17. Density measurement by means of once-scattered gamma radiation: the ETG probe, principles and equipment

    International Nuclear Information System (INIS)

    Joergensen, J.L.; Oelgaard, P.L.; Berg, F.

    1987-01-01

    The Department of Electrophysics at the Technical University of Denmark and the Danish National Road Laboratory have together developed a new patent-claimed device for measuring the in situ density of materials. This report describes the principles of the system and some experimental results. The system is based on once-scattered gamma radiation. In a totally non-destructive and fast way it is possible to measure the density of layers up to 25 cm thick. Furthermore, an estimate of the density variation through the layer may be obtained. Thus the gauge represents a new generation of equipment for, e.g., compaction control of road constructions. (author)

  18. The Principle of Advertising as a Measure of the Essential Control of State Acts

    Directory of Open Access Journals (Sweden)

    Osvaldo Resende Neto

    2016-10-01

    Full Text Available Brazilian citizens have seen several scandals related to corruption, leading to an outcry for the adoption of effective measures to combat impunity. The principle of publicity emerges as an important tool for democratic control, extending far beyond the limits of public administration into management and procedural situations. The goal undertaken here is to outline the importance of publicity for the effectiveness of legal measures for the prevention and repression of the misuse of the exchequer. Using the inductive method, a systematic survey of the national bibliography was conducted, exploring existing and revoked legislation on the subject.

  19. Non-coaxial superposition of vector vortex beams.

    Science.gov (United States)

    Aadhi, A; Vaity, Pravin; Chithrabhanu, P; Reddy, Salla Gangi; Prabakar, Shashi; Singh, R P

    2016-02-10

    Vector vortex beams are classified into four types depending upon the spatial variation of their polarization vector. We have generated all four of these types of vector vortex beams by using a modified polarization Sagnac interferometer with a vortex lens. Further, we have studied the non-coaxial superposition of two vector vortex beams. It is observed that the superposition of two vector vortex beams with the same polarization singularity leads to a beam with another kind of polarization singularity in their interaction region. The results may be of importance for ultrahigh security of polarization-encrypted data that utilizes vector vortex beams and for multiple optical trapping with non-coaxial superposition of vector vortex beams. We verified our experimental results with theory.

  20. Entanglement and quantum superposition induced by a single photon

    Science.gov (United States)

    Lü, Xin-You; Zhu, Gui-Lei; Zheng, Li-Li; Wu, Ying

    2018-03-01

    We predict the occurrence of single-photon-induced entanglement and quantum superposition in a hybrid quantum model that introduces an optomechanical coupling into the Rabi model. The effect originates from the photon-dependent quantum property of the ground state featured by the proposed hybrid model. It is associated with a single-photon-induced quantum phase transition, and is immune to the A² term of the spin-field interaction. Moreover, the obtained quantum superposition state is actually a squeezed cat state, which can significantly enhance precision in quantum metrology. This work offers an approach to manipulating entanglement and quantum superposition with a single photon, which might have potential applications in the engineering of new single-photon quantum devices, and also fundamentally broadens the regime of cavity QED.

  1. Robust mesoscopic superposition of strongly correlated ultracold atoms

    International Nuclear Information System (INIS)

    Hallwood, David W.; Ernst, Thomas; Brand, Joachim

    2010-01-01

    We propose a scheme to create coherent superpositions of annular flow of strongly interacting bosonic atoms in a one-dimensional ring trap. The nonrotating ground state is coupled to a vortex state with mesoscopic angular momentum by means of a narrow potential barrier and an applied phase that originates from either rotation or a synthetic magnetic field. We show that superposition states in the Tonks-Girardeau regime are robust against single-particle loss due to the effects of strong correlations. The coupling between the mesoscopically distinct states scales much more favorably with particle number than in schemes relying on weak interactions, thus making particle numbers of hundreds or thousands feasible. Coherent oscillations induced by time variation of parameters may serve as a 'smoking gun' signature for detecting superposition states.

  2. Psychometric Principles in Measurement for Geoscience Education Research: A Climate Change Example

    Science.gov (United States)

    Libarkin, J. C.; Gold, A. U.; Harris, S. E.; McNeal, K.; Bowles, R.

    2015-12-01

    Understanding learning in geoscience classrooms requires that we use valid and reliable instruments aligned with intended learning outcomes. Nearly one hundred instruments assessing conceptual understanding in undergraduate science and engineering classrooms (often called concept inventories) have been published and are actively being used to investigate learning. The techniques used to develop these instruments vary widely, often with little attention to psychometric principles of measurement. This paper will discuss the importance of using psychometric principles to design, evaluate, and revise research instruments, with particular attention to the validity and reliability steps that must be undertaken to ensure that research instruments are providing meaningful measurement. An example from a climate change inventory developed by the authors will be used to exemplify the importance of validity and reliability, including the value of item response theory for instrument development. A 24-item instrument was developed based on published items, conceptions research, and instructor experience. Rasch analysis of over 1000 responses provided evidence for the removal of 5 items for misfit and one item for potential bias as measured via differential item functioning. The resulting 18-item instrument can be considered a valid and reliable measure based on pre- and post-implementation metrics. Consideration of the relationship between respondent demographics and concept inventory scores provides unique insight into the relationship between gender, religiosity, values and climate change understanding.

  3. The measurement of principled morality by the Kohlberg Moral Dilemma Questionnaire.

    Science.gov (United States)

    Heilbrun, A B; Georges, M

    1990-01-01

    The four stages preceding the postconventional level in the Kohlberg (1958, 1971, 1976) system of moral development are described as involving moral judgments that conform to external conditions of punishment, reward, social expectation, and conformity to the law. No special level of self-control seems necessary to behave in keeping with these conditions of external reinforcement. In contrast, the two stages of postconventional (principled) morality involve defiance of majority opinion and defiance of the law--actions that would seem to require greater self-control. This study was concerned with whether postconventional moral reasoning, as measured by the Kohlberg Moral Dilemma Questionnaire (MDQ), can be associated with higher self-control. If so, prediction of principled moral behavior from the MDQ would be based not only on postconventional moral reasoning but bolstered by the necessary level of self-control as well. College students who came the closest to postconventional moral reasoning showed better self-control than college students who were more conventional or preconventional in their moral judgments. These results support the validity of the MDQ for predicting principled moral behavior.

  4. Superposition of helical beams by using a Michelson interferometer.

    Science.gov (United States)

    Gao, Chunqing; Qi, Xiaoqing; Liu, Yidong; Weber, Horst

    2010-01-04

    The orbital angular momentum (OAM) of a helical beam is of great interest for high-density optical communication due to its infinite number of eigenstates. In this paper, an experimental setup for information encoding and decoding on OAM eigenstates is realized. A hologram designed by an iterative method is used to generate the helical beams, and a Michelson interferometer with two Porro prisms is used for the superposition of two helical beams. The experimental results of the collinear superposition of helical beams and the detection of their OAM eigenstates are presented.
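
    The petal-shaped interference pattern that makes such OAM superpositions detectable can be reproduced numerically; a minimal sketch follows (our illustration with an arbitrary ring envelope, not the authors' hologram design):

        import numpy as np

        def helical_superposition(l1, l2, size=256):
            """Intensity of two collinearly superposed helical beams (sketch).

            A helical beam of topological charge l carries the phase factor
            exp(1j*l*phi); superposing charges l1 and l2 produces |l1 - l2|
            azimuthal petals, the fingerprint used to detect OAM eigenstates.
            """
            x = np.linspace(-1.0, 1.0, size)
            X, Y = np.meshgrid(x, x)
            r, phi = np.hypot(X, Y), np.arctan2(Y, X)
            envelope = r * np.exp(-r**2 / 0.2)   # simple ring-shaped envelope
            field = envelope * (np.exp(1j * l1 * phi) + np.exp(1j * l2 * phi))
            return np.abs(field) ** 2            # shows |l1 - l2| bright petals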

  5. Quantum equivalence principle without mass superselection

    International Nuclear Information System (INIS)

    Hernandez-Coronado, H.; Okon, E.

    2013-01-01

    The standard argument for the validity of Einstein's equivalence principle in a non-relativistic quantum context involves the application of a mass superselection rule. The objective of this work is to show that, contrary to widespread opinion, the compatibility between the equivalence principle and quantum mechanics does not depend on the introduction of such a restriction. For this purpose, we develop a formalism based on the extended Galileo group, which allows for a consistent handling of superpositions of different masses, and show that, within such scheme, mass superpositions behave as they should in order to obey the equivalence principle. - Highlights: • We propose a formalism for consistently handling, within a non-relativistic quantum context, superpositions of states with different masses. • The formalism utilizes the extended Galileo group, in which mass is a generator. • The proposed formalism allows for the equivalence principle to be satisfied without the need of imposing a mass superselection rule

  6. Thermographic Phosphors for High Temperature Measurements: Principles, Current State of the Art and Recent Applications

    Directory of Open Access Journals (Sweden)

    Konstantinos Kontis

    2008-09-01

    Full Text Available This paper reviews the state of phosphor thermometry, focusing on developments in the past 15 years. The fundamental principles and theory are presented, and the various spectral and temporal modes, including the lifetime decay, rise time and intensity ratio, are discussed. The entire phosphor measurement system, including relative advantages to conventional methods, choice of phosphors, bonding techniques, excitation sources and emission detection, is reviewed. Special attention is given to issues that may arise at high temperatures. A number of recent developments and applications are surveyed, with examples including: measurements in engines, hypersonic wind tunnel experiments, pyrolysis studies and droplet/spray/gas temperature determination. They show the technique is flexible and successful in measuring temperatures where conventional methods may prove to be unsuitable.
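
    Of the modes reviewed, the lifetime-decay mode reduces to fitting an exponential decay and inverting a calibration curve; a minimal sketch under those assumptions (the calibration arrays are illustrative, not from the review):

        import numpy as np

        def phosphor_temperature(t, intensity, calib_tau, calib_T):
            """Lifetime-decay phosphor thermometry (sketch).

            Fits I(t) = I0 * exp(-t/tau) by a log-linear least-squares fit and
            converts the fitted lifetime tau to temperature by interpolating a
            calibration curve in which tau decreases with temperature T.
            """
            slope, _ = np.polyfit(t, np.log(intensity), 1)  # slope = -1/tau
            tau = -1.0 / slope
            # np.interp needs ascending x, so reverse the descending tau axis
            return np.interp(tau, calib_tau[::-1], calib_T[::-1])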

  7. Time-temperature superposition in viscous liquids

    DEFF Research Database (Denmark)

    Olsen, Niels Boye; Dyre, Jeppe; Christensen, Tage Emil

    2001-01-01

    with a reduced time definition based on a recently proposed expression for the relaxation time, where G∞ reflects the fictive temperature. All parameters entering the reduced time were determined from independent measurements of the frequency-dependent shear modulus of the equilibrium liquid.

  8. Linear Plasma Oscillation Described by Superposition of Normal Modes

    DEFF Research Database (Denmark)

    Pécseli, Hans

    1974-01-01

    The existence of steady‐state solutions to the linearized ion and electron Vlasov equation is demonstrated for longitudinal waves in an initially stable plasma. The evolution of an arbitrary initial perturbation can be described by superposition of these solutions. Some common approximations...

  9. Generating superpositions of higher–order Bessel beams [Journal article

    CSIR Research Space (South Africa)

    Vasilyeu, R

    2009-12-01

    Full Text Available The authors report the first experimental generation of the superposition of higher-order Bessel beams, by means of a spatial light modulator (SLM) and a ring slit aperture. They present illuminating a ring slit aperture with light which has...

  10. Spectral properties of superpositions of Ornstein-Uhlenbeck type processes

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Leonenko, N.N.

    2005-01-01

    Stationary processes with prescribed one-dimensional marginal laws and long-range dependence are constructed. The asymptotic properties of the spectral densities are studied. The possibility of Mittag-Leffler decay in the autocorrelation function of superpositions of Ornstein-Uhlenbeck type processes is proved.

  11. On some properties of the superposition operator on topological manifolds

    Directory of Open Access Journals (Sweden)

    Janusz Dronka

    2010-01-01

    Full Text Available In this paper the superposition operator in the space of vector-valued, bounded and continuous functions on a topological manifold is considered. The acting conditions and criteria of continuity and compactness are established. As an application, an existence result for the nonlinear Hammerstein integral equation is obtained.

  12. Fundamental principles of quantum theory

    International Nuclear Information System (INIS)

    Bugajski, S.

    1980-01-01

    After introducing general versions of three fundamental quantum postulates - the superposition principle, the uncertainty principle and the complementarity principle - the question of whether the three principles are sufficiently strong to restrict the general Mackey description of quantum systems to the standard Hilbert-space quantum theory is discussed. An example which shows that the answer must be negative is constructed. An abstract version of the projection postulate is introduced and it is demonstrated that it could serve as the missing physical link between the general Mackey description and the standard quantum theory. (author)

  13. Measures of Coupling between Neural Populations Based on Granger Causality Principle.

    Science.gov (United States)

    Kaminski, Maciej; Brzezicka, Aneta; Kaminski, Jan; Blinowska, Katarzyna J

    2016-01-01

    This paper briefly reviews the measures used to estimate neural synchronization in experimental settings. Our focus is on multivariate measures of dependence based on the Granger causality (G-causality) principle, their applications and performance with respect to robustness to noise, volume conduction, common driving, and the presence of a "weak node." Application of G-causality measures to EEG, intracranial signals and fMRI time series is addressed. G-causality based measures defined in the frequency domain allow the synchronization between neural populations and the directed propagation of their electrical activity to be determined. The time-varying G-causality based measure Short-time Directed Transfer Function (SDTF) supplies information on the dynamics of synchronization and the organization of neural networks. Inspection of effective connectivity patterns indicates a modular structure of neural networks, with stronger coupling within modules than between them. The hypothetical plausible mechanism of information processing, suggested by the identified synchronization patterns, is communication between tightly coupled modules intermitted by sparser interactions providing synchronization of distant structures.
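
    The core of the G-causality principle the review builds on can be stated in a few lines; here is a minimal time-domain sketch with a single lag (frequency-domain measures such as DTF are built from the same kind of multivariate AR fit, which this sketch does not cover):

        import numpy as np

        def granger_improvement(x, y):
            """One-lag time-domain Granger causality sketch (y -> x).

            Fits x[t] ~ x[t-1] (restricted) and x[t] ~ x[t-1] + y[t-1] (full)
            by least squares and returns the log ratio of residual variances;
            a positive value means the past of y improves the prediction of x.
            """
            xt, x1, y1 = x[1:], x[:-1], y[:-1]

            def resid_var(regressors):
                A = np.column_stack(regressors + [np.ones_like(xt)])
                coef, *_ = np.linalg.lstsq(A, xt, rcond=None)
                return np.var(xt - A @ coef)

            return np.log(resid_var([x1]) / resid_var([x1, y1]))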

  14. Measures of coupling between neural populations based on Granger causality principle

    Directory of Open Access Journals (Sweden)

    Maciej Kaminski

    2016-10-01

    Full Text Available This paper briefly reviews the measures used to estimate neural synchronization in experimental settings. Our focus is on multivariate measures of dependence based on the Granger causality (G-causality) principle, their applications and performance with respect to robustness to noise, volume conduction, common driving, and the presence of a "weak node". Application of G-causality measures to EEG, intracranial signals and fMRI time series is addressed. G-causality based measures defined in the frequency domain allow the synchronization between neural populations and the directed propagation of their electrical activity to be determined. The time-varying G-causality based measure Short-time Directed Transfer Function (SDTF) supplies information on the dynamics of synchronization and the organization of neural networks. Inspection of effective connectivity patterns indicates a modular structure of neural networks, with stronger coupling within modules than between them. The hypothetical plausible mechanism of information processing, suggested by the identified synchronization patterns, is communication between tightly coupled modules intermitted by sparser interactions providing synchronization of distant structures.

  15. Field testing, comparison, and discussion of five aeolian sand transport measuring devices operating on different measuring principles

    Science.gov (United States)

    Goossens, Dirk; Nolet, Corjan; Etyemezian, Vicken; Duarte-Campos, Leonardo; Bakker, Gerben; Riksen, Michel

    2018-06-01

    Five types of sediment samplers designed to measure aeolian sand transport were tested during a wind erosion event on the Sand Motor, an area on the west coast of the Netherlands prone to severe wind erosion. Each of the samplers operates on a different principle. The MWAC (Modified Wilson And Cooke) is a passive segmented trap. The modified Leatherman sampler is a passive vertically integrating trap. The Saltiphone is an acoustic sampler that registers grain impacts on a microphone. The Wenglor sampler is an optical sensor that detects particles as they pass through a laser beam. The SANTRI (Standalone AeoliaN Transport Real-time Instrument) detects particles travelling through an infrared beam, but in different channels each associated with a particular grain size spectrum. A procedure is presented to transform the data output, which is different for each sampler, to a common standard so that the samplers can be objectively compared and their relative efficiency calculated. Results show that the efficiency of the samplers is comparable despite the differences in operating principle and the instrumental and environmental uncertainties associated with working with particle samplers in field conditions. The ability of the samplers to register the temporal evolution of a wind erosion event is investigated. The strengths and weaknesses of the samplers are discussed. Some problems inherent to optical sensors are looked at in more detail. Finally, suggestions are made for further improvement of the samplers.

  16. Subpixelic measurement of large 1D displacements: principle, processing algorithms, performances and software.

    Science.gov (United States)

    Guelpa, Valérian; Laurent, Guillaume J; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric

    2014-03-12

    This paper presents a visual measurement method able to sense 1D rigid body displacements with very high resolutions, large ranges and high processing rates. Sub-pixelic resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations, leading to high resolution, while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 µs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated, which has to be compared with the 168 µm measurement range.
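
    The twin-period idea can be sketched compactly: each grid phase is known only modulo its own period, but the phase difference is unambiguous over the much larger synthetic period p1*p2/(p2 - p1). The following is our simplified single-bin Fourier illustration, not the authors' exact algorithm:

        import numpy as np

        def grid_displacement(profile, p1, p2):
            """Large-range 1D displacement from a twin-period pattern (sketch).

            profile: 1D intensity line containing two superposed gratings with
            slightly different pixel periods p1 < p2. The coarse estimate from
            the phase difference selects the integer fringe order; the fine
            phase of the first grating restores sub-pixel resolution.
            """
            n = np.arange(profile.size)

            def phase(p):  # phase of the Fourier component at period p
                return np.angle(np.sum(profile * np.exp(-2j * np.pi * n / p)))

            ph1, ph2 = phase(p1), phase(p2)
            synth = p1 * p2 / (p2 - p1)                       # synthetic period
            coarse = ((ph1 - ph2) % (2 * np.pi)) / (2 * np.pi) * synth
            fine = ph1 / (2 * np.pi) * p1                     # modulo p1
            k = np.round((coarse - fine) / p1)                # integer order
            return k * p1 + fine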

  17. Principles and applications of measurement and uncertainty analysis in research and calibration

    Energy Technology Data Exchange (ETDEWEB)

    Wells, C.V.

    1992-11-01

    Interest in Measurement Uncertainty Analysis has grown in the past several years as it has spread to new fields of application, and research and development of uncertainty methodologies have continued. This paper discusses the subject from the perspectives of both research and calibration environments. It presents a history of the development and an overview of the principles of uncertainty analysis embodied in the United States National Standard, ANSI/ASME PTC 19.1-1985, Measurement Uncertainty. Examples are presented in which uncertainty analysis was utilized or is needed to gain further knowledge of a particular measurement process and to characterize final results. Measurement uncertainty analysis provides a quantitative estimate of the interval about a measured value or an experiment result within which the true value of that quantity is expected to lie. Years ago, Harry Ku of the United States National Bureau of Standards stated that "The informational content of the statement of uncertainty determines, to a large extent, the worth of the calibrated value." Today, that statement is just as true about calibration or research results as it was in 1968. Why is that true? What kind of information should we include in a statement of uncertainty accompanying a calibrated value? How and where do we get the information to include in an uncertainty statement? How should we interpret and use measurement uncertainty information? This discussion will provide answers to these and other questions about uncertainty in research and in calibration. The methodology to be described has been developed by national and international groups over the past nearly thirty years, and individuals were publishing information even earlier. Yet the work is largely unknown in many science and engineering arenas. I will illustrate various aspects of uncertainty analysis with some examples drawn from the radiometry measurement and calibration discipline from research activities.

  19. Transforming spatial point processes into Poisson processes using random superposition

    DEFF Research Database (Denmark)

    Møller, Jesper; Berthelsen, Kasper Klitgaaard

    with a complementary spatial point process Y to obtain a Poisson process X∪Y with intensity function β. Underlying this is a bivariate spatial birth-death process (Xt,Yt) which converges towards the distribution of (X,Y). We study the joint distribution of X and Y, and their marginal and conditional distributions. In particular, we introduce a fast and easy simulation procedure for Y conditional on X. This may be used for model checking: given a model for the Papangelou intensity of the original spatial point process, this model is used to generate the complementary process, and the resulting superposition is a Poisson process with intensity function β if and only if the true Papangelou intensity is used. Whether the superposition is actually such a Poisson process can easily be examined using well known results and fast simulation procedures for Poisson processes. We illustrate this approach to model checking

  20. Coherent inflation for large quantum superpositions of levitated microspheres

    Science.gov (United States)

    Romero-Isart, Oriol

    2017-12-01

    We show that coherent inflation (CI), namely quantum dynamics generated by inverted conservative potentials acting on the center of mass of a massive object, is an enabling tool to prepare large spatial quantum superpositions in a double-slit experiment. Combined with cryogenic, extreme high vacuum, and low-vibration environments, we argue that it is experimentally feasible to exploit CI to prepare the center of mass of a micrometer-sized object in a spatial quantum superposition comparable to its size. In such a hitherto unexplored parameter regime gravitationally-induced decoherence could be unambiguously falsified. We present a protocol to implement CI in a double-slit experiment by letting a levitated microsphere traverse a static potential landscape. Such a protocol could be experimentally implemented with an all-magnetic scheme using superconducting microspheres.
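
    A textbook-level remark on why an inverted potential "inflates" a superposition (our own note, not the paper's derivation): for the inverted harmonic potential V(x) = -(1/2)mλ²x², the Heisenberg equations of motion give

        x(t) = x(0) cosh(λt) + (p(0)/mλ) sinh(λt),

    so two wavepacket centres initially separated by Δx drift apart as Δx cosh(λt) ≈ (Δx/2) e^(λt). The separation grows exponentially while the evolution remains unitary and coherent, which is what makes CI an enabling tool for preparing large spatial superpositions.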

  1. Improved superposition schemes for approximate multi-caloron configurations

    International Nuclear Information System (INIS)

    Gerhold, P.; Ilgenfritz, E.-M.; Mueller-Preussker, M.

    2007-01-01

    Two improved superposition schemes for the construction of approximate multi-caloron-anti-caloron configurations, using exact single (anti-)caloron gauge fields as underlying building blocks, are introduced in this paper. The first improvement deals with possible monopole-Dirac string interactions between different calorons with non-trivial holonomy. The second one, based on the ADHM formalism, improves the (anti-)selfduality in the case of small caloron separations. It conforms with Shuryak's well-known ratio-ansatz when applied to instantons. Both superposition techniques provide a higher degree of (anti-)selfduality than the widely used sum-ansatz, which simply adds the (anti)caloron vector potentials in an appropriate gauge. Furthermore, the improved configurations (when discretized onto a lattice) are characterized by a higher stability when they are exposed to lattice cooling techniques.

  2. Complementary Huygens principle for geometrical and nongeometrical optics

    International Nuclear Information System (INIS)

    Luis, Alfredo

    2007-01-01

    We develop a fundamental principle depicting the generalized ray formulation of optics provided by the Wigner function. This principle is formally identical to the Huygens-Fresnel principle but in terms of opposite concepts, rays instead of waves, and incoherent superpositions instead of coherent ones. This ray picture naturally includes diffraction and interference, and provides a geometrical picture of the degree of coherence.

  3. Complementary Huygens Principle for Geometrical and Nongeometrical Optics

    Science.gov (United States)

    Luis, Alfredo

    2007-01-01

    We develop a fundamental principle depicting the generalized ray formulation of optics provided by the Wigner function. This principle is formally identical to the Huygens-Fresnel principle but in terms of opposite concepts, rays instead of waves, and incoherent superpositions instead of coherent ones. This ray picture naturally includes…

  4. The study on the Sensorless PMSM Control using the Superposition Theory

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Joung Pyo [Changwon National University, Changwon (Korea); Kwon, Soon Jae [Pukung National University, Seoul (Korea); Kim, Gyu Seob; Sohn, Mu Heon; Kim, Jong Dal [Dongmyung College, Pusan (Korea)

    2002-07-01

    This study presents a solution for controlling a Permanent Magnet Synchronous Motor without sensors. The control method is based on the superposition principle presented here. This sensorless method makes the computation of the estimated angle very simple, so the computing time for angle estimation is shorter than for other sensorless methods. The use of this system yields enhanced operation, fewer system components, lower system cost, an energy-efficient control system design and increased efficiency. A practical solution is described and results are given in this study. The performance of the sensorless architecture allows an intelligent approach to reducing the overall system cost of digital motion control applications by using cheaper electrical motors without sensors. This paper deals with an overview of sensorless solutions for PMSM control applications, with the focus on the new sensorless controller and its applications. (author). 6 refs., 16 figs., 1 tab.

  5. General principles governing sampling and measurement techniques for monitoring radioactive effluents from nuclear facilities

    International Nuclear Information System (INIS)

    Fitoussi, L.

    1978-01-01

    An explanation is given of the need to monitor the release of radioactive gases and liquid effluents from nuclear facilities, with particular emphasis on the ICRP recommendations and on the interest in this problem shown by the larger international organizations. This is followed by a description of the classes of radionuclides that are normally monitored in this way. The characteristics of monitoring 'in line' and 'by sample taking' are described; the disadvantages of in line monitoring and the problem of sample representativity are discussed. There follows an account of the general principles for measuring gaseous and liquid effluents that are applied in the techniques normally employed at nuclear facilities. Standards relating to the specifications for monitoring instruments are at present being devised by the International Electrotechnical Commission, and there are still major differences in national practices, at least as far as measurement thresholds are concerned. In conclusion, it is shown that harmonization of practices and standardization of equipment would probably help to make international relations in the field more productive. (author)

  6. Study of principal error sources in gamma spectrometry. Application to cross-section measurement

    International Nuclear Information System (INIS)

    Majah, M. Ibn.

    1985-01-01

    The principal error sources in gamma spectrometry have been studied with the aim of measuring cross sections with high precision. Three error sources have been studied: dead time and pile-up, which depend on the counting rate, and the coincidence effect, which depends on the disintegration scheme of the radionuclide in question. A constant-frequency pulse generator has been used to correct the counting losses due to dead time and pile-up for both long and short disintegration periods. The loss due to the coincidence effect can reach 25% and over, depending on the disintegration scheme and on the source-detector distance. After establishing the correction formula and verifying its validity for four examples (iron-56, scandium-48, antimony-120 and gold-196m), an application has been carried out by measuring cross sections of nuclear reactions that lead to long disintegration periods, which require counting at short source-detector distance and thus corrections for the dead time, pile-up and coincidence effects. 16 refs., 45 figs., 25 tabs. (author)

  7. Entanglement and discord of the superposition of Greenberger-Horne-Zeilinger states

    International Nuclear Information System (INIS)

    Parashar, Preeti; Rana, Swapan

    2011-01-01

    We calculate the analytic expression for the geometric measure of entanglement for arbitrary superpositions of two N-qubit canonical orthonormal Greenberger-Horne-Zeilinger (GHZ) states, and the same for two W states. In the course of characterizing all kinds of nonclassical correlations, an explicit formula for the quantum discord (via relative entropy) for the former class of states is presented. Contrary to the GHZ state, the closest separable state to the W state is not classical. Therefore, in this case, the discord is different from the relative entropy of entanglement. We conjecture that the discord for the N-qubit W state is log₂N.

  8. Absolute distance measurement with micrometer accuracy using a Michelson interferometer and the iterative synthetic wavelength principle.

    Science.gov (United States)

    Alzahrani, Khaled; Burton, David; Lilley, Francis; Gdeisat, Munther; Bezombes, Frederic; Qudeisat, Mohammad

    2012-02-27

    We present a novel system that can measure absolute distances of up to 300 mm with an uncertainty of the order of one micrometer, within a timeframe of 40 seconds. The proposed system uses a Michelson interferometer, a tunable laser, a wavelength meter and a computer for analysis. The principle of synthetic wavelength creation is used in a novel way in that the system employs an initial low-precision estimate of the distance, obtained using a triangulation or time-of-flight laser system, or similar, and then iterates through a sequence of progressively smaller synthetic wavelengths until it reaches micrometer uncertainties in the determination of the distance. A further novel feature of the system is its use of Fourier transform phase analysis techniques to achieve sub-wavelength accuracy. This method has the major advantages of being relatively simple to realize, offering demonstrated relative precisions better than 5 × 10⁻⁵. Finally, the fact that this device does not require a continuous line-of-sight to the target, as is the case with other configurations, offers significant advantages.
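
    The iteration can be summarized in a few lines: two wavelengths λ1 and λ2 define a synthetic wavelength Λ = λ1·λ2/|λ1 − λ2| much larger than either, and each step fixes the integer fringe order from the previous, coarser estimate. A minimal sketch under those assumptions (ours, not the authors' code):

        from math import pi

        def refine_distance(d_coarse, fringe_phases, synth_wavelengths):
            """Iterative synthetic-wavelength distance refinement (sketch).

            d_coarse: low-precision estimate, e.g. from a triangulation or
            time-of-flight system; fringe_phases[i]: interferometric phase in
            radians measured on the i-th (decreasing) synthetic wavelength.
            In a Michelson geometry the optical path difference is 2*d, so
            one fringe corresponds to half a synthetic wavelength.
            """
            d = d_coarse
            for phi, lam in zip(fringe_phases, synth_wavelengths):
                frac = (phi % (2 * pi)) / (2 * pi)   # fractional fringe order
                k = round(d / (lam / 2) - frac)      # integer order from prior d
                d = (k + frac) * (lam / 2)           # refined estimate
            return d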

  9. Superposition in quantum and relativity physics: an interaction interpretation of special relativity theory. III

    International Nuclear Information System (INIS)

    Schlegel, R.

    1975-01-01

    With the interaction interpretation, the Lorentz transformation of a system arises with selection from a superposition of its states in an observation-interaction. Integration of momentum states of a mass over all possible velocities gives the rest-mass energy. Static electrical and magnetic fields are not found to form such a superposition and are to be taken as irreducible elements. The external superposition consists of those states that are reached only by change of state of motion, whereas the internal superposition contains all the states available to an observer in a single inertial coordinate system. The conjecture is advanced that states of superposition may only be those related by space-time transformations (Lorentz transformations plus space inversion and charge conjugation). The continuum of external and internal superpositions is examined for various masses, and an argument for the unity of the superpositions is presented

  10. Quantum-mechanical Green's functions and nonlinear superposition law

    International Nuclear Information System (INIS)

    Nassar, A.B.; Bassalo, J.M.F.; Antunes Neto, H.S.; Alencar, P. de T.S.

    1986-01-01

    The quantum-mechanical Green's function is derived for the problem of a particle with time-dependent variable mass subject to a time-dependent forced harmonic-oscillator potential, by direct recourse to the corresponding Schroedinger equation. Through the use of the nonlinear superposition law of Ray and Reid, it is shown that such a Green's function can be obtained from that for the problem of a particle with unit (constant) mass subject either to a forced harmonic potential with constant frequency or only to a time-dependent linear field. (Author) [pt

  11. Quantum-mechanical Green's function and nonlinear superposition law

    International Nuclear Information System (INIS)

    Nassar, A.B.; Bassalo, J.M.F.; Antunes Neto, H.S.; Alencar, P.T.S.

    1986-01-01

    The quantum-mechanical Green's function is derived for the problem of a particle with time-dependent variable mass subject to a time-dependent forced harmonic-oscillator potential, by direct recourse to the corresponding Schroedinger equation. Through the use of the nonlinear superposition law of Ray and Reid, it is shown that such a Green's function can be obtained from that for the problem of a particle with unit (constant) mass subject either to a forced harmonic potential with constant frequency or only to a time-dependent linear field.

  12. Efficient Power Allocation for Video over Superposition Coding

    KAUST Repository

    Lau, Chun Pong

    2013-03-01

    In this paper we consider a wireless multimedia system that maps a scalable video coded (SVC) bit stream onto superposition coded (SPC) signals, referred to as the SVC-SPC architecture. Empirical experiments using a software-defined radio (SDR) emulator are conducted to gain a better understanding of its efficiency, specifically, the impact on the received signal of different power allocation ratios. Our experimental results show that to maintain high video quality, the power allocated to the base layer should be approximately four times higher than the power allocated to the enhancement layer.
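
    The quoted 4:1 finding is easy to make concrete: with a total power budget of 1, the base layer gets 0.8 and the enhancement layer 0.2. Here is a minimal sketch of such a two-layer superposition-coded transmit signal (our illustration with BPSK layers, not the SVC-SPC implementation):

        import numpy as np

        def superposition_code(base_bits, enh_bits, power_ratio=4.0):
            """Two-layer superposition-coded BPSK transmit signal (sketch).

            power_ratio ~ 4 mirrors the paper's finding that the base layer
            should receive roughly four times the enhancement-layer power.
            A receiver decodes the strong base layer first, treating the
            enhancement layer as noise, then cancels it to decode the rest.
            """
            p_base = power_ratio / (1.0 + power_ratio)  # fraction of total power
            p_enh = 1.0 - p_base
            x_base = 2.0 * np.asarray(base_bits, dtype=float) - 1.0  # {0,1}->{-1,+1}
            x_enh = 2.0 * np.asarray(enh_bits, dtype=float) - 1.0
            return np.sqrt(p_base) * x_base + np.sqrt(p_enh) * x_enh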

  13. On Kolmogorov's superpositions and Boolean functions

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    The paper overviews results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on an explicit numerical (i.e., constructive) algorithm for Kolmogorov's superpositions, the authors show that for obtaining minimum-size neural networks implementing any Boolean function, the activation function of the neurons is the identity function. Because classical AND-OR implementations, as well as threshold gate implementations, require exponential size in the worst case, it follows that size-optimal solutions for implementing arbitrary Boolean functions require analog circuitry. Conclusions and several comments on the required precision end the paper.

  14. Push-pull optical pumping of pure superposition states

    International Nuclear Information System (INIS)

    Jau, Y.-Y.; Miron, E.; Post, A.B.; Kuzma, N.N.; Happer, W.

    2004-01-01

    A new optical pumping method, 'push-pull pumping', can produce very nearly pure, coherent superposition states between the initial and the final sublevels of the important field-independent 0-0 clock resonance of alkali-metal atoms. The key requirement for push-pull pumping is the use of D1 resonant light which alternates between left and right circular polarization at the Bohr frequency of the state. The new pumping method works for a wide range of conditions, including atomic beams with almost no collisions, and atoms in buffer gases with pressures of many atmospheres.

  15. SUPERPOSITION OF STOCHASTIC PROCESSES AND THE RESULTING PARTICLE DISTRIBUTIONS

    International Nuclear Information System (INIS)

    Schwadron, N. A.; Dayeh, M. A.; Desai, M.; Fahr, H.; Jokipii, J. R.; Lee, M. A.

    2010-01-01

    Many observations of suprathermal and energetic particles in the solar wind and the inner heliosheath show that distribution functions scale approximately with the inverse of particle speed (v) to the fifth power. Although there are exceptions to this behavior, there is a growing need to understand why this type of distribution function appears so frequently. This paper develops the concept that a superposition of exponential and Gaussian distributions with different characteristic speeds and temperatures shows power-law tails. The particular type of distribution function, f ∝ v^-5, appears in a number of different ways: (1) a series of Poisson-like processes where entropy is maximized with the rates of individual processes inversely proportional to the characteristic exponential speed, (2) a series of Gaussian distributions where the entropy is maximized with the rates of individual processes inversely proportional to temperature and the density of individual Gaussian distributions proportional to temperature, and (3) a series of different diffusively accelerated energetic particle spectra with individual spectra derived from observations (1997-2002) of a multiplicity of different shocks. Thus, we develop a proof-of-concept for the superposition of stochastic processes that give rise to power-law distribution functions.
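
    The emergence of a power-law tail from superposed exponentials is easy to check numerically. In the toy sketch below, the u**-6 weighting is an illustrative choice that gives f(v) = 24 v^-5 analytically; it is not the entropy-maximizing rate distribution derived in the paper:

        import numpy as np

        def trap(y, x):
            """Trapezoidal integration, kept explicit for portability."""
            return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

        u = np.logspace(-2, 3, 4000)    # characteristic exponential speeds
        w = u**-6.0                     # illustrative weight over processes
        v = np.logspace(0.5, 2.0, 40)   # particle speeds to evaluate

        f = np.array([trap(w * np.exp(-vi / u), u) for vi in v])
        slope = np.polyfit(np.log(v), np.log(f), 1)[0]
        print(f"fitted tail index: {slope:.2f} (analytic value: -5)")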

  16. Unveiling the curtain of superposition: Recent gedanken and laboratory experiments

    Science.gov (United States)

    Cohen, E.; Elitzur, A. C.

    2017-08-01

    What is the true meaning of quantum superposition? Can a particle genuinely reside in several places simultaneously? These questions lie at the heart of this paper which presents an updated survey of some important stages in the evolution of the three-boxes paradox, as well as novel conclusions drawn from it. We begin with the original thought experiment of Aharonov and Vaidman, and proceed to its non-counterfactual version. The latter was recently realized by Okamoto and Takeuchi using a quantum router. We then outline a dynamic version of this experiment, where a particle is shown to “disappear” and “re-appear” during the time evolution of the system. This surprising prediction based on self-cancellation of weak values is directly related to our notion of Quantum Oblivion. Finally, we present the non-counterfactual version of this disappearing-reappearing experiment. Within the near future, this last version of the experiment is likely to be realized in the lab, proving the existence of exotic hitherto unknown forms of superposition. With the aid of Bell’s theorem, we prove the inherent nonlocality and nontemporality underlying such pre- and post-selected systems, rendering anomalous weak values ontologically real.

  17. Evolution of superpositions of quantum states through a level crossing

    International Nuclear Information System (INIS)

    Torosov, B. T.; Vitanov, N. V.

    2011-01-01

    The Landau-Zener-Stueckelberg-Majorana (LZSM) model is widely used for estimating transition probabilities in the presence of crossing energy levels in quantum physics. This model, however, makes the unphysical assumption of an infinitely long constant interaction, which introduces a divergent phase in the propagator. This divergence remains hidden when estimating output probabilities for a single input state insofar as the divergent phase cancels out. In this paper we show that, because of this divergent phase, the LZSM model is inadequate to describe the evolution of pure or mixed superposition states across a level crossing. The LZSM model can be used only if the system is initially in a single state or in a completely mixed superposition state. To this end, we show that the more realistic Demkov-Kunike model, which assumes a hyperbolic-tangent level crossing and a hyperbolic-secant interaction envelope, is free of divergences and is a much more adequate tool for describing the evolution through a level crossing for an arbitrary input state. For multiple crossing energies which are reducible to one or more effective two-state systems (e.g., by the Majorana and Morris-Shore decompositions), similar conclusions apply: the LZSM model does not produce definite values of the populations and the coherences, and one should use the Demkov-Kunike model instead.

  18. Improving the Yule-Nielsen modified Neugebauer model by dot surface coverages depending on the ink superposition conditions

    Science.gov (United States)

    Hersch, Roger David; Crete, Frederique

    2005-01-01

    Dot gain is different when dots are printed alone, printed in superposition with one ink or printed in superposition with two inks. In addition, the dot gain may also differ depending on the solid ink on which the considered halftone layer is superposed. In a previous research project, we developed a model for computing the effective surface coverage of a dot according to its superposition conditions. In the present contribution, we improve the Yule-Nielsen modified Neugebauer model by integrating into it our effective dot surface coverage computation model. Calibration of the reproduction curves mapping nominal to effective surface coverages in every superposition condition is carried out by fitting effective dot surfaces which minimize the sum of square differences between the measured reflection density spectra and the reflection density spectra predicted according to the Yule-Nielsen modified Neugebauer model. In order to predict the reflection spectrum of a patch, its known nominal surface coverage values are converted into effective coverage values by weighting the contributions from different reproduction curves according to the weights of the contributing superposition conditions. We analyze the colorimetric prediction improvement brought by our extended dot surface coverage model for clustered-dot offset prints, thermal transfer prints and ink-jet prints. The color differences induced by the differences between measured reflection spectra and reflection spectra predicted according to the new dot surface estimation model are quantified on 729 different cyan, magenta, yellow patches covering the full color gamut. As a reference, these differences are also computed for the classical Yule-Nielsen modified spectral Neugebauer model incorporating a single halftone reproduction curve for each ink. Taking into account dot surface coverages according to different superposition conditions considerably improves the predictions of the Yule-Nielsen modified Neugebauer model.
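
    The prediction step of the Yule-Nielsen modified spectral Neugebauer model is compact enough to sketch. The fragment below evaluates R = (sum_i a_i R_i^(1/n))^n with Demichel weights; the primary spectra and the n-value are invented placeholders, and the per-superposition-condition reproduction curves discussed above are assumed to have already produced the effective coverages:

        import numpy as np

        def ynsn_reflectance(c, m, y, primaries, n=2.0):
            """Yule-Nielsen modified spectral Neugebauer prediction.
            c, m, y: effective dot surface coverages in [0, 1].
            primaries: dict mapping the 8 Neugebauer primaries to spectra.
            n: Yule-Nielsen factor accounting for optical dot gain."""
            a = {                      # Demichel weights (independent layers)
                'w':   (1-c)*(1-m)*(1-y), 'c':   c*(1-m)*(1-y),
                'm':   (1-c)*m*(1-y),     'y':   (1-c)*(1-m)*y,
                'cm':  c*m*(1-y),         'cy':  c*(1-m)*y,
                'my':  (1-c)*m*y,         'cmy': c*m*y,
            }
            R = sum(a[k] * primaries[k]**(1.0 / n) for k in a)
            return R**n

        # Toy flat spectra for the 8 primaries, 31 bands over 400-700 nm.
        rng = np.random.default_rng(1)
        prim = {k: rng.uniform(0.05, 0.9) * np.ones(31)
                for k in ['w', 'c', 'm', 'y', 'cm', 'cy', 'my', 'cmy']}
        print(ynsn_reflectance(0.4, 0.2, 0.7, prim)[:5])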

  19. Superposition approach for description of electrical conductivity in sheared MWNT/polycarbonate melts

    Directory of Open Access Journals (Sweden)

    M. Saphiannikova

    2012-06-01

    The theoretical description of the electrical properties of polymer melts filled with attractively interacting conductive particles represents a great challenge. Such filler particles tend to build a network-like structure which is very fragile and can be easily broken in a shear flow with shear rates of about 1 s–1. In this study, measured shear-induced changes in the electrical conductivity of polymer composites are described using a superposition approach, in which the filler particles are separated into a highly conductive percolating phase and a low-conductivity non-percolating phase. The latter is represented by separated, well-dispersed filler particles. It is assumed that these phases determine the effective electrical properties of the composites through a type of mixing rule involving the phase volume fractions. The conductivity of the percolating phase is described with the help of classical percolation theory, while the conductivity of the non-percolating phase is given by the matrix conductivity enhanced by the presence of separate filler particles. The percolation theory is coupled with a kinetic equation for a scalar structural parameter which describes the current state of the filler network under particular flow conditions. The superposition approach is applied to transient shear experiments carried out on polycarbonate composites filled with multi-wall carbon nanotubes.
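
    A schematic sketch of this two-phase superposition idea follows; the kinetic equation, the percolation exponents and all parameter values are generic placeholders, not the fitted model of the paper:

        import numpy as np

        def conductivity(shear_rate, t_end=100.0, dt=0.01,
                         k_build=0.05, k_break=0.5,
                         phi=0.01, phi_c=0.002, t_exp=2.0,
                         sigma_m=1e-12, sigma_p0=1e2, enh=10.0):
            """Structural parameter x in [0, 1]: fraction of filler in the
            percolating phase, dx/dt = k_build*(1-x) - k_break*gdot*x."""
            x = 1.0                                    # start fully networked
            for _ in range(int(t_end / dt)):
                x += dt * (k_build * (1.0 - x) - k_break * shear_rate * x)
            phi_perc = x * phi
            # Percolation-theory conductivity of the network phase.
            sigma_perc = sigma_p0 * max(phi_perc - phi_c, 0.0)**t_exp
            # Matrix slightly enhanced by dispersed non-percolating filler.
            sigma_non = sigma_m * (1.0 + enh * (1.0 - x) * phi)
            return x * sigma_perc + (1.0 - x) * sigma_non   # mixing rule

        for gdot in [0.0, 0.1, 1.0, 10.0]:
            print(f"shear rate {gdot:5.1f} 1/s -> "
                  f"sigma = {conductivity(gdot):.3e} S/m")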

  20. Principle and methods for measurement of snow water equivalent by detection of natural gamma radiation

    Energy Technology Data Exchange (ETDEWEB)

    Endrestoel, G O [Institutt for Atomenergi, Kjeller (Norway)]

    1979-01-01

    The underlying principles for snow cover determination by use of terrestrial gamma radiation are presented. Several of the methods that have been proposed to exploit the effect are discussed, and some of the more important error sources for the different methods are listed. In conclusion an estimate of the precision that can be obtained by these methods is given.
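
    The core principle reduces to exponential attenuation of the terrestrial gamma flux by the water mass of the snowpack, so that the snow water equivalent (SWE) can be recovered from count rates measured over bare and snow-covered ground. A minimal illustration, with an assumed attenuation coefficient and count rates:

        import math

        # Exponential attenuation of terrestrial gamma flux by the snowpack:
        #   N = N0 * exp(-mu * SWE)  =>  SWE = ln(N0 / N) / mu
        mu = 0.07      # effective mass attenuation coefficient, cm^2/g (assumed)
        N0 = 1200.0    # count rate over bare ground, counts/s (assumed)
        N = 700.0      # count rate over snow cover, counts/s (assumed)

        swe = math.log(N0 / N) / mu   # g/cm^2, numerically ~ cm of water
        print(f"snow water equivalent ~ {swe:.1f} g/cm^2")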

  1. Principle and methods for measurement of snow water equivalent by detection of natural gamma radiation

    Energy Technology Data Exchange (ETDEWEB)

    Endrestol, G O

    1979-01-01

    The underlying principles for snow cover determination by use of terrestrial gamma radiation are presented. Several of the methods that have been proposed to exploit the effect are discussed, and some of the more important error sources for the different methods are listed. In conclusion estimates of the precision that can be obtained by these methods are given.

  2. Capacity-Approaching Superposition Coding for Optical Fiber Links

    DEFF Research Database (Denmark)

    Estaran Tolosa, Jose Manuel; Zibar, Darko; Tafur Monroy, Idelfonso

    2014-01-01

    We report on the first experimental demonstration of superposition coded modulation (SCM) for polarization-multiplexed coherent-detection optical fiber links. The proposed coded modulation scheme is combined with phase-shifted bit-to-symbol mapping (PSM) in order to achieve geometric and passive … -SCM is employed in the framework of bit-interleaved coded modulation with iterative decoding (BICM-ID) for forward error correction. The fiber transmission system is characterized in terms of signal-to-noise ratio for the back-to-back case and correlated with simulated results for ideal transmission over an additive white Gaussian noise channel. Thereafter, successful demodulation and decoding after dispersion-unmanaged transmission over 240 km of standard single-mode fiber of dual-polarization 6-Gbaud 16-, 32- and 64-ary SCM-PSM is experimentally demonstrated.

  3. Superposition of Stress Fields in Diametrically Compressed Cylinders

    Directory of Open Access Journals (Sweden)

    João Augusto de Lima Rocha

    The theoretical analysis of the Brazilian test is a classical plane-stress problem of elasticity theory, where a vertical force is applied to a horizontal plane, the boundary of a semi-infinite medium. Under the hypothesis of a normal radial stress field, the results of that model are correct. Nevertheless, the superposition of three stress fields, two based on prior results and the third based on a hydrostatic stress field, is incorrect. Indeed, this work shows that the Cauchy vectors (tractions) are non-vanishing in the parallel planes in which the two opposing vertical forces are applied. The aim of this work is to detail the construction of the theoretical model for the three stress fields used, in order to demonstrate the inconsistency often stated in the literature.

  4. Simulation Analysis of DC and Switching Impulse Superposition Circuit

    Science.gov (United States)

    Zhang, Chenmeng; Xie, Shijun; Zhang, Yu; Mao, Yuxiang

    2018-03-01

    Surge capacitors connected between the neutral bus and ground in a converter station are exposed to superposed DC and impulse voltages during operation. This paper analyses the simulated aging circuit for surge capacitors using the PSCAD electromagnetic transient simulation software, together with the effect of the DC voltage on the waveform produced by the impulse voltage generator. The effect of the coupling capacitor on the test voltage waveform is also studied. The test results show that the DC voltage has little effect on the waveform of the output of the surge voltage generator, and that the value of the coupling capacitor has little effect on the voltage waveform across the sample. The simulation results show that a combined DC and impulse aging test for surge capacitors is feasible.

  5. Polyphony: superposition independent methods for ensemble-based drug discovery.

    Science.gov (United States)

    Pitt, William R; Montalvão, Rinaldo W; Blundell, Tom L

    2014-09-30

    Structure-based drug design is an iterative process, following cycles of structural biology, computer-aided design, synthetic chemistry and bioassay. In favorable circumstances, this process can lead to hundreds of protein-ligand crystal structures. In addition, molecular dynamics simulations are increasingly being used to further explore the conformational landscape of these complexes. Currently, methods capable of the analysis of ensembles of crystal structures and MD trajectories are limited and usually rely upon least-squares superposition of coordinates. Novel methodologies are described for the analysis of multiple structures of a protein. Statistical approaches that rely upon residue equivalence, but not superposition, are developed. Tasks that can be performed include the identification of hinge regions, allosteric conformational changes and transient binding sites. The approaches are tested on crystal structures of CDK2 and other CMGC protein kinases and a simulation of p38α. Known interaction–conformational change relationships are highlighted, and new ones are revealed. A transient but druggable allosteric pocket in CDK2 is predicted to occur under the CMGC insert. Furthermore, an evolutionarily conserved conformational link from the location of this pocket, via the αEF-αF loop, to phosphorylation sites on the activation loop is discovered. New methodologies are described and validated for the superposition-independent conformational analysis of large collections of structures or simulation snapshots of the same protein. The methodologies are encoded in a Python package called Polyphony, which is released as open source to accompany this paper [http://wrpitt.bitbucket.org/polyphony/].

  6. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
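
    The single-kernel superposition step itself is a convolution of the energy released in the medium (TERMA) with a point kernel. Below is a toy homogeneous-water sketch via FFT convolution; the grid, the kernel shape and the normalization are invented for illustration, whereas real kernels come from Monte Carlo transport, as in the paper:

        import numpy as np

        # Schematic single-kernel superposition on a 3D water grid:
        # dose = TERMA convolved with an isotropic scatter point kernel.
        n, vox = 64, 0.2                       # grid size, voxel size in cm
        ax = (np.arange(n) - n // 2) * vox
        X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
        r = np.maximum(np.sqrt(X**2 + Y**2 + Z**2), vox / 2)  # avoid r = 0

        kernel = np.exp(-r / 1.5) / r**2       # toy isotropic point kernel
        kernel /= kernel.sum()                 # normalize deposited energy

        terma = np.zeros((n, n, n))            # point interaction at centre
        terma[n // 2, n // 2, n // 2] = 1.0

        dose = np.real(np.fft.ifftn(np.fft.fftn(terma) *
                                    np.fft.fftn(np.fft.ifftshift(kernel))))
        print(f"dose max {dose.max():.3e}, total {dose.sum():.3f}")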

  7. A millimeter wave linear superposition oscillator in 0.18 μm CMOS technology

    International Nuclear Information System (INIS)

    Yan Dong; Mao Luhong; Su Qiujie; Xie Sheng; Zhang Shilin

    2014-01-01

    This paper presents a millimeter wave (mm-wave) oscillator that generates a signal at 36.56 GHz. The mm-wave oscillator is realized in a UMC 0.18 μm CMOS process. The linear superposition (LS) technique breaks through the limit of the cut-off frequency (fT) and realizes oscillation at a frequency well above fT. Measurement results show that the LS oscillator produces a calibrated −37.17 dBm output power when biased at 1.8 V; the output power of the fundamental signal is −10.85 dBm after calibration. The measured phase noise at 1 MHz frequency offset is −112.54 dBc/Hz at the frequency of 9.14 GHz. This circuit can be properly applied to mm-wave communication systems with the advantages of low cost and high integration density. (semiconductor integrated circuits)

  8. Intra-cavity generation of superpositions of Laguerre-Gaussian beams

    CSIR Research Space (South Africa)

    Naidoo, Darryl

    2012-01-01

    In this paper we demonstrate experimentally the intra-cavity generation of a coherent superposition of Laguerre–Gaussian modes of zero radial order but opposite azimuthal order. The superposition is created with a simple intra-cavity stop...

  9. Evaluation of fine ceramics raw powders with particle size analyzers having different measuring principle and its problem

    International Nuclear Information System (INIS)

    Hayakawa, Osamu; Nakahira, Kenji; Tsubaki, Junichiro.

    1995-01-01

    Many kinds of analyzers based on various principles have been developed for measuring the particle size distribution of fine ceramics powders. However, the reproducibility of the results, the interchangeability of the models, and the reliability of the ends of the measured distribution have not been investigated for each principle. In this paper, these important points for particle size analysis were clarified by measuring raw material powders of fine ceramics. (1) In the case of the laser diffraction and scattering method, the reproducibility within the same model is good; however, the interchangeability of different models is not so good, especially at the ends of the distribution. Submicron powders having a high refractive index show this tendency markedly. (2) The photo-sedimentation method has some problems to be overcome, especially in measuring submicron powders having a high refractive index or flaky particles. The reproducibility of the X-ray sedimentation method is much better than that of photo-sedimentation. (3) The light obscuration and electrical sensing zone methods show good reproducibility, but their interchangeability is sometimes poor, being affected by calibration and other factors. (author)

  10. Who has to pay for measures in the field of water management? A proposal for applying the polluter pays principle.

    Science.gov (United States)

    Grünebaum, Thomas; Schweder, Heinrich; Weyand, Michael

    2009-01-01

    There is no doubt about the fact that the implementation of the European Water Framework Directive (WFD) and the pursuit of its goal of good ecological status will give rise to measures in different fields of water management. However, a conclusive and transparent method of financing these measures has been missing up to now. Measures in the water management sector are no mere end in themselves; instead, they serve specific ends directed at human activities, or they serve general environmental objectives. Following the integrative approach of the WFD, which looks upon river basins as a whole, and its requirement to observe the polluter pays principle, all different groups within a river basin should contribute to the costs according to their cost-bearer roles as polluters, stakeholders with vested interests or beneficiaries, via relevant yardsticks. In order to quantify the financial expenditure of each cost bearer, a special algorithm was developed and tested in the river basin of a small tributary of the Ruhr River. It proved to be generally practicable with regard to its handling and the comprehension of the results. Therefore, the application of a cost-bearer system based on the polluter pays principle, and thus in correspondence with the WFD's requirements, appears possible as a means of financing future measures.

  11. Improving ability measurement in surveys by following the principles of IRT: The Wordsum vocabulary test in the General Social Survey.

    Science.gov (United States)

    Cor, M Ken; Haertel, Edward; Krosnick, Jon A; Malhotra, Neil

    2012-09-01

    Survey researchers often administer batteries of questions to measure respondents' abilities, but these batteries are not always designed in keeping with the principles of optimal test construction. This paper illustrates one instance in which following these principles can improve a measurement tool used widely in the social and behavioral sciences: the GSS's vocabulary test called "Wordsum". This ten-item test is composed of very difficult items and very easy items, and item response theory (IRT) suggests that the omission of moderately difficult items is likely to have handicapped Wordsum's effectiveness. Analyses of data from national samples of thousands of American adults show that after adding four moderately difficult items to create a 14-item battery, "Wordsumplus" (1) outperformed the original battery in terms of quality indicators suggested by classical test theory; (2) reduced the standard error of IRT ability estimates in the middle of the latent ability dimension; and (3) exhibited higher concurrent validity. These findings show how to improve Wordsum and suggest that analysts should use a score based on all 14 items instead of using the summary score provided by the GSS, which is based on only the original 10 items. These results also show more generally how surveys measuring abilities (and other constructs) can benefit from careful application of insights from the contemporary educational testing literature. Copyright © 2012 Elsevier Inc. All rights reserved.
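
    The IRT argument above is easy to illustrate: for a two-parameter logistic (2PL) model, the test information is I(θ) = Σ a_i² P_i(θ)(1 − P_i(θ)) and the standard error of the ability estimate is 1/√I(θ). The item parameters below are invented, not Wordsum's, but they reproduce the qualitative point that adding moderately difficult items shrinks the mid-scale standard error:

        import numpy as np

        def two_pl_info(theta, a, b):
            """Test information for a 2PL model: I = sum a^2 * P * (1 - P)."""
            p = 1.0 / (1.0 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))
            return (a[:, None]**2 * p * (1.0 - p)).sum(axis=0)

        theta = np.linspace(-3, 3, 7)
        a10, b10 = np.ones(10), np.array([-2.5] * 5 + [2.5] * 5)  # easy + hard
        a14 = np.ones(14)
        b14 = np.concatenate([b10, [-0.5, 0.0, 0.0, 0.5]])        # add mid items

        se10 = 1.0 / np.sqrt(two_pl_info(theta, a10, b10))
        se14 = 1.0 / np.sqrt(two_pl_info(theta, a14, b14))
        for t, s1, s2 in zip(theta, se10, se14):
            print(f"theta {t:+.1f}: SE 10-item {s1:.2f} -> 14-item {s2:.2f}")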

  12. Principles of azimuthal correlation measurement of J/psi with charged hadrons

    CERN Multimedia

    Maire, Antonin

    2012-01-01

    Schematic illustration of measurement variables in the azimuthal J/psi-hadron correlation measurement. The z-axis perpendicular to the x-y-plane corresponds to the beam axis in the experiment. The reconstructed e+e- pairs are only identifiable as J/psi mesons on a statistical basis.

  13. Variational principles for collective motion: Relation between invariance principle of the Schroedinger equation and the trace variational principle

    International Nuclear Information System (INIS)

    Klein, A.; Tanabe, K.

    1984-01-01

    The invariance principle of the Schroedinger equation provides a basis for theories of collective motion with the help of the time-dependent variational principle. It is formulated here with maximum generality, requiring only the motion of the intrinsic state in the collective space. Special cases arise when the trial vector is a generalized coherent state and when it is a uniform superposition of collective eigenstates. The latter example yields variational principles uncovered previously only within the framework of the equations-of-motion method. (orig.)

  14. Design principles for prototype and production magnetic measurements of superconducting magnets

    International Nuclear Information System (INIS)

    Brown, B.C.

    1989-02-01

    The magnetic field strength and shape for SSC superconducting magnets will determine critical properties of the accelerator systems. This paper will enumerate the relations between magnetic field properties and magnet material selection and assembly techniques. Magnitudes of various field errors will be explored along with operating parameters which can affect them. Magnetic field quality requirements will be compared to available measuring techniques and the relation between magnetic field measurements and other quality control efforts will be discussed. This will provide a framework for designing a complete magnet measurement plan for the SSC project. 17 refs., 1 fig., 5 tabs

  15. A convolution-superposition dose calculation engine for GPUs

    Energy Technology Data Exchange (ETDEWEB)

    Hissoiny, Sami; Ozell, Benoit; Despres, Philippe [Departement de genie informatique et genie logiciel, Ecole polytechnique de Montreal, 2500 Chemin de Polytechnique, Montreal, Quebec H3T 1J4 (Canada); Departement de radio-oncologie, CRCHUM-Centre hospitalier de l'Universite de Montreal, 1560 rue Sherbrooke Est, Montreal, Quebec H2L 4M1 (Canada)

    2010-03-15

    Purpose: Graphics processing units (GPUs) are increasingly used for scientific applications, where their parallel architecture and unprecedented computing power density can be exploited to accelerate calculations. In this paper, a new GPU implementation of a convolution/superposition (CS) algorithm is presented. Methods: This new GPU implementation has been designed from the ground up to use the graphics card's strengths and to avoid its weaknesses. The CS GPU algorithm takes into account beam hardening, off-axis softening, kernel tilting, and relies heavily on raytracing through patient imaging data. Implementation details are reported as well as a multi-GPU solution. Results: An overall single-GPU acceleration factor of 908x was achieved when compared to a nonoptimized version of the CS algorithm implemented in PlanUNC in single threaded central processing unit (CPU) mode, resulting in approximately 2.8 s per beam for a 3D dose computation on a 0.4 cm grid. A comparison to an established commercial system leads to an acceleration factor of approximately 29x, or 0.58 s versus 16.6 s per beam in single threaded mode. An acceleration factor of 46x has been obtained for the total energy released per mass (TERMA) calculation and a 943x acceleration factor for the CS calculation compared to PlanUNC. Dose distributions also have been obtained for a simple water-lung phantom to verify that the implementation gives accurate results. Conclusions: These results suggest that GPUs are an attractive solution for radiation therapy applications and that careful design, taking the GPU architecture into account, is critical in obtaining significant acceleration factors. These results potentially can have a significant impact on complex dose delivery techniques requiring intensive dose calculations such as intensity-modulated radiation therapy (IMRT) and arc therapy. They also are relevant for adaptive radiation therapy where dose results must be obtained rapidly.

  16. Zoo agent's measure in applying the five freedoms principles for animal welfare.

    Science.gov (United States)

    Demartoto, Argyo; Soemanto, Robertus Bellarminus; Zunariyah, Siti

    2017-09-01

    Animal welfare should be prioritized not only for the animal's life sustainability but also for supporting the sustainability of living organisms' life on the earth. However, Indonesian people have not understood it yet, thereby still treating animals arbitrarily and not appreciating either domesticated or wild animals. This research aimed to analyze the zoo agents' actions in applying the five freedoms principle for animal welfare in Taman Satwa Taru Jurug (hereafter called TSTJ) or Surakarta Zoo and Gembira Loka Zoo (GLZ) of Yogyakarta, Indonesia, using Giddens' structuration theory. The informants in this comparative, explorative study were organizers, visitors, and stakeholders of the zoos, selected using a purposive sampling technique. The informants consisted of 19 persons: 8 from TSTJ (Code T), 10 from GLZ (Code G), and representatives from the Natural Resource Conservation Center of Central Java (Code B). Data were collected through observation, in-depth interviews, focus group discussions and documentation. Data were analyzed using an interactive model of analysis consisting of three components: data reduction, data display, and conclusion drawing. Data validation was carried out using method and data source triangulations. Food, nutrition, and nutrition levels have been provided consistent with the animals' habits and natural behavior. Animal keepers always maintain their self-cleanliness. GLZ has provided cages according to the technical instructions for constructing ideal cages, but the cages in TSTJ are worrying, as they are not consistent with the standard, rusty, and damaged, and the animals have no partners. Some animals in GLZ are often sick, whereas some animals in TSTJ have died due to poor maintenance. The iron pillars of the cages restrict animal behavior in TSTJ so that the animals do not yet have freedom to behave normally, whereas, in GLZ, they can move freely in their original habitat. The animals in the two zoos have not been free from disruption, stress, and pressure due to the

  17. Principles of fuel ion ratio measurements in fusion plasmas by collective Thomson scattering

    DEFF Research Database (Denmark)

    Stejner Pedersen, Morten; Nielsen, Stefan Kragh; Bindslev, Henrik

    2011-01-01

    For certain scattering geometries, collective Thomson scattering (CTS) measurements are sensitive to the composition of magnetically confined fusion plasmas. CTS therefore holds the potential to become a new diagnostic for measurements of the fuel ion ratio, i.e. the tritium to deuterium density ratio. Measurements of the fuel ion ratio will be important for plasma control and machine protection in future experiments with burning fusion plasmas. Here we examine the theoretical basis for fuel ion ratio measurements by CTS. We show that the sensitivity to plasma composition is enhanced by the signatures of ion cyclotron motion and ion Bernstein waves which appear for scattering geometries with resolved wave vectors near perpendicular to the magnetic field. We investigate the origin and properties of these features in CTS spectra and give estimates of their relative importance for fuel ion ratio measurements.

  18. PREVENTIVE MEASURES - EXCEPTION TO THE PRINCIPLE OF THE RIGHT TO LIBERTY AND SECURITY

    Directory of Open Access Journals (Sweden)

    Marin-Alin DĂNILĂ

    2016-05-01

    Considering the specific obligations arising from the exercise of criminal action and civil action in criminal proceedings, and taking into account the need to ensure a better conduct of the activities undertaken in solving criminal cases, it sometimes appears necessary to take certain procedural measures. Procedural measures have been defined [1] as institutions available to criminal procedural law and criminal judicial bodies, consisting of privations or certain constraints, real or personal, of the conditions and circumstances under which the criminal proceedings are realized. Through the function pursued by the legislature, these measures work as a legal means of prevention or suppression of circumstances or situations likely to jeopardize the effectiveness of the criminal proceedings through the obstacles, difficulties and confusion which they can produce [2]. Procedural measures arise as possibilities rather than obligations; not being specific to any criminal case, judicial bodies take measures according to the specific circumstances of each criminal case. From this derives the adjacent character of the criminal procedural measures to the main proceedings [3].

  19. THEORETICAL PRINCIPLES OF EVALUATION OF EFFICIENCY OF SOIL CONSERVATION MEASURES IN AGRICULTURAL LAND-USE

    Directory of Open Access Journals (Sweden)

    Shevchenko O.

    2017-08-01

    In this article, current scientific and theoretical positions concerning the determination of the effectiveness of soil protection measures on agricultural lands are investigated. The protection of land from degradation is one of the most important problems of agriculture, as degradation leads to a significant decrease in soil fertility and crop yields. In today's conditions, when the protection of agricultural land has become an urgent and priority task, what is needed is a scientific substantiation of the economic assessment of the damage caused to agriculture by land degradation, as well as the development of methods for determining the economic efficiency of the most progressive soil protection measures, technologies and complexes, based on their overall comparative evaluation. Soil protection measures are a system of various measures aimed at reducing the negative degradation effects on the soil cover and ensuring the preservation and reproduction of soil fertility and integrity, as well as increasing soil productivity as a result of rational use. The economic essence of soil protection measures is the economic effect achieved by preventing the damage caused by land degradation to agriculture, as well as by obtaining additional profit as a result of their action. The economic effectiveness of soil protection measures means their effectiveness, that is, the correlation between the results and the costs that ensured them. An excess of the economic result over the cost of its achievement indicates the economic efficiency of soil protection measures, and the difference between the result and the expenditure characterizes the economic effect. Ecological efficiency is characterized by the environmental parameters of the soil cover, namely: the weakening of degradation effects on soils; the improvement of their qualitative properties; and an increase in production without violation of environmental standards.

  20. Operational quantities for use in external radiation protection measurements. An investigation of concepts and principles

    International Nuclear Information System (INIS)

    1983-01-01

    Under the terms of the Euratom Treaty the Commission of the European Communities is required to draw up basic standards for the health protection of the general public and workers against the dangers arising from ionizing radiation. The basic standards lay down reference values for particular quantities; these values are required to be measured, and appropriate steps taken to ensure that they are not exceeded. To ensure that the basic standards are applied uniformly in the Member States, it is necessary to harmonize not only national laws but also measurement and recording techniques. As a practical contribution towards this objective, the Commission has since 1964 been conducting intercomparison programmes on operational radiation protection dosimetry. Effective monitoring against the dangers of ionizing radiation cannot be guaranteed unless the measuring instruments meet the necessary requirements, the quantities measured are those for which limit values have been laid down, and the instruments can be calibrated unequivocally. The differences between the concepts of quantity and unit of measurement in radiation protection were often unclear. In the light of developments at international level, the introduction of the international system of units of measurements (SI units) and the contents of ICRP Publication No 26, the services of the European Community responsible for radiation protection decided to review the whole question of quantities. The introduction of the 'index' quantities (absorbed dose index and dose equivalent index) was greeted with initial enthusiasm, but it soon became clear, on closer critical examination, that these too had major shortcomings. The Commission, in collaboration with experts from the Member States of the European Community, has therefore set out in this publication the various considerations and points of view concerning the use of these quantities in practical dosimetry. It is hoped that this publication will be of use to all

  1. Practical purification scheme for decohered coherent-state superpositions via partial homodyne detection

    International Nuclear Information System (INIS)

    Suzuki, Shigenari; Takeoka, Masahiro; Sasaki, Masahide; Andersen, Ulrik L.; Kannari, Fumihiko

    2006-01-01

    We present a simple protocol to purify a coherent-state superposition that has undergone a linear lossy channel. The scheme comprises only a single beam splitter and a homodyne detector, and thus is experimentally feasible. In practice, a superposition of coherent states is transformed into a classical mixture of coherent states by linear loss, which is usually the dominant decoherence mechanism in optical systems. We also address the possibility of producing a larger-amplitude superposition state from decohered states, and show that in most cases the decoherence of the states is amplified along with the amplitude.

  2. 40 CFR Appendix A-1 to Part 50 - Reference Measurement Principle and Calibration Procedure for the Measurement of Sulfur Dioxide...

    Science.gov (United States)

    2010-07-01

    ... on a Fluorescence Method”, Journal of the Air Pollution Control Association, vol. 23, p. 514-516... Handbook for Air Pollution Measurement Systems—Volume II. Ambient Air Quality Monitoring Programs. U.S...

  3. Cellular telephones measure activity and lifespace in community-dwelling adults: proof of principle.

    Science.gov (United States)

    Schenk, Ana Katrin; Witbrodt, Bradley C; Hoarty, Carrie A; Carlson, Richard H; Goulding, Evan H; Potter, Jane F; Bonasera, Stephen J

    2011-02-01

    To describe a system that uses off-the-shelf sensor and telecommunication technologies to continuously measure individual lifespace and activity levels in a novel way. Proof of concept involving three field trials of 30, 30, and 21 days. Omaha, Nebraska, metropolitan and surrounding rural region. Three participants (48-year-old man, 33-year-old woman, and 27-year-old man), none with any functional limitations. Cellular telephones were used to detect in-home position and in-community location and to measure physical activity. Within the home, cellular telephones and Bluetooth transmitters (beacons) were used to locate participants at room-level resolution. Outside the home, the same cellular telephones and global positioning system (GPS) technology were used to locate participants at a community-level resolution. Physical activity was simultaneously measured using the cellular telephone accelerometer. This approach had face validity to measure activity and lifespace. More importantly, this system could measure the spatial and temporal organization of these metrics. For example, an individual's lifespace was automatically calculated across multiple time intervals. Behavioral time budgets showing how people allocate time to specific regions within the home were also automatically generated. Mobile monitoring shows much promise as an easily deployed system to quantify activity and lifespace, important indicators of function, in community-dwelling adults. © 2011, Copyright the Authors. Journal compilation © 2011, The American Geriatrics Society.
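
    One simple way to quantify daily lifespace from such GPS fixes is the maximum great-circle distance from home. The sketch below uses the haversine formula; the home coordinates and fixes are invented, and real pipelines would add convex-hull areas, dwell times and quality filtering:

        import math

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two WGS84 points, in km."""
            r = 6371.0
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dphi = p2 - p1
            dlmb = math.radians(lon2 - lon1)
            a = (math.sin(dphi / 2)**2 +
                 math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2)**2)
            return 2 * r * math.asin(math.sqrt(a))

        home = (41.2565, -95.9345)    # assumed home coordinates (Omaha)
        fixes = [(41.2565, -95.9345), (41.2591, -95.9400), (41.3000, -96.0500)]
        lifespace_km = max(haversine_km(*home, *f) for f in fixes)
        print(f"daily lifespace radius ~ {lifespace_km:.1f} km")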

  4. Routine internal- and external-quality control data in clinical laboratories for estimating measurement and diagnostic uncertainty using GUM principles.

    Science.gov (United States)

    Magnusson, Bertil; Ossowicki, Haakan; Rienitz, Olaf; Theodorsson, Elvar

    2012-05-01

    Healthcare laboratories are increasingly joining into larger laboratory organizations encompassing several physical laboratories. This caters for important new opportunities for re-defining the concept of a 'laboratory' to encompass all laboratories and measurement methods measuring the same measurand for a population of patients. In order to make measurement results comparable, bias should be minimized or eliminated and measurement uncertainty properly evaluated for all methods used for a particular patient population. The measurement as well as the diagnostic uncertainty can be evaluated from internal and external quality control results using GUM principles. In this paper the uncertainty evaluations are described in detail using only two main components, within-laboratory reproducibility and the uncertainty of the bias component, according to a Nordtest guideline. The evaluation is exemplified for the determination of creatinine in serum for a conglomerate of laboratories, expressed both in absolute units (μmol/L) and in relative terms (%). An expanded measurement uncertainty of 12 μmol/L associated with concentrations of creatinine below 120 μmol/L and of 10% associated with concentrations above 120 μmol/L was estimated. The diagnostic uncertainty encompasses both measurement uncertainty and biological variation, and can be estimated for a single value and for a difference. This diagnostic uncertainty for the difference between two samples from the same patient was determined to be 14 μmol/L associated with concentrations of creatinine below 100 μmol/L and 14% associated with concentrations above 100 μmol/L.
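
    A minimal sketch of the two-component evaluation described above, in the style of the Nordtest guideline (all numbers are invented; u(Rw) would come from internal QC, the bias terms from external QA rounds):

        import numpy as np

        # Combined uncertainty from routine QC data (Nordtest TR 537 style):
        #   u_c = sqrt(u_Rw^2 + u_bias^2),  expanded U = k * u_c with k = 2.
        u_Rw = 4.0                              # within-lab reproducibility, umol/L
        bias = np.array([3.0, -2.0, 4.0, 1.0])  # biases vs EQA rounds, umol/L
        u_cref = 2.0                            # uncertainty of EQA assigned values

        u_bias = np.sqrt(np.mean(bias**2) + u_cref**2)
        u_c = np.sqrt(u_Rw**2 + u_bias**2)
        print(f"expanded uncertainty U = {2 * u_c:.1f} umol/L (k = 2)")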

  5. Zoo agent's measure in applying the five freedoms principles for animal welfare

    Directory of Open Access Journals (Sweden)

    Argyo Demartoto

    2017-09-01

    Background: Animal welfare should be prioritized not only for the animal's life sustainability but also for supporting the sustainability of living organisms' life on the earth. However, Indonesian people have not understood it yet, thereby still treating animals arbitrarily and not appreciating either domesticated or wild animals. Aim: This research aimed to analyze the zoo agents' actions in applying the five freedoms principle for animal welfare in Taman Satwa Taru Jurug (hereafter called TSTJ) or Surakarta Zoo and Gembira Loka Zoo (GLZ) of Yogyakarta, Indonesia, using Giddens' structuration theory. Materials and Methods: The informants in this comparative, explorative study were organizers, visitors, and stakeholders of the zoos, selected using a purposive sampling technique. The informants consisted of 19 persons: 8 from TSTJ (Code T), 10 from GLZ (Code G), and representatives from the Natural Resource Conservation Center of Central Java (Code B). Data were collected through observation, in-depth interviews, focus group discussions and documentation. Data were analyzed using an interactive model of analysis consisting of three components: data reduction, data display, and conclusion drawing. Data validation was carried out using method and data source triangulations. Results: Food, nutrition, and nutrition levels have been provided consistent with the animals' habits and natural behavior. Animal keepers always maintain their self-cleanliness. GLZ has provided cages according to the technical instructions for constructing ideal cages, but the cages in TSTJ are worrying, as they are not consistent with the standard, rusty, and damaged, and the animals have no partners. Some animals in GLZ are often sick, whereas some animals in TSTJ have died due to poor maintenance. The iron pillars of the cages restrict animal behavior in TSTJ so that the animals do not yet have freedom to behave normally, whereas, in GLZ, they can move freely in their original habitat. The animals in the two zoos

  6. Improved passive flux samplers for measuring ammonia emissions from animal houses, part 1: Basic principles

    NARCIS (Netherlands)

    Scholtens, R.; Hol, J.M.G.; Wagemans, M.J.M.; Phillips, V.R.

    2003-01-01

    At present, precise, expensive and laborious methods with a high resolution in time are needed, to determine ammonia emission rates from animal houses. The high costs for equipment, maintenance and labour limit the number of sites that can be measured. This study examines a new, simpler concept for

  7. Measurement of guided light-mode intensity: An alternative waveguide sensing principle

    DEFF Research Database (Denmark)

    Horvath, R.; Skivesen, N.; Pedersen, H.C.

    2004-01-01

    An alternative transduction mechanism for planar optical waveguide sensors is reported. Based on a simple measurement of the mode intensity, the presented transduction is an interesting alternative to the conventional mode-angle transduction, because the expensive, high-precision angular rotation...

  8. Cosmological principles. II. Physical principles

    International Nuclear Information System (INIS)

    Harrison, E.R.

    1974-01-01

    The discussion of cosmological principles covers the uniformity principle of the laws of physics, the gravitation and cognizability principles, and the Dirac creation, chaos, and bootstrap principles. (U.S.)

  9. Principles and equations for measuring and interpreting protein stability: From monomer to tetramer.

    Science.gov (United States)

    Bedouelle, Hugues

    2016-02-01

    The ability to measure the thermodynamic stability of proteins with precision is important for both academic and applied research. Such measurements rely on mathematical models of the protein denaturation profile, i.e. the relation between a global protein signal, corresponding to the folding states in equilibrium, and the variable value of a denaturing agent, either heat or a chemical molecule, e.g. urea or guanidinium hydrochloride. In turn, such models rely on a handful of physical laws: the laws of mass action and conservation, the law that relates the protein signal and concentration, and the one that relates stability and denaturant value. So far, equations have been derived mainly for the denaturation profiles of homomeric proteins. Here, we review the underlying basic physical laws and show in detail how to derive model equations for the unfolding equilibria of homomeric or heteromeric proteins up to trimers and potentially tetramers, with or without folding intermediates, and give full demonstrations. We show that such equations cannot be derived for pentamers or higher oligomers except in special degenerate cases. We expand the method to signals that do not correspond to extensive protein properties. We review and expand methods for uncovering hidden intermediates of unfolding. Finally, we review methods for comparing and interpreting the thermodynamic parameters that derive from stability measurements for cognate wild-type and mutant proteins. This work should provide a robust theoretical basis for measuring the stability of complex proteins. Copyright © 2015 Elsevier B.V. and Société Française de Biochimie et Biologie Moléculaire (SFBBM). All rights reserved.
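
    As a worked example of the simplest case treated by such models, consider a two-state monomer with the usual linear extrapolation ΔG(D) = ΔG_H2O − m·D and constant native/unfolded baselines; all parameter values below are invented for illustration:

        import numpy as np

        R, T = 8.314e-3, 298.15          # kJ/(mol*K), K

        def frac_unfolded(D, dG_w, m):
            """Two-state monomer N <-> U, law of mass action:
            K = exp(-(dG_w - m*D)/RT), fU = K / (1 + K).
            dG_w in kJ/mol, m in kJ/(mol*M), D in M."""
            K = np.exp(-(dG_w - m * D) / (R * T))
            return K / (1.0 + K)

        def signal(D, dG_w=20.0, m=8.0, sN=1.0, sU=0.2):
            """Observed signal: population-weighted baseline signals."""
            fU = frac_unfolded(D, dG_w, m)
            return (1.0 - fU) * sN + fU * sU

        D = np.linspace(0.0, 6.0, 7)     # denaturant concentration, M
        print(np.round(signal(D), 3))    # midpoint at Cm = dG_w/m = 2.5 M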

  10. Measuring perceptions related to e-cigarettes: Important principles and next steps to enhance study validity.

    Science.gov (United States)

    Gibson, Laura A; Creamer, MeLisa R; Breland, Alison B; Giachello, Aida Luz; Kaufman, Annette; Kong, Grace; Pechacek, Terry F; Pepper, Jessica K; Soule, Eric K; Halpern-Felsher, Bonnie

    2018-04-01

    Measuring perceptions associated with e-cigarette use can provide valuable information to help explain why youth and adults initiate and continue to use e-cigarettes. However, given the complexity of e-cigarette devices and their continuing evolution, measures of perceptions of this product have varied greatly. Our goal, as members of the working group on e-cigarette measurement within the Tobacco Centers of Regulatory Science (TCORS) network, is to provide guidance to researchers developing surveys concerning e-cigarette perceptions. We surveyed the 14 TCORS sites and received and reviewed 371 e-cigarette perception items from seven sites. We categorized the items based on types of perceptions asked, and identified measurement approaches that could enhance data validity and approaches that researchers may consider avoiding. The committee provides suggestions in four areas: (1) perceptions of benefits, (2) harm perceptions, (3) addiction perceptions, and (4) perceptions of social norms. Across these 4 areas, the most appropriate way to assess e-cigarette perceptions depends largely on study aims. The type and number of items used to examine e-cigarette perceptions will also vary depending on respondents' e-cigarette experience (i.e., user vs. non-user), level of experience (e.g., experimental vs. established), type of e-cigarette device (e.g., cig-a-like, mod), and age. Continuous formative work is critical to adequately capture perceptions in response to the rapidly changing e-cigarette landscape. Most important, it is imperative to consider the unique perceptual aspects of e-cigarettes, building on the conventional cigarette literature as appropriate, but not relying on existing conventional cigarette perception items without adjustment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Newly developed hardness testing system, "Cariotester": measurement principles and development of a program for measuring Knoop hardness of carious dentin.

    Science.gov (United States)

    Shimizu, Akihiko; Nakashima, Syozi; Nikaido, Toru; Sugawara, Toyotaro; Yamamoto, Takatsugu; Momoi, Yasuko

    2013-01-01

    We previously discovered that when a cone-shaped indenter coated with paint was pressed into an object, the paint disappeared in accordance with the depth of the indentation. Based on this fact, we developed the Cariotester, a portable system for measuring the Knoop hardness (KHN) of carious dentin. The Cariotester is composed of a handpiece with an indenter, a microscope, and a computer. In this system, the painted indenter is forced into the material with a 150-gf load, and the indentation depth (CT depth) is obtained from the paint disappearance. The CT depth by the Cariotester and the KHN by a microhardness tester were determined at 14 dentin regions. From the data, a program was created to convert the CT depth of the carious dentin into the KHN. As a result, if the CT depth is measured with this system, the KHN of carious dentin can be displayed in real time.
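
    The conversion program itself is not published in this record, but the generic shape of such a calibration is a regression on paired (CT depth, KHN) measurements. The sketch below fits a power law in log space; all data points and the resulting coefficients are hypothetical placeholders, not the Cariotester's actual calibration:

        import numpy as np

        # Hypothetical calibration pairs: Cariotester indentation depth (um)
        # versus Knoop hardness measured on the same dentin regions.
        ct_depth = np.array([40.0, 60.0, 90.0, 130.0, 180.0, 250.0])
        khn = np.array([55.0, 32.0, 18.0, 10.0, 6.0, 3.5])

        # Power-law fit KHN = A * depth**B via linear regression in log space.
        B, logA = np.polyfit(np.log(ct_depth), np.log(khn), 1)
        A = np.exp(logA)

        def depth_to_khn(d_um):
            """Convert a measured CT depth (um) to an estimated KHN."""
            return A * d_um**B

        print(f"KHN = {A:.1f} * depth^{B:.2f}; "
              f"depth 100 um -> KHN {depth_to_khn(100.0):.1f}")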

  12. A cute and highly contrast-sensitive superposition eye : The diurnal owlfly Libelloides macaronius

    NARCIS (Netherlands)

    Belušič, Gregor; Pirih, Primož; Stavenga, Doekele G.

    The owlfly Libelloides macaronius (Insecta: Neuroptera) has large bipartite eyes of the superposition type. The spatial resolution and sensitivity of the photoreceptor array in the dorsofrontal eye part was studied with optical and electrophysiological methods. Using structured illumination

  13. Integral superposition of paraxial Gaussian beams in inhomogeneous anisotropic layered structures in Cartesian coordinates

    Czech Academy of Sciences Publication Activity Database

    Červený, V.; Pšenčík, Ivan

    2015-01-01

    Roč. 25, - (2015), s. 109-155 ISSN 2336-3827 Institutional support: RVO:67985530 Keywords : integral superposition of paraxial Gaussian beams * inhomogeneous anisotropic media * S waves in weakly anisotropic media Subject RIV: DC - Siesmology, Volcanology, Earth Structure

  14. On the L-characteristic of nonlinear superposition operators in lp-spaces

    International Nuclear Information System (INIS)

    Dedagic, F.

    1995-04-01

    In this paper we describe the L-characteristic of the nonlinear superposition operator F(x)(s) = f(s, x(s)) between two Banach spaces of functions x from N to R. It has been shown that the L-characteristic of the nonlinear superposition operator acting between two Lebesgue spaces has the so-called Σ-convexity property. In this paper we show that the L-characteristic of the operator F (between two Banach spaces) has the convexity property. This means that the classical interpolation theorem of Riesz-Thorin for a linear operator holds for the nonlinear superposition operator acting between two Banach spaces of sequences. Moreover, we consider the growth function of the superposition operator in the mentioned spaces and show that it has the logarithmic convexity property. (author). 7 refs

  15. Eddy Covariance Method for CO2 Emission Measurements: CCS Applications, Principles, Instrumentation and Software

    Science.gov (United States)

    Burba, George; Madsen, Rod; Feese, Kristin

    2013-04-01

    The Eddy Covariance method is a micrometeorological technique for direct high-speed measurements of the transport of gases, heat, and momentum between the earth's surface and the atmosphere. Gas fluxes, emission and exchange rates are carefully characterized from single-point in-situ measurements using permanent or mobile towers, or moving platforms such as automobiles, helicopters, airplanes, etc. Since the early 1990s, this technique has been widely used by micrometeorologists across the globe for quantifying CO2 emission rates from various natural, urban and agricultural ecosystems [1,2], including areas of agricultural carbon sequestration. Presently, over 600 eddy covariance stations are in operation in over 120 countries. In the last 3-5 years, advancements in instrumentation and software have reached the point when they can be effectively used outside the area of micrometeorology, and can prove valuable for geological carbon capture and sequestration, landfill emission measurements, high-precision agriculture and other non-micrometeorological industrial and regulatory applications. In the field of geological carbon capture and sequestration, the magnitude of CO2 seepage fluxes depends on a variety of factors. Emerging projects utilize eddy covariance measurement to monitor large areas where CO2 may escape from the subsurface, to detect and quantify CO2 leakage, and to assure the efficiency of CO2 geological storage [3,4,5,6,7,8]. Although Eddy Covariance is one of the most direct and defensible ways to measure and calculate turbulent fluxes, the method is mathematically complex, and requires careful setup, execution and data processing tailor-fit to a specific site and a project. With this in mind, step-by-step instructions were created to introduce a novice to the conventional Eddy Covariance technique [9], and to assist in further understanding the method through more advanced references such as graduate-level textbooks, flux networks guidelines, journals
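
    At its core, the method reduces to the covariance of vertical wind speed and gas concentration fluctuations over an averaging block, F = mean(w'c'). The synthetic-data sketch below illustrates only this core step and ignores the real-world corrections (coordinate rotation, despiking, WPL density terms) that the cited guidelines cover:

        import numpy as np

        rng = np.random.default_rng(42)
        fs, minutes = 10, 30                 # 10 Hz sampling, 30-min block
        n = fs * 60 * minutes

        # Synthetic turbulence: vertical wind w (m/s) and a CO2 density c
        # (mmol/m^3) that is weakly correlated with w (an upward flux).
        w = rng.normal(0.0, 0.3, n)
        c = 15.0 + 0.05 * w + rng.normal(0.0, 0.2, n)

        # Reynolds decomposition: subtract block means, average w'c'.
        flux = np.mean((w - w.mean()) * (c - c.mean()))   # mmol m^-2 s^-1
        print(f"F_CO2 ~ {flux * 1000:.1f} umol m^-2 s^-1")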

  16. A criticism to the fundamental principles of physics: The problem of the quantum measurement (I)

    International Nuclear Information System (INIS)

    Mormontoy Cardenas, Oscar; Marquez Jacome, Mateo

    2008-01-01

    The collapse of the wave-packet model, due to extremely fast fluctuations of the quantum field, leads to interpreting the phase speed of the harmonic waves that compose the packet as the speed of the time flux. If one considers that the harmonic waves keep different phases, the wave packet disperses almost instantly and, as a consequence, the energy of the quantum system could be measured with absolute exactitude at a given time. These results suggest that time would be a superforce that ultimately determines the events of the universe and is responsible for the intrinsic pulsations observable in physical systems. (author)

  17. The principle of measuring unusual change of underground mass by optical astrometric instrument

    Directory of Open Access Journals (Sweden)

    Wang Jiancheng

    2012-11-01

    In this study, we estimate the deflection angle of the plumb line at a ground site, and give a relation between the angle, the abnormal mass and the site distance (depth and horizontal distance). We then derive the abnormality of the underground material density using plumb lines measured at different sites, and study earthquake gestation, development and occurrence. Using the deflection angles of plumb lines observed at two sites, we give a method to calculate the mass and the center of gravity of underground materials. We also estimate the abnormal masses of latent seismic zones with different energies, using thermodynamic relations, and introduce a new optical astrometric instrument we have developed.
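    A toy version of the idea: the horizontal attraction of a buried point-mass anomaly tilts the local plumb line by roughly g_h/g. The point-mass formula and all numbers below are textbook first-order approximations introduced here purely for illustration; the record's actual instrument and inversion are more elaborate.

```python
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
g = 9.81               # surface gravity, m/s^2

def plumb_deflection(dm, x, d):
    """Deflection angle (radians) of the plumb line at horizontal
    distance x from a point-mass anomaly dm buried at depth d."""
    r2 = x**2 + d**2
    g_h = G * dm * x / r2**1.5   # horizontal component of the anomaly's attraction
    return g_h / g               # small-angle approximation

# Example: 1e12 kg excess mass, 5 km deep, site 3 km away horizontally.
angle = plumb_deflection(1e12, 3e3, 5e3)
print(f"deflection ~ {np.degrees(angle) * 3600:.3f} arcsec")
```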

  18. Proof-of-principle measurements for an NDA-based core discharge monitor

    International Nuclear Information System (INIS)

    Halbig, J.K.; Monticone, A.C.

    1990-01-01

    The feasibility of using nondestructive assay instruments as a core discharge monitor for CANDU reactors was investigated at the Ontario Hydro Bruce Nuclear Generating Station A, Unit 3, in Ontario, Canada. The measurements were made to determine if radiation signatures from discharged irradiated fuel could be measured unambiguously and used to count the number of fuel pushes from a reactor face. Detectors using the (γ,n) reaction thresholds of beryllium and deuterium collected the data, but data from shielded and unshielded ion chambers were collected as well. The detectors were placed on a fueling trolley that carried the fueling machine between the reactors and the central service area. A microprocessor-based electronics system (the GRAND-I, which also resided on the trolley) provided detector biases and preamplifier power and acquired and transferred the data. It was connected by an RS-232 serial link to a lap-top computer adjacent to the fueling control console in the main reactor control room. The lap-top computer collected and archived the data on a 3.5-in. floppy disk. The results clearly showed such an approach to be adaptable as a core discharge monitor. 4 refs., 8 figs

  19. Radioimmunoassay - renin - angiotensin. Principles of radioimmunoassay and their application in measuring renin and angiotensin

    Energy Technology Data Exchange (ETDEWEB)

    Krause, D K; Hummerich, W; Poulsen, K [eds.

    1978-01-01

    Typical pitfalls such as impurity of the 'standard', tracer damage, crossreactivity of the antiserum, unspecific binding of protecting proteins, blank effects with negative results, charcoal stripping, invisible coprecipitate or uncertainty in the analysis of the calibration curve (graph, logit-log, polynomial or spline function) can occur in any type of radioimmunoassay; they are detailed in the general part of this book. The special position occupied by radioimmunological quantification of parameters of the renin-angiotensin system creates additional, even more serious problems. While the radioimmunological determination of the decapeptide angiotensin I no longer poses major obstacles, measurement of the biologically active octapeptide angiotensin II is still only possible in a few centers. The (indirect) determination of plasma renin is characterized by a situation where the enzyme renin may be clearly defined in theory as a specific 10,11-leucine-leucine endopeptidase cleaving only a decapeptide, whereas the actual renin assay measures various forms of renin and other angiotensin-forming (or angiotensin-destroying) enzymes at the same time.

  20. Evaluation of screen-film system quality: physical principles and measurement techniques

    International Nuclear Information System (INIS)

    Borasi, G.; Berardi, P.; Ferretti, P.P.; Piccagli, V.

    1990-01-01

    Comparative evaluation of radiographic film-screen systems presents several problems from both the theoretical and the experimental point of view. From the theoretical point of view the main difficulty is the choice of the parameters best suited to express the 'overall quality' of a system. From the practical point of view the main problem is that measuring some basic quantities (resolution and noise) requires sophisticated and expensive instruments. This paper deals with both these problems. To express image quality we have adopted the signal-to-noise power ratio: this index depends in an explicit way on the contrast, resolution and noise of the system. The dependence on sensitivity is implicit and was derived using literature data. From a knowledge of the dependence of image quality on sensitivity it is possible to develop an 'overall quality' index which is considered to express the 'technological level' of the systems. In this work some basic physical quantities (characteristic curve, sensitivity) were evaluated using standard instruments. To measure spatial resolution and noise an inexpensive, PC-based, TV-digitizer system was developed. As an example, both image and overall quality indices were evaluated on three mammographic systems which are typical of the three different 'phases' of the development of this technique

  1. Investigation of the dual-gauge principle for eliminating measurement interference in nuclear density and moisture gauges

    International Nuclear Information System (INIS)

    Dunn, W.L.

    1974-07-01

    Mathematical models for an application of the dual-gauge principle to surface neutron moisture-content gauges were developed under an Agency co-ordinated research programme. The response of a detector (such as a BF3 proportional counter) to low-energy neutrons depends on the hydrogen present in the sample in the form of water. Other factors which affect the gauge response are sample density, composition (particularly with regard to the presence of strong thermal-neutron absorbers), and bound hydrogen content. In this work mathematical models for epicadmium and bare BF3 detector response have been developed for surface neutron moisture-content gauges. These models are based on epithermal and thermal line- and area-flux models obtained from Diffusion Theory and Transport Theory, where the flux as a function of radial distance r from the source is φ(r), the line flux is ∫φ(r)dr, and the area flux is ∫φ(r)r dr. All models have been checked by calculation and comparison with experimental results, except for the Transport Theory thermal flux models. The computer calculations were made on an IBM 370/165 system. In addition, the dual-gauge principle was applied and demonstrated as a means of minimizing the composition measurement interference
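    For illustration, the line- and area-flux integrals mentioned above can be evaluated numerically for any assumed flux profile φ(r). The exponential-over-r profile and the integration limits below are stand-ins chosen for the example, not a model from the report.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative thermal-flux profile vs. radial distance r (cm); placeholder shape.
def phi(r, L=2.8):
    return np.exp(-r / L) / (r + 0.1)   # small offset avoids the r -> 0 singularity

line_flux, _ = quad(lambda r: phi(r), 0.0, 50.0)        # line flux: integral of phi(r) dr
area_flux, _ = quad(lambda r: phi(r) * r, 0.0, 50.0)    # area flux: integral of phi(r) r dr
print(f"line flux = {line_flux:.4f}")
print(f"area flux = {area_flux:.4f}")
```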

  2. Experimental Demonstration of Capacity-Achieving Phase-Shifted Superposition Modulation

    DEFF Research Database (Denmark)

    Estaran Tolosa, Jose Manuel; Zibar, Darko; Caballero Jambrina, Antonio

    2013-01-01

    We report on the first experimental demonstration of phase-shifted superposition modulation (PSM) for optical links. Successful demodulation and decoding is obtained after 240 km transmission for 16-, 32- and 64-PSM.

  3. Resilience to decoherence of the macroscopic quantum superpositions generated by universally covariant optimal quantum cloning

    International Nuclear Information System (INIS)

    Spagnolo, Nicolo; Sciarrino, Fabio; De Martini, Francesco

    2010-01-01

    We show that the quantum states generated by universal optimal quantum cloning of a single photon represent a universal set of quantum superpositions resilient to decoherence. We adopt the Bures distance as a tool to investigate the persistence of the quantum coherence of these states. According to this analysis, the process of universal cloning realizes a class of quantum superpositions that exhibits a covariance property in a lossy configuration over the complete set of polarization states in the Bloch sphere.

  4. A Tutorial on Basic Principles of Microwave Reflectometry Applied to Fluctuation Measurements in Fusion Plasmas

    International Nuclear Information System (INIS)

    Nazikian, R.; Kramer, G.J.; Valeo, E.

    2001-01-01

    Microwave reflectometry is now routinely used for probing the structure of magnetohydrodynamic and turbulent fluctuations in fusion plasmas. Conditions specific to the core of tokamak plasmas, such as the small amplitude of density irregularities and the uniformity of the background plasma, have enabled progress in the quantitative interpretation of reflectometer signals. In particular, the extent of applicability of the 1-D [one-dimensional] geometric optics description of the reflected field is investigated by direct comparison with 1-D full wave analysis. Significant advances in laboratory experiments are discussed which are paving the way towards a thorough understanding of this important measurement technique. Data are presented from the Tokamak Fusion Test Reactor [R. Hawryluk, Plasma Physics and Controlled Fusion 33 (1991) 1509] establishing the validity of the geometric optics description of the scattered field and demonstrating the feasibility of imaging turbulent fluctuations in fusion-scale devices

  5. Solid phase stability of molybdenum under compression: Sound velocity measurements and first-principles calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiulu [Laboratory for Shock Wave and Detonation Physics Research, Institute of Fluid Physics, P.O. Box 919-102, 621900 Mianyang, Sichuan (China); Laboratory for Extreme Conditions Matter Properties, Southwest University of Science and Technology, 621010 Mianyang, Sichuan (China); Liu, Zhongli [Laboratory for Shock Wave and Detonation Physics Research, Institute of Fluid Physics, P.O. Box 919-102, 621900 Mianyang, Sichuan (China); College of Physics and Electric Information, Luoyang Normal University, 471022 Luoyang, Henan (China); Jin, Ke; Xi, Feng; Yu, Yuying; Tan, Ye; Dai, Chengda; Cai, Lingcang [Laboratory for Shock Wave and Detonation Physics Research, Institute of Fluid Physics, P.O. Box 919-102, 621900 Mianyang, Sichuan (China)

    2015-02-07

    The high-pressure solid phase stability of molybdenum (Mo) has been the center of a long-standing controversy over its high-pressure melting. In this work, experimental and theoretical research has been conducted to check its solid phase stability under compression. First, we performed sound velocity measurements from 38 to 160 GPa using a two-stage light-gas gun and explosive loading in backward- and forward-impact geometries, along with high-precision velocity interferometry. From the sound velocities, we found no solid-solid phase transition in Mo before shock melting, which does not support the previous solid-solid phase transition conclusion inferred from the sharp drops of the longitudinal sound velocity [Hixson et al., Phys. Rev. Lett. 62, 637 (1989)]. Then, we searched its structures globally using the multi-algorithm collaborative crystal structure prediction technique combined with density functional theory. By comparing the enthalpies of the body-centered cubic (bcc) structure with those of the metastable structures, we found that bcc is the most stable structure in the range of 0–300 GPa. The present theoretical results, together with previous ones, strongly support our experimental conclusions.

  6. Measurement of cerebral blood flow by single photon emission tomography: principles and application to functional studies of the language areas

    International Nuclear Information System (INIS)

    Tran Dinh, Y.R.; Seylaz, J.

    1989-01-01

    Quantitative measurement of cerebral blood flow by single photon emission computerized tomography (SPECT) is a new technique which is particularly suitable for routine studies of cerebro-vascular diseases. SPECT can be used to examine the deep structures of the brain and cerebellum. The functional areas of the brain, which have hitherto been only accessible by clinical-anatomical methods, can be imaged by this technique, based on the correlation between cerebral blood flow and metabolism. The demonstration of preferential activation of temporal and frontal zones in the left hemisphere by active speech stimulation confirms the general principles of hemispheric lateralization of cerebral functions. In addition to this role in studying the physiology of normal subjects, the technique has practical pathological applications. Knowledge of hemispheric lateralization of spoken language should be a pre-operative test for cerebral lesion when there is a risk that surgical intervention may produce irreversible neuropsychological lesions

  7. Principles of electromigration measurements

    International Nuclear Information System (INIS)

    Roesch, F.

    1988-01-01

    Based on experimental literature data obtained by means of different analytical techniques, ratios between the individual ion mobilities of central and complex ions in equilibrium reactions in aqueous solutions (hydrolysis, protonation, complex formation) are discussed. The data pairs are compared by introducing normalized individual ion mobilities of the complex species with respect to the individual ion mobilities of the corresponding central ions; the central ion itself is assigned a normalized individual ion mobility of 1. The correlations give evidence of proportionalities between the individual ion mobilities of complex and central ions according to the ratios of their charges. Some practical and theoretical aspects of the correlations are discussed. (author)

  8. Effect of the superposition of a dielectric barrier discharge onto a premixed gas burner flame

    Science.gov (United States)

    Zaima, Kazunori; Takada, Noriharu; Sasaki, Koichi

    2011-10-01

    We are investigating combustion control with the help of nonequilibrium plasma. In this work, we examined the effect of a dielectric barrier discharge (DBD) on a premixed burner flame with a CH4/O2/Ar gas mixture. The premixed burner flame was covered with a quartz tube. A copper electrode was attached to the outside of the quartz tube and connected to a high-voltage power supply. DBD inside the quartz tube was obtained between the copper electrode and the grounded nozzle of the burner, which was placed at the bottom of the quartz tube. We clearly observed that the flame length was shortened by superposing the DBD onto the bottom part of the flame. The shortened flame length indicates an enhancement of the burning velocity. We measured optical emission spectra from the bottom region of the flame and observed clear line emissions from Ar, which were never observed from the flame without the DBD. We evaluated the rotational temperatures of OH and CH radicals by spectral fitting: the rotational temperature of CH was unchanged, while that of OH decreased upon superposition of the DBD. According to these results, the enhancement of the burning velocity is not caused by gas heating. New reaction pathways are suggested.

  9. Probing the conductance superposition law in single-molecule circuits with parallel paths.

    Science.gov (United States)

    Vazquez, H; Skouta, R; Schneebeli, S; Kamenetska, M; Breslow, R; Venkataraman, L; Hybertsen, M S

    2012-10-01

    According to Kirchhoff's circuit laws, the net conductance of two parallel components in an electronic circuit is the sum of the individual conductances. However, when the circuit dimensions are comparable to the electronic phase coherence length, quantum interference effects play a critical role, as exemplified by the Aharonov-Bohm effect in metal rings. At the molecular scale, interference effects dramatically reduce the electron transfer rate through a meta-connected benzene ring when compared with a para-connected benzene ring. For longer conjugated and cross-conjugated molecules, destructive interference effects have been observed in the tunnelling conductance through molecular junctions. Here, we investigate the conductance superposition law for parallel components in single-molecule circuits, particularly the role of interference. We synthesize a series of molecular systems that contain either one backbone or two backbones in parallel, bonded together cofacially by a common linker on each end. Single-molecule conductance measurements and transport calculations based on density functional theory show that the conductance of a double-backbone molecular junction can be more than twice that of a single-backbone junction, providing clear evidence for constructive interference.

  10. Basic principles

    International Nuclear Information System (INIS)

    Wilson, P.D.

    1996-01-01

    Some basic explanations are given of the principles underlying the nuclear fuel cycle, starting with the physics of atomic and nuclear structure and continuing with nuclear energy and reactors, fuel and waste management and finally a discussion of economics and the future. An important aspect of the fuel cycle concerns the possibility of ''closing the back end'', i.e. reprocessing the waste or unused fuel in order to re-use it in reactors of various kinds. The alternative, the ''once-through'' cycle, discards the discharged fuel completely. An interim measure involves the prolonged storage of highly radioactive waste fuel. (UK)

  11. Bernoulli's Principle

    Science.gov (United States)

    Hewitt, Paul G.

    2004-01-01

    Some teachers have difficulty understanding Bernoulli's principle particularly when the principle is applied to the aerodynamic lift. Some teachers favor using Newton's laws instead of Bernoulli's principle to explain the physics behind lift. Some also consider Bernoulli's principle too difficult to explain to students and avoid teaching it…

  12. About simple nonlinear and linear superpositions of special exact solutions of Veselov-Novikov equation

    International Nuclear Information System (INIS)

    Dubrovsky, V. G.; Topovsky, A. V.

    2013-01-01

    New exact solutions of the Veselov-Novikov (VN) equation, nonstationary and stationary, in the form of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, …, N, are constructed via the Zakharov-Manakov ∂̄-dressing method. Simple nonlinear superpositions are represented, up to a constant, by the sums of solutions u^(n) and are calculated by ∂̄-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schrödinger equation. Remarkably, in the zero-energy limit the simple nonlinear superpositions convert to linear ones in the form of sums of the special solutions u^(n). It is shown that the sums u = u^(k_1) + … + u^(k_m), 1 ≤ k_1 < k_2 < … < k_m ≤ N, over arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons as well as superpositions of plane-wave-type singular periodic solutions. By construction, these exact solutions also represent new exact transparent potentials of the 2D stationary Schrödinger equation and can serve as model potentials for electrons in planar structures of modern electronics.

  13. About simple nonlinear and linear superpositions of special exact solutions of Veselov-Novikov equation

    Energy Technology Data Exchange (ETDEWEB)

    Dubrovsky, V. G.; Topovsky, A. V. [Novosibirsk State Technical University, Karl Marx prosp. 20, Novosibirsk 630092 (Russian Federation)

    2013-03-15

    New exact solutions of the Veselov-Novikov (VN) equation, nonstationary and stationary, in the form of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, …, N, are constructed via the Zakharov-Manakov ∂̄-dressing method. Simple nonlinear superpositions are represented, up to a constant, by the sums of solutions u^(n) and are calculated by ∂̄-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schrödinger equation. Remarkably, in the zero-energy limit the simple nonlinear superpositions convert to linear ones in the form of sums of the special solutions u^(n). It is shown that the sums u = u^(k_1) + … + u^(k_m), 1 ≤ k_1 < k_2 < … < k_m ≤ N, over arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons as well as superpositions of plane-wave-type singular periodic solutions. By construction, these exact solutions also represent new exact transparent potentials of the 2D stationary Schrödinger equation and can serve as model potentials for electrons in planar structures of modern electronics.

  14. Variability of residual stresses and superposition effect in multipass grinding of high-carbon high-chromium steel

    Science.gov (United States)

    Karabelchtchikova, Olga; Rivero, Iris V.

    2005-02-01

    The distribution of residual stresses (RS) and the surface integrity generated in heat treatment and subsequent multipass grinding were investigated in this experimental study to examine the sources of variability and the nature of the interactions of the experimental factors. A nested experimental design was implemented (a) to compare the sources of RS variability, (b) to examine the RS distribution and tensile peak location due to the experimental factors, and (c) to analyze the superposition relationship in the RS distribution due to the multipass grinding technique. To characterize the material responses, several techniques were used, including microstructural analysis, hardness, toughness and roughness examinations, and retained austenite and RS measurements using x-ray diffraction. The causality of the RS was explained through the strong correlation between the surface integrity characteristics and the RS patterns. The main sources of variation were the depth of the RS distribution and the multipass grinding technique. The grinding effect on the RS was statistically significant; however, it was mostly predetermined by the preexisting RS induced in heat treatment. Regardless of the preceding treatments, the effect of the multipass grinding technique exhibited similar RS patterns, which suggests the existence of a superposition relationship and orthogonal memory between the passes of the grinding operation.

  15. Relaxation Behavior by Time-Salt and Time-Temperature Superpositions of Polyelectrolyte Complexes from Coacervate to Precipitate

    Directory of Open Access Journals (Sweden)

    Samim Ali

    2018-01-01

    Complexation between anionic and cationic polyelectrolytes results in solid-like precipitates or liquid-like coacervates depending on the added salt in the aqueous medium. However, the boundary between these polymer-rich phases is quite broad, and the associated changes in the polymer relaxation in the complexes across the transition regime are poorly understood. In this work, the relaxation dynamics of complexes across this transition is probed over a wide timescale by measuring viscoelastic spectra and zero-shear viscosities at varying temperatures and salt concentrations for two different salt types. We find that the complexes exhibit time-temperature superposition (TTS) at all salt concentrations, while the range of overlapping frequencies for time-temperature-salt superposition (TTSS) strongly depends on the salt concentration (Cs) and gradually shifts to higher frequencies as Cs is decreased. The sticky-Rouse model describes the relaxation behavior at all Cs. However, the collective relaxation of the polyelectrolyte complexes gradually approaches a rubbery regime and eventually exhibits a gel-like response as Cs is decreased, which limits the validity of TTSS.
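    To illustrate the superposition idea in the record above, the sketch below builds a master curve by horizontally shifting a frequency sweep taken at one condition onto a reference sweep. The power-law "data", the noise level, and the brute-force shift search are invented for the demonstration; real TTS/TTSS analysis uses measured moduli and often WLF-type fits for the shift factors.

```python
import numpy as np

rng = np.random.default_rng(1)

def modulus(freq, shift):
    """Synthetic storage modulus; shifting frequency by `shift` collapses curves."""
    x = freq * shift
    return x**0.5 / (1.0 + x**0.5)      # toy power-law crossover shape

freqs = np.logspace(-1, 2, 30)

# "Measured" curve at another temperature/salt condition: true shift factor 10.
measured = modulus(freqs, 10.0) * (1 + 0.01 * rng.standard_normal(30))

# Grid-search the horizontal shift factor a that best collapses the curves.
candidates = np.logspace(-2, 2, 400)
errors = [np.sum((modulus(freqs, a) - measured)**2) for a in candidates]
a_best = candidates[np.argmin(errors)]
print(f"estimated shift factor a ~ {a_best:.2f} (true value 10)")
```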

  16. Towards quantum superposition of a levitated nanodiamond with a NV center

    Science.gov (United States)

    Li, Tongcang

    2015-05-01

    Creating large Schrödinger's cat states with massive objects is one of the most challenging goals in quantum mechanics. We have previously achieved an important step toward this goal by cooling the center-of-mass motion of a levitated microsphere from room temperature to millikelvin temperatures with feedback cooling. Generating spatial quantum superposition states with an optical cavity, however, requires a very strong quadratic coupling that is difficult to achieve. We proposed to optically trap a nanodiamond with a nitrogen-vacancy (NV) center in vacuum, and to generate large spatial superposition states using the NV spin-optomechanical coupling in a strong magnetic gradient field. The large spatial superposition states can be used to study objective collapse theories of quantum mechanics. We have optically trapped nanodiamonds in air and are working towards this goal.

  17. Use of the modal superposition technique for piping system blowdown analyses

    International Nuclear Information System (INIS)

    Ware, A.G.; Macek, R.W.

    1983-01-01

    A standard method of solving for the seismic response of piping systems is the modal superposition technique. Only a limited number of structural modes are considered (typically those up to 33 Hz in the U.S.), since the effect on the calculated response due to higher modes is generally small, and the method can result in considerable computer cost savings over the direct integration method. The modal superposition technique has also been applied to piping response problems in which the forcing functions are due to fluid excitation. Application of the technique to this case is somewhat more difficult, because a well defined cutoff frequency for determining structural modes to be included has not been established. This paper outlines a method for higher mode corrections, and suggests methods to determine suitable cutoff frequencies for piping system blowdown analyses. A numerical example illustrates how uncorrected modal superposition results can produce erroneous stress results
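    A compact sketch of the technique the record discusses: project the equations of motion onto a truncated set of mass-normalized modes, integrate each decoupled single-mode equation, and superpose. The 3-DOF chain model, the loading, and the integrator are made up for the example; the higher-mode correction mentioned in the abstract is indicated where it would enter.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 3-DOF spring-mass model (illustrative numbers).
M = np.diag([2.0, 1.5, 1.0])
K = np.array([[ 400., -200.,    0.],
              [-200.,  400., -200.],
              [   0., -200.,  200.]])

# Mass-normalized modes from the generalized eigenproblem K phi = w^2 M phi.
w2, Phi = eigh(K, M)
omega = np.sqrt(w2)

n_modes = 2                    # truncation: keep only the lowest modes
dt, steps = 1e-3, 5000
f = lambda t: np.array([0., 0., 100. * np.sin(25. * t)])   # toy blowdown-like forcing

q = np.zeros(n_modes)
qd = np.zeros(n_modes)
u_hist = []
for i in range(steps):
    t = i * dt
    p = Phi[:, :n_modes].T @ f(t)                 # modal forces
    qd += (p - omega[:n_modes]**2 * q) * dt       # decoupled modal equations
    q += qd * dt                                  # semi-implicit Euler (demo only)
    u = Phi[:, :n_modes] @ q                      # superpose the retained modes
    # A higher-mode (static) correction would add here the quasi-static
    # response of the omitted modes, as the record suggests.
    u_hist.append(u)
print("peak tip displacement:", max(abs(u[-1]) for u in u_hist))
```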

  18. Logarithmic superposition of force response with rapid length changes in relaxed porcine airway smooth muscle.

    Science.gov (United States)

    Ijpma, G; Al-Jumaily, A M; Cairns, S P; Sieck, G C

    2010-12-01

    We present a systematic quantitative analysis of power-law force relaxation and investigate logarithmic superposition of the force response in relaxed porcine airway smooth muscle (ASM) strips in vitro. The term logarithmic superposition describes linear superposition on a logarithmic scale, which is equivalent to multiplication on a linear scale. Additionally, we examine whether the dynamic response of contracted and relaxed muscles is dominated by cross-bridge cycling or passive dynamics. The study shows the following main findings. For relaxed ASM, the force response to length steps of varying amplitude (0.25-4% of reference length, both lengthening and shortening) is well fitted with power-law functions over several decades of time (10⁻² to 10³ s), and the force response after consecutive length changes is more accurately fitted assuming logarithmic superposition rather than linear superposition. Furthermore, for sinusoidal length oscillations in contracted and relaxed muscles, increasing the oscillation amplitude induces greater hysteresivity and asymmetry of force-length relationships, whereas increasing the frequency dampens hysteresivity but increases asymmetry. We conclude that logarithmic superposition is an important feature of relaxed ASM, which may facilitate a more accurate prediction of force responses in the continuous dynamic environment of the respiratory system. In addition, the single power-function response to length changes shows that the dynamics of cross-bridge cycling can be ignored in relaxed muscle. The similarity in response between relaxed and contracted states implies that the investigated passive dynamics play an important role in both states and should be taken into account.
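    The distinction the authors draw can be stated numerically: under logarithmic superposition the response to two consecutive steps is the product of the individual power-law responses (i.e., addition of their logarithms), not their sum. The sketch below contrasts the two predictions for a toy power-law response; the exponent and step timings are arbitrary illustrative choices, not fitted values from the study.

```python
import numpy as np

k = 0.12                                   # toy power-law exponent
response = lambda dt: dt**(-k)             # normalized force response to one step

t = np.logspace(np.log10(150), 3, 5)       # evaluation times after both steps (s)
t1, t2 = 0.0, 100.0                        # two consecutive length steps

linear = response(t - t1) + response(t - t2)        # classical linear superposition
logarithmic = response(t - t1) * response(t - t2)   # multiplication on a linear scale

for ti, li, gi in zip(t, linear, logarithmic):
    print(f"t = {ti:7.1f} s   linear = {li:.3f}   logarithmic = {gi:.3f}")
```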

  19. Are electrostatic potentials between regions of different chemical composition measurable? The Gibbs-Guggenheim Principle reconsidered, extended and its consequences revisited.

    Science.gov (United States)

    Pethica, Brian A

    2007-12-21

    As indicated by Gibbs and made explicit by Guggenheim, the electrical potential difference between two regions of different chemical composition cannot be measured. The Gibbs-Guggenheim Principle restricts the use of classical electrostatics in electrochemical theories as thermodynamically unsound, with some few approximate exceptions, notably for dilute electrolyte solutions and concomitant low potentials where the linear limit for the exponential of the relevant Boltzmann distribution applies. The Principle invalidates the widespread use of forms of the Poisson-Boltzmann equation which do not include the non-electrostatic components of the chemical potentials of the ions. From a thermodynamic analysis of the parallel plate electrical condenser, employing only measurable electrical quantities and taking into account the chemical potentials of the components of the dielectric and their adsorption at the surfaces of the condenser plates, an experimental procedure to provide exceptions to the Principle has been proposed. This procedure is now reconsidered and rejected. No other related experimental procedures circumvent the Principle. Widely-used theoretical descriptions of electrolyte solutions, charged surfaces and colloid dispersions which neglect the Principle are briefly discussed. MD methods avoid the limitations of the Poisson-Boltzmann equation. Theoretical models which include the non-electrostatic components of the inter-ion and ion-surface interactions in solutions and colloid systems assume the additivity of dispersion and electrostatic forces. An experimental procedure to test this assumption is identified from the thermodynamics of condensers at microscopic plate separations. The available experimental data from Kelvin probe studies are preliminary, but tend against additivity. A corollary to the Gibbs-Guggenheim Principle is enunciated, and the Principle is restated that for any charged species, neither the difference in electrostatic potential nor the

  20. On the superposition of bedforms in a tidal channel

    DEFF Research Database (Denmark)

    Winter, C; Vittori, G.; Ernstsen, V.B.

    2008-01-01

    High-resolution bathymetric measurements reveal the superimposition of bedforms in the Grådyb tidal inlet in the Danish Wadden Sea. Preliminary results of numerical model simulations are discussed: a linear stability model was tested to explain the large bedforms as being caused by the tidal system ...

  1. GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    Science.gov (United States)

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2018-01-01

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.
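    A stripped-down version of the optimization the record describes: an evolution strategy searching over rigid transformations (rotation plus translation) that superpose two point clouds. The population size, mutation scale, and scoring function are simplifications invented for the sketch; the label-matching step is elided by assuming corresponding rows, and it runs on CPU via NumPy rather than on a GPU.

```python
import numpy as np

rng = np.random.default_rng(2)

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Toy clouds: B is A rotated 0.7 rad about z and shifted.
A = rng.uniform(-1, 1, (30, 3))
B = A @ rot_z(0.7).T + np.array([0.5, -0.2, 0.1])

def score(params):
    """Negative mean distance between corresponding points after transforming A."""
    angle, t = params[0], params[1:]
    moved = A @ rot_z(angle).T + t
    return -np.linalg.norm(moved - B, axis=1).mean()

# (mu + lambda) evolution strategy over (angle, tx, ty, tz).
pop = rng.normal(0, 1, (64, 4))
for gen in range(200):
    children = pop + rng.normal(0, 0.05, pop.shape)    # Gaussian mutation
    both = np.vstack([pop, children])
    fitness = np.array([score(p) for p in both])
    pop = both[np.argsort(fitness)[-64:]]              # survivor selection
best = pop[-1]
print(f"recovered angle ~ {best[0]:.3f} rad (true 0.7), score {score(best):.4f}")
```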

  2. Superpositions of higher-order bessel beams and nondiffracting speckle fields

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2009-08-01

    The paper reports on illuminating a ring slit aperture with light which has an azimuthal phase dependence, such that the field produced is a superposition of two higher-order Bessel beams. In the case that the phase dependence of the light...

  3. Superposition of Planckian spectra and the distortions of the cosmic microwave background radiation

    International Nuclear Information System (INIS)

    Alexanian, M.

    1982-01-01

    A fit of the spectrum of the cosmic microwave background radiation (CMB) by means of a positive linear superposition of Planckian spectra implies an upper bound to the photon spectrum. The observed spectrum of the CMB gives a weighting function with a normalization greater than unity

  4. Teleportation of a Superposition of Three Orthogonal States of an Atom via Photon Interference

    Institute of Scientific and Technical Information of China (English)

    ZHENG Shi-Biao

    2006-01-01

    We propose a scheme to teleport a superposition of three states of an atom trapped in a cavity to a second atom trapped in a remote cavity. The scheme is based on the detection of photons leaking from the cavities after the atom-cavity interaction.

  5. On a computational method for modelling complex ecosystems by superposition procedure

    International Nuclear Information System (INIS)

    He Shanyu.

    1986-12-01

    In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and observed data can be utilized to the maximal degree. (author). 1 ref

  6. Using Musical Intervals to Demonstrate Superposition of Waves and Fourier Analysis

    Science.gov (United States)

    LoPresto, Michael C.

    2013-01-01

    What follows is a description of a demonstration of superposition of waves and Fourier analysis using a set of four tuning forks mounted on resonance boxes and oscilloscope software to create, capture and analyze the waveforms and Fourier spectra of musical intervals.
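    The demonstration translates directly into a few lines of numerics: superpose two tones forming a musical interval (here a perfect fifth, 3:2; the specific frequencies are illustrative) and recover the components with a Fourier transform.

```python
import numpy as np

fs, dur = 44100, 1.0
t = np.arange(int(fs * dur)) / fs
f1 = 261.63                          # C4
f2 = f1 * 3 / 2                      # G4, a perfect fifth above (3:2 ratio)

wave = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)   # superposition of the two tones

spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(len(wave), 1/fs)
peaks = freqs[np.argsort(spectrum)[-2:]]             # two strongest spectral lines
print(f"recovered components: {sorted(peaks.round(2))} Hz")
```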

  7. Superpositions of higher-order bessel beams and nondiffracting speckle fields - (SAIP 2009)

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2009-07-01

    The paper reports on illuminating a ring slit aperture with light which has an azimuthal phase dependence, such that the field produced is a superposition of two higher-order Bessel beams. In the case that the phase dependence of the light...

  8. Measuring coherence with entanglement concurrence

    Science.gov (United States)

    Qi, Xianfei; Gao, Ting; Yan, Fengli

    2017-07-01

    Quantum coherence is a fundamental manifestation of the quantum superposition principle. Recently, Baumgratz et al (2014 Phys. Rev. Lett. 113 140401) presented a rigorous framework to quantify coherence from the viewpoint of the theory of physical resources. Here we propose a new valid quantum coherence measure, which is a convex roof measure, for a quantum system of arbitrary dimension, essentially using the generalized Gell-Mann matrices. A rigorous proof shows that the proposed coherence measure, coherence concurrence, fulfills all the requirements dictated by the resource theory of quantum coherence measures. Moreover, strong links between the resource frameworks of coherence concurrence and entanglement concurrence are derived, showing that any degree of coherence with respect to some reference basis can be converted to entanglement via incoherent operations. Our work provides a clear quantitative and operational connection between coherence and entanglement based on two kinds of concurrence. This new coherence measure, coherence concurrence, may also be beneficial to the study of quantum coherence.

  9. Noise-based logic hyperspace with the superposition of 2^N states in a single wire

    International Nuclear Information System (INIS)

    Kish, Laszlo B.; Khatri, Sunil; Sethuraman, Swaminathan

    2009-01-01

    In the introductory paper [L.B. Kish, Phys. Lett. A 373 (2009) 911] about noise-based logic, we showed how simple superpositions of single logic basis vectors can be achieved in a single wire. The superposition components were the N orthogonal logic basis vectors. Supposing that the different logic values have 'on/off' states only, the resultant discrete superposition state represents a single number with N-bit accuracy in a single wire, where N is the number of orthogonal logic vectors in the base. In the present Letter, we show that the logic hyperspace (product) vectors defined in the introductory paper can be generalized to provide the discrete superposition of 2^N orthogonal system states. This is equivalent to a multi-valued logic system with 2^(2^N) logic values per wire. This is a similar situation to quantum informatics with N qubits, and hence we introduce the notion of the noise-bit. This system has major differences compared to quantum informatics. The noise-based logic system is deterministic and each superposition element is instantly accessible with high digital accuracy, via real hardware parallelism, without decoherence and error correction, and without the requirement of repeating the logic operation many times to extract the probabilistic information. Moreover, the states in noise-based logic do not have to be normalized, and non-unitary operations can also be used. As an example, we introduce a string search algorithm which is O(√M) times faster than Grover's quantum algorithm (where M is the number of string entries), while it has the same hardware complexity class as the quantum algorithm.
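    A minimal numerical sketch of the core idea, under simplifying assumptions: logic basis values are carried by (approximately) orthogonal random noise sequences, a superposition is their sum on a single "wire", and the presence of each component is read out deterministically by correlation. The carrier construction and detection threshold are invented for this illustration and are far simpler than the hyperspace product vectors of the actual papers.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 8, 200_000                        # number of basis values, samples per carrier

# Independent random +/-1 sequences are nearly orthogonal for large T.
carriers = rng.choice([-1.0, 1.0], size=(N, T))

included = [0, 3, 5]                     # basis vectors present in the superposition
wire = carriers[included].sum(axis=0)    # single-wire superposition signal

# Read out each component by correlating the wire with the reference carriers.
corr = carriers @ wire / T               # ~1 for included carriers, ~0 otherwise
detected = np.where(corr > 0.5)[0]
print("detected superposition components:", detected.tolist())
```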

  10. Noise-based logic hyperspace with the superposition of 2^N states in a single wire

    Science.gov (United States)

    Kish, Laszlo B.; Khatri, Sunil; Sethuraman, Swaminathan

    2009-05-01

    In the introductory paper [L.B. Kish, Phys. Lett. A 373 (2009) 911], about noise-based logic, we showed how simple superpositions of single logic basis vectors can be achieved in a single wire. The superposition components were the N orthogonal logic basis vectors. Supposing that the different logic values have “on/off” states only, the resultant discrete superposition state represents a single number with N-bit accuracy in a single wire, where N is the number of orthogonal logic vectors in the base. In the present Letter, we show that the logic hyperspace (product) vectors defined in the introductory paper can be generalized to provide the discrete superposition of 2^N orthogonal system states. This is equivalent to a multi-valued logic system with 2^(2^N) logic values per wire. This is a similar situation to quantum informatics with N qubits, and hence we introduce the notion of the noise-bit. This system has major differences compared to quantum informatics. The noise-based logic system is deterministic and each superposition element is instantly accessible with high digital accuracy, via real hardware parallelism, without decoherence and error correction, and without the requirement of repeating the logic operation many times to extract the probabilistic information. Moreover, the states in noise-based logic do not have to be normalized, and non-unitary operations can also be used. As an example, we introduce a string search algorithm which is O(√M) times faster than Grover's quantum algorithm (where M is the number of string entries), while it has the same hardware complexity class as the quantum algorithm.

  11. Estimating Concentrations of Road-Salt Constituents in Highway-Runoff from Measurements of Specific Conductance

    Science.gov (United States)

    Granato, Gregory E.; Smith, Kirk P.

    1999-01-01

    Discrete or composite samples of highway runoff may not adequately represent in-storm water-quality fluctuations because continuous records of water stage, specific conductance, pH, and temperature of the runoff indicate that these properties fluctuate substantially during a storm. Continuous records of water-quality properties can be used to maximize the information obtained about the stormwater runoff system being studied and can provide the context needed to interpret analyses of water samples. Concentrations of the road-salt constituents calcium, sodium, and chloride in highway runoff were estimated from theoretical and empirical relations between specific conductance and the concentrations of these ions. These relations were examined using the analysis of 233 highway-runoff samples collected from August 1988 through March 1995 at four highway-drainage monitoring stations along State Route 25 in southeastern Massachusetts. Theoretically, the specific conductance of a water sample is the sum of the individual conductances attributed to each ionic species in solution (the product of the concentration of each ion in milliequivalents per liter (meq/L) multiplied by the equivalent ionic conductance at infinite dilution), thereby establishing the principle of superposition. Superposition provides an estimate of actual specific conductance that is within measurement error throughout the conductance range of many natural waters, with errors of less than ±5 percent below 1,000 microsiemens per centimeter (µS/cm) and ±10 percent between 1,000 and 4,000 µS/cm if all major ionic constituents are accounted for. A semi-empirical method (adjusted superposition) was used to adjust for concentration effects (superposition-method prediction errors at high and low concentrations) and to relate measured specific conductance to that calculated using superposition. The adjusted superposition method, which was developed to interpret the State Route 25 highway-runoff records, accounts for
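    The superposition principle described here is a one-line computation: specific conductance is estimated as the sum over ions of concentration (meq/L) times the equivalent ionic conductance at infinite dilution. The Λ⁰ values below are standard handbook numbers near 25 °C quoted from memory for illustration, and the sample concentrations are hypothetical; verify both against a reference before any real use.

```python
# Equivalent ionic conductances at infinite dilution, S cm^2/equivalent (~25 C).
LAMBDA0 = {"Na+": 50.1, "Ca2+": 59.5, "Cl-": 76.3}

def specific_conductance(meq_per_L):
    """Superposition estimate of specific conductance in uS/cm."""
    return sum(LAMBDA0[ion] * c for ion, c in meq_per_L.items())

# Example: road-salt-dominated runoff sample (hypothetical concentrations).
sample = {"Na+": 4.0, "Cl-": 4.5, "Ca2+": 1.0}   # meq/L
print(f"estimated specific conductance ~ {specific_conductance(sample):.0f} uS/cm")
```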

  12. Quantum teleportation of an arbitrary superposition of atomic states

    Institute of Scientific and Technical Information of China (English)

    Chen Qiong; Fang Xi-Ming

    2008-01-01

    This paper proposes a scheme to teleport an arbitrary multi-particle two-level atomic state between two parties, or an arbitrary zero- and one-photon entangled state of multi-mode between two high-Q cavities in cavity QED. This scheme is based on the resonant interaction between atom and cavity and does not involve Bell-state measurement. We investigate the fidelity of the scheme and identify the case in which the teleportation fidelity reaches unity. Considering the practical case of cavity decay, we find that the unity-fidelity condition remains valid and we obtain the effect of cavity decay on the success probability of the teleportation.

  13. Parameter-free resolution of the superposition of stochastic signals

    Energy Technology Data Exchange (ETDEWEB)

    Scholz, Teresa, E-mail: tascholz@fc.ul.pt [Center for Theoretical and Computational Physics, University of Lisbon (Portugal); Raischel, Frank [Center for Geophysics, IDL, University of Lisbon (Portugal); Closer Consulting, Av. Eng. Duarte Pacheco Torre 1 15º, 1070-101 Lisboa (Portugal); Lopes, Vitor V. [DEIO-CIO, University of Lisbon (Portugal); UTEC–Universidad de Ingeniería y Tecnología, Lima (Peru); Lehle, Bernd; Wächter, Matthias; Peinke, Joachim [Institute of Physics and ForWind, Carl-von-Ossietzky University of Oldenburg, Oldenburg (Germany); Lind, Pedro G. [Institute of Physics and ForWind, Carl-von-Ossietzky University of Oldenburg, Oldenburg (Germany); Institute of Physics, University of Osnabrück, Osnabrück (Germany)

    2017-01-30

    This paper presents a direct method to obtain the deterministic and stochastic contributions of the sum of two independent stochastic processes, one of which is an Ornstein-Uhlenbeck process and the other a general (non-linear) Langevin process. The method is able to distinguish between the stochastic processes, retrieving their corresponding stochastic evolution equations. This framework is based on a recent approach for the analysis of multidimensional Langevin-type stochastic processes in the presence of strong measurement (or observational) noise, which is here extended to impose neither constraints nor parameters and to extract all coefficients directly from the empirical data sets. Using synthetic data, it is shown that the method yields satisfactory results.
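    As a flavor of the kind of analysis involved, the sketch below simulates the sum of an Ornstein-Uhlenbeck process and a nonlinear Langevin process, then estimates the drift of the summed signal from the first conditional moment (a Kramers-Moyal estimate). The parameter-free separation of the two components performed in the paper is substantially more involved; all parameters and the binning below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, n = 1e-3, 200_000

# Ornstein-Uhlenbeck component: dy = -gamma*y dt + sigma dW
# Nonlinear Langevin component: dx = (x - x^3) dt + g dW
y = np.zeros(n)
x = np.zeros(n)
for i in range(1, n):
    y[i] = y[i-1] - 2.0*y[i-1]*dt + 0.5*np.sqrt(dt)*rng.standard_normal()
    x[i] = x[i-1] + (x[i-1] - x[i-1]**3)*dt + 0.3*np.sqrt(dt)*rng.standard_normal()
s = x + y                                   # observed superposed signal

# Kramers-Moyal drift estimate of the summed process: D1(s) = <ds | s> / dt.
bins = np.linspace(-1.5, 1.5, 25)
idx = np.digitize(s[:-1], bins)
ds = np.diff(s)
for b in range(5, 20, 5):
    mask = idx == b
    if mask.any():
        print(f"s ~ {bins[b]:+.2f}: estimated drift {ds[mask].mean()/dt:+.3f}")
```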

  14. Availability of Care Concordant With Patient-centered Medical Home Principles Among Those With Chronic Conditions: Measuring Care Outcomes.

    Science.gov (United States)

    Pourat, Nadereh; Charles, Shana A; Snyder, Sophie

    2016-03-01

    Care delivery redesign in the form of the patient-centered medical home (PCMH) is considered a potential solution to improve patient outcomes and reduce costs, particularly for patients with chronic conditions. But studies of prevalence or impact at the population level are rare. We aimed to assess whether desired outcomes indicating better care delivery and patient-centeredness were associated with receipt of care according to 3 important PCMH principles. We analyzed data from a representative population survey in California in 2009, focusing on a population with chronic conditions who had a usual source of care. We used bivariate, logistic, and negative-binomial regressions. The indicators of PCMH-concordant care included continuity of care (personal doctor), care coordination, and care management (individual treatment plan). Outcomes included flu shots, count of outpatient visits, any emergency department visit, timely provider communication, and confidence in self-care. We found that patients whose care was concordant with all 3 PCMH principles were more likely to receive flu shots, more outpatient care, and timely response from providers. Concordance with 2 principles led to some desired outcomes. Concordance with only 1 principle was not associated with desired outcomes. Patients who received care that met 3 key aspects of PCMH (coordination, continuity, and management) had better quality of care and more efficient use of the health care system.

  15. Nonclassical thermal-state superpositions: Analytical evolution law and decoherence behavior

    Science.gov (United States)

    Meng, Xiang-guo; Goan, Hsi-Sheng; Wang, Ji-suo; Zhang, Ran

    2018-03-01

    Employing the integration technique within normal products of bosonic operators, we present normal product representations of thermal-state superpositions and investigate their nonclassical features, such as quadrature squeezing, sub-Poissonian distribution, and partial negativity of the Wigner function. We also analytically and numerically investigate their evolution law and decoherence characteristics in an amplitude-decay model via the variations of the probability distributions and the negative volumes of the Wigner functions in phase space. The results indicate that the evolution formulas of the two thermal component states under amplitude decay can be viewed as having the same integral form as a displaced thermal state ρ(V, d), but governed by the combined action of photon loss and thermal noise. In addition, larger values of the displacement d and noise V lead to faster decoherence of the thermal-state superpositions.

  16. A numerical dressing method for the nonlinear superposition of solutions of the KdV equation

    International Nuclear Information System (INIS)

    Trogdon, Thomas; Deconinck, Bernard

    2014-01-01

    In this paper we present the unification of two existing numerical methods for the construction of solutions of the Korteweg–de Vries (KdV) equation. The first method is used to solve the Cauchy initial-value problem on the line for rapidly decaying initial data. The second method is used to compute finite-genus solutions of the KdV equation. The combination of these numerical methods allows for the computation of exact solutions that are asymptotically (quasi-)periodic finite-gap solutions and are a nonlinear superposition of dispersive, soliton and (quasi-)periodic solutions in the finite (x, t)-plane. Such solutions are referred to as superposition solutions. We compute these solutions accurately for all values of x and t. (paper)

  17. Optical threshold secret sharing scheme based on basic vector operations and coherence superposition

    Science.gov (United States)

    Deng, Xiaopeng; Wen, Wei; Mi, Xianwu; Long, Xuewen

    2015-04-01

    We propose, to our knowledge for the first time, a simple optical algorithm for secret image sharing with the (2,n) threshold scheme based on basic vector operations and coherence superposition. The secret image to be shared is firstly divided into n shadow images by use of basic vector operations. In the reconstruction stage, the secret image can be retrieved by recording the intensity of the coherence superposition of any two shadow images. Compared with the published encryption techniques which focus narrowly on information encryption, the proposed method can realize information encryption as well as secret sharing, which further ensures the safety and integrity of the secret information and prevents power from being kept centralized and abused. The feasibility and effectiveness of the proposed method are demonstrated by numerical results.

  18. Analysis of magnetic damping problem by the coupled mode superposition method

    International Nuclear Information System (INIS)

    Horie, Tomoyoshi; Niho, Tomoya

    1997-01-01

    In this paper we describe the coupled mode superposition method for the magnetic damping problem, which arises from the coupling between the deformation and the induced eddy currents in structures for future fusion reactors and magnetically levitated vehicles. The formulation of the coupled mode superposition method is based on the matrix equation for the eddy current and the structure using the coupled mode vectors. A symmetric form of the coupled matrix equation is obtained. Coupled problems of a thin plate are solved to verify the formulation and the computer code. These problems are solved efficiently by this method using only a few coupled modes. Consideration of the coupled mode vectors shows that the coupled effects are included completely in each coupled mode. (author)

  19. Exponential Communication Complexity Advantage from Quantum Superposition of the Direction of Communication

    Science.gov (United States)

    Guérin, Philippe Allard; Feix, Adrien; Araújo, Mateus; Brukner, Časlav

    2016-09-01

    In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability.

  20. Optical information encryption based on incoherent superposition with the help of the QR code

    Science.gov (United States)

    Qin, Yi; Gong, Qiong

    2014-01-01

    In this paper, a novel optical information encryption approach is proposed with the help of the QR code. This method is based on the concept of incoherent superposition, which we introduce for the first time. The information to be encrypted is first transformed into the corresponding QR code, and thereafter the QR code is further encrypted analytically into two phase-only masks by use of the intensity superposition of two diffraction wave fields. The proposed method has several advantages over the previous interference-based method, such as a higher security level, a better robustness against noise attack, a more relaxed work condition, and so on. Numerical simulation results and actual smartphone-collected results are shown to validate our proposal.

  1. On Multiple Users Scheduling Using Superposition Coding over Rayleigh Fading Channels

    KAUST Repository

    Zafar, Ammar

    2013-02-20

    In this letter, numerical results are provided to analyze the gains of multiple-user scheduling via superposition coding with successive interference cancellation, in comparison with conventional single-user scheduling, in Rayleigh block-fading broadcast channels. The information-theoretic optimal power, rate and decoding order allocation for the superposition coding scheme are considered, and the corresponding histogram for the optimal number of scheduled users is evaluated. Results show that at optimality there is a high probability that only two or three users are scheduled per channel transmission block. Numerical results for the gains of multiple-user scheduling in terms of the long-term throughput under hard and proportional fairness, as well as for fixed merit weights for the users, are also provided. These results show that the performance gain of multiple-user scheduling over single-user scheduling increases when the total number of users in the network increases, and it can exceed 10% for a large number of users
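    For concreteness, the sketch below computes the achievable rates of two-user superposition coding with successive interference cancellation on a scalar broadcast channel: the strong user decodes and cancels the weak user's signal before decoding its own, while the weak user treats the other signal as noise. The power split and channel gains are illustrative numbers, not values from the letter.

```python
import numpy as np

P, sigma2 = 1.0, 1.0          # total transmit power and noise power (illustrative)
g1, g2 = 4.0, 0.5             # channel power gains: user 1 strong, user 2 weak
alpha = 0.2                   # fraction of power allocated to the strong user

# Weak user decodes its own signal, treating the strong user's as interference.
R2 = np.log2(1 + (1 - alpha) * P * g2 / (alpha * P * g2 + sigma2))

# Strong user first decodes and cancels user 2's signal (SIC),
# then decodes its own signal interference-free.
R1 = np.log2(1 + alpha * P * g1 / sigma2)

print(f"superposition coding rates: R1 = {R1:.3f}, R2 = {R2:.3f} bit/s/Hz")
```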

  2. Babinet's principle in double-refraction systems

    Science.gov (United States)

    Ropars, Guy; Le Floch, Albert

    2014-06-01

    Babinet's principle applied to systems with double refraction is shown to involve spatial interchanges between the ordinary and extraordinary patterns observed through two complementary screens. As in the case of metamaterials, the extraordinary beam does not follow the Snell-Descartes refraction law, and the superposition principle has to be applied simultaneously at two points. Surprisingly, in contrast to the intuitive impression, in the presence of the screen with an opaque region we observe that the emerging extraordinary photon pattern, which has nevertheless undergone a deviation, remains fixed when a natural birefringent crystal is rotated, while the ordinary one rotates with the crystal. The twofold application of Babinet's principle implies intensity and polarization interchanges, but also spatial and dynamic interchanges which should occur in birefringent metamaterials.

  3. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters.

    Science.gov (United States)

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-21

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for each of the electron, positron and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking into account tissue density heterogeneities. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and (18)F, (99m)Tc, (131)I and (177)Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, corresponding to the comparison of 80 absorbed dose (AD) points per phantom between Monte Carlo simulations and the CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. Results with SCC were still good but slightly worse: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for a value of 57% for (99m)Tc with the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computation complexity.

  4. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters

    Science.gov (United States)

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-01

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for each electron, positron and photon component, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking into account tissue density heterogeneities. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and 18F, 99mTc, 131I and 177Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, corresponding to the comparison of 80 absorbed dose (AD) points per phantom between Monte Carlo simulations and the CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. Results with SCC were still good but slightly worse: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for a value of 57% for 99mTc with the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computation complexity.
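
    The γ criterion used as the figure of merit in the two records above combines a dose-difference and a distance-to-agreement tolerance. A minimal one-dimensional sketch of the usual global gamma evaluation (the toy profiles are illustrative, not the papers' phantom data):

      import numpy as np

      def gamma_index(dose_ref, dose_eval, x, dose_tol=0.05, dist_tol=5.0):
          """Global 1D gamma (e.g. 5%, 5 mm) of an evaluated profile vs. a reference.

          dose_ref, dose_eval: dose profiles on the common grid x (mm).
          Returns the gamma value at each reference point; gamma <= 1 is a pass.
          """
          d_norm = dose_tol * dose_ref.max()      # global dose criterion
          gammas = np.empty_like(dose_ref)
          for i, (xi, di) in enumerate(zip(x, dose_ref)):
              dist2 = ((x - xi) / dist_tol) ** 2
              dd2 = ((dose_eval - di) / d_norm) ** 2
              gammas[i] = np.sqrt(np.min(dist2 + dd2))
          return gammas

      # Example: fraction of points passing the (5%, 5 mm) criterion
      x = np.linspace(0, 100, 201)                # mm
      ref = np.exp(-x / 50.0)                     # toy depth-dose curve
      ev = ref * (1 + 0.02 * np.sin(x / 7.0))     # slightly perturbed calculation
      g = gamma_index(ref, ev, x)
      print("pass rate:", np.mean(g <= 1.0))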

  5. Seismic analysis of structures of nuclear power plants by Lanczos mode superposition method

    International Nuclear Information System (INIS)

    Coutinho, A.L.G.A.; Alves, J.L.D.; Landau, L.; Lima, E.C.P. de; Ebecken, N.F.F.

    1986-01-01

    The Lanczos mode superposition method is applied to the seismic analysis of nuclear power plants. The coordinate transformation matrix is generated by the Lanczos algorithm. It is shown that, through a convenient choice of the starting vector of the algorithm, modes with significant participation factors are automatically selected. A response spectrum analysis of a typical reactor building is performed. The results obtained are compared with those determined by the classical approach, stressing the remarkable computational effectiveness of the proposed methodology. (Author) [pt
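
    The record's approach rests on extracting a few low modes with the Lanczos algorithm and superposing them. A rough sketch of the idea using SciPy's Lanczos-based eigsh on a toy shear-building model (the model, sizes and uniform influence vector are assumptions, and eigsh stands in for the paper's starting-vector selection):

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import eigsh

      # Toy shear-building model: tridiagonal stiffness, lumped (diagonal) mass
      n = 200
      k, m = 1e6, 1e3
      K = diags([-k * np.ones(n - 1), 2 * k * np.ones(n), -k * np.ones(n - 1)],
                offsets=[-1, 0, 1], format="csc")
      M = diags(m * np.ones(n), format="csc")

      # Lanczos iteration (eigsh) extracts the lowest modes of K phi = w^2 M phi
      w2, phi = eigsh(K, k=10, M=M, sigma=0.0)    # shift-invert about 0
      freqs_hz = np.sqrt(w2) / (2 * np.pi)

      # Participation factors for a uniform ground-motion influence vector
      r = np.ones(n)
      gamma = phi.T @ (M @ r)                     # modes with large |gamma| dominate
      print(freqs_hz[:3], gamma[:3])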

  6. Joint formation of dissimilar steels in pressure welding with superposition of ultrasonic oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Surovtsev, A P; Golovanenko, S A; Sukhanov, V E; Kazantsev, V F

    1983-12-01

    Results are given for an investigation of the kinetics and quality of joints between carbon steel and steel 12Kh18N10T obtained by pressure welding with superposition of ultrasonic oscillations at a frequency of 16.5-18.0 kHz. The effect of the ultrasonic oscillations on the development of physical contact between the welded surfaces, the formation of the microstructure, and the impact toughness of the joint is shown.

  7. Sagnac interferometry with coherent vortex superposition states in exciton-polariton condensates

    Science.gov (United States)

    Moxley, Frederick Ira; Dowling, Jonathan P.; Dai, Weizhong; Byrnes, Tim

    2016-05-01

    We investigate prospects of using counter-rotating vortex superposition states in nonequilibrium exciton-polariton Bose-Einstein condensates for the purposes of Sagnac interferometry. We first investigate the stability of vortex-antivortex superposition states, and show that they survive at steady state in a variety of configurations. Counter-rotating vortex superpositions are of potential interest to gyroscope and seismometer applications for detecting rotations. Methods of improving the sensitivity are investigated by targeting high momentum states via metastable condensation, and the application of periodic lattices. The sensitivity of the polariton gyroscope is compared to its optical and atomic counterparts. Due to the large interferometer areas in optical systems and small de Broglie wavelengths for atomic BECs, the sensitivity per detected photon is found to be considerably less for the polariton gyroscope than with competing methods. However, polariton gyroscopes have an advantage over atomic BECs in a high signal-to-noise ratio, and have other practical advantages such as room-temperature operation, area independence, and robust design. We estimate that the final sensitivities including signal-to-noise aspects are competitive with existing methods.
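
    The sensitivity comparison above can be read off the textbook Sagnac phase shifts (standard expressions, not taken from the paper), with A the enclosed area, Ω the rotation rate, λ_dB the de Broglie wavelength and v the particle velocity:

      \Delta\phi_{\mathrm{light}} = \frac{8\pi A\,\Omega}{\lambda c},
      \qquad
      \Delta\phi_{\mathrm{matter}} = \frac{8\pi A\,\Omega}{\lambda_{\mathrm{dB}}\, v}
                                   = \frac{4 m A\,\Omega}{\hbar}

    The mass factor favours matter waves per unit area, while optical systems compensate with much larger enclosed areas; this is the trade-off behind the per-photon sensitivity comparison in the record.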

  8. Nuclear grade cable thermal life model by time temperature superposition algorithm based on Matlab GUI

    International Nuclear Information System (INIS)

    Lu Yanyun; Gu Shenjie; Lou Tianyang

    2014-01-01

    Background: As nuclear grade cable must endure a harsh environment within its design life, it is critical to predict cable thermal life accurately, since thermal aging is one of the dominant factors in the aging mechanism. Purpose: Using the time temperature superposition (TTS) method, the aim is to construct a nuclear grade cable thermal life model, predict cable residual life and develop an interactive life-model interface under the Matlab GUI. Methods: According to TTS, the nuclear grade cable thermal life model can be constructed by shifting data groups at various temperatures to a preset reference temperature with a translation factor determined by nonlinear programming optimization. The interactive interface of the cable thermal life model developed under the Matlab GUI consists of a superposition mode and a standard mode, with features such as optimization of the translation factor, calculation of activation energy, construction of the thermal aging curve and analysis of the aging mechanism. Results: Comparison of the calculation results shows that TTS gives better accuracy than the standard method. Furthermore, the confidence level of the nuclear grade cable thermal life predicted with TTS is higher than that obtained with the standard method. Conclusion: The results show that the TTS methodology is applicable to thermal life prediction of nuclear grade cable. The interactive interface under the Matlab GUI achieves the anticipated functionality. (authors)
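
    As a sketch of the time temperature superposition idea used here (Arrhenius shift factors with an assumed activation energy and toy aging data; the paper instead obtains the translation factor by nonlinear programming optimization):

      import numpy as np

      R = 8.314          # J/(mol K)

      def arrhenius_shift(T, T_ref, Ea):
          """Shift factor a_T: a time t at temperature T maps to t / a_T at T_ref."""
          return np.exp((Ea / R) * (1.0 / T - 1.0 / T_ref))

      # Toy aging data: property retention vs. time (h) at three temperatures (K)
      T_ref = 363.15
      Ea = 95e3                              # activation energy, assumed known here
      data = {
          393.15: (np.array([10, 30, 100]), np.array([0.95, 0.85, 0.70])),
          408.15: (np.array([3, 10, 30]),   np.array([0.94, 0.83, 0.68])),
          423.15: (np.array([1, 3, 10]),    np.array([0.93, 0.82, 0.66])),
      }

      # Shift every isotherm to the reference temperature -> one master curve
      for T, (t, y) in data.items():
          t_master = t / arrhenius_shift(T, T_ref, Ea)   # longer equivalent times
          print(T, np.round(t_master, 1), y)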

  9. Superposition of configurations in semiempirical calculation of iron group ion spectra

    International Nuclear Information System (INIS)

    Kantseryavichyus, A.Yu.; Ramonas, A.A.

    1976-01-01

    The energy spectra of iron-group ions in the d^N, d^N s and d^N p configurations are studied. A semiempirical method is used in which the effective Hamiltonian contains configuration superposition. The quasidegenerate configurations sd^(N+1) and p^4 d^(N+2), as well as configurations which differ by one electron, are taken as correction configurations. It follows from the calculations that the most important role among the quasidegenerate configurations is played by the sd^(N+1) correction configuration. When it is taken into account, the introduction of the p^4 d^(N+2) correction configuration has practically no effect on the results. Account of the d^(N-1)s configuration in the second order of perturbation theory is equivalent to that of sd^(N+1) in the sense that it results in an identical mean square deviation. As follows from the comparison of the approximate and complete accounts of the configuration superposition, in many cases one can be satisfied with the approximate version. The results are presented in the form of tables including the values of empirical parameters, radial integrals, mean square errors, etc

  10. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Wu, Vincent W.C.; Tse, Teddy K.H.; Ho, Cola L.M.; Yeung, Eric C.Y.

    2013-01-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) of the Eclipse treatment planning system and multigrid superposition (MGS) of the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6%±2.1) were significantly lower than those of AAA (3.7%±2.5) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both AAA and MGS demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
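
    The figure of merit quoted above is, in the usual definition consistent with "percentage dose deviations from MC",

      \mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}
      \left|\frac{D_i^{\mathrm{algo}} - D_i^{\mathrm{MC}}}{D_i^{\mathrm{MC}}}\right|

    where the sum runs over the n reference points and D denotes the dose computed by the model-based algorithm and by MC, respectively.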

  11. Partial Measurements and the Realization of Quantum-Mechanical Counterfactuals

    Science.gov (United States)

    Paraoanu, G. S.

    2011-07-01

    We propose partial measurements as a conceptual tool to understand how to operate with counterfactual claims in quantum physics. Indeed, unlike standard von Neumann measurements, partial measurements can be reversed probabilistically. We first analyze the consequences of this rather unusual feature for the principle of superposition, for the complementarity principle, and for the issue of hidden variables. Then we move on to exploring non-local contexts, by reformulating the EPR paradox, the quantum teleportation experiment, and the entanglement-swapping protocol for the situation in which one uses partial measurements followed by their stochastic reversal. This leads to a number of counter-intuitive results, which are shown to be resolved if we give up the idea of attributing reality to the wavefunction of a single quantum system.
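
    A minimal single-qubit sketch of a partial measurement and its probabilistic reversal (standard Kraus operators of strength p; the state and numbers are illustrative, not from the paper):

      import numpy as np

      p = 0.6                                  # partial-measurement strength
      psi = np.array([0.8, 0.6])               # arbitrary pure qubit state

      # Null-result partial measurement: M0 = diag(1, sqrt(1-p))
      M0 = np.diag([1.0, np.sqrt(1 - p)])
      prob0 = np.linalg.norm(M0 @ psi) ** 2
      psi_after = M0 @ psi / np.linalg.norm(M0 @ psi)

      # Probabilistic reversal: apply diag(sqrt(1-p), 1), again post-selected
      Rv = np.diag([np.sqrt(1 - p), 1.0])
      prob_rev = np.linalg.norm(Rv @ psi_after) ** 2
      psi_restored = Rv @ psi_after / np.linalg.norm(Rv @ psi_after)

      print("P(null result)    :", round(prob0, 3))
      print("P(reversal works) :", round(prob_rev, 3))
      print("restored == input :", np.allclose(psi_restored, psi))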

  12. Evaluation of collapsed cone convolution superposition (CCCS) algorithms in Prowess treatment planning system for calculating symmetric and asymmetric field sizes

    Directory of Open Access Journals (Sweden)

    Tamer Dawod

    2015-01-01

    Full Text Available Purpose: This work investigated the accuracy of the Prowess treatment planning system (TPS) in dose calculation in a homogeneous phantom for symmetric and asymmetric field sizes using the collapsed cone convolution/superposition (CCCS) algorithm. Methods: The measurements were carried out at a source-to-surface distance (SSD) of 100 cm for 6 and 10 MV photon beams. A full set of measurements for symmetric and asymmetric fields, including inplane and crossplane profiles at various depths and percentage depth doses (PDDs), was obtained on the linear accelerator. Results: The results showed that asymmetric collimation can lead to significant errors (up to approximately 7%) in dose calculation if the accompanying changes in primary beam intensity and beam quality are not accounted for. The largest differences in the isodose curves were found in the buildup and penumbra regions. Conclusion: The results showed that dose calculation using the Prowess TPS based on the CCCS algorithm is generally in excellent agreement with measurements.

  13. Pyridinium based ionic liquids. N-Butyl-3-methyl-pyridinium dicyanoamide: Thermochemical measurement and first-principles calculations

    International Nuclear Information System (INIS)

    Emel'yanenko, Vladimir N.; Verevkin, Sergey P.; Heintz, Andreas

    2011-01-01

    The standard molar enthalpy of formation Δ_f H_m°(l) of the ionic liquid N-butyl-3-methylpyridinium dicyanamide has been determined at 298.15 K by means of combustion calorimetry. Vaporization of the ionic liquid into the nitrogen stream in order to obtain vaporization enthalpy has been attempted, but no vaporization was achieved. First-principles calculations of the enthalpy of formation in the gaseous phase have been performed for the ionic species using the G3MP2 theory. The combination of traditional combustion calorimetry with modern high-level quantum-chemical calculations allows estimation of the molar enthalpy of vaporization of the ionic liquid under study.

  14. Primary standards for measuring flow rates from 100 nl/min to 1 ml/min - gravimetric principle.

    Science.gov (United States)

    Bissig, Hugo; Petter, Harm Tido; Lucas, Peter; Batista, Elsa; Filipe, Eduarda; Almeida, Nelson; Ribeiro, Luis Filipe; Gala, João; Martins, Rui; Savanier, Benoit; Ogheard, Florestan; Niemann, Anders Koustrup; Lötters, Joost; Sparreboom, Wouter

    2015-08-01

    Microflow and nanoflow rate calibrations are important in several applications such as liquid chromatography, (scaled-down) process technology, and special health-care applications. However, traceability in the microflow and nanoflow range does not go below 16 μl/min in Europe. Furthermore, the European metrology organization EURAMET has not yet validated this traceability by means of an intercomparison between different National Metrology Institutes (NMIs). The NMIs METAS, Centre Technique des Industries Aérauliques et Thermiques, IPQ, Danish Technological Institute, and VSL have therefore developed and validated primary standards to cover the flow rate range from 0.1 μl/min to at least 1 ml/min. In this article, we describe the different designs and methods of the primary standards based on the gravimetric principle, and the results obtained in the intercomparison of the upper flow rate range for the various NMIs and Bronkhorst High-Tech, the manufacturer of the transfer standards used.
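
    At its core, the gravimetric principle converts a weighed mass gain into a volumetric flow rate. A minimal sketch (the function, correction terms and numbers are illustrative only; real primary standards also correct for buoyancy, tubing effects and balance drift):

      # Gravimetric principle: flow rate from the mass accumulated on a balance.
      def flow_ul_per_min(mass_start_g, mass_end_g, minutes, rho_g_per_ml=0.99820,
                          evap_g_per_min=0.0):
          """Mean volumetric flow in microlitres/min over the weighing interval.

          rho_g_per_ml: water density at the measured temperature.
          evap_g_per_min: evaporation-loss correction determined beforehand.
          """
          dm = (mass_end_g - mass_start_g) + evap_g_per_min * minutes
          return dm / rho_g_per_ml / minutes * 1000.0   # ml -> microlitre

      # 30 min collection, 0.0153 g gained, small evaporation correction
      print(flow_ul_per_min(10.0000, 10.0153, 30.0, evap_g_per_min=2e-6))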

  15. Transient change in the shape of premixed burner flame with the superposition of pulsed dielectric barrier discharge

    OpenAIRE

    Zaima, Kazunori; Sasaki, Koichi

    2016-01-01

    We investigated the transient phenomena in a premixed burner flame with the superposition of a pulsed dielectric barrier discharge (DBD). The length of the flame was shortened by the superposition of DBD, indicating the activation of combustion chemical reactions with the help of the plasma. In addition, we observed the modulation of the top position of the unburned gas region and the formation of local minima in the axial distribution of the optical emission intensity of OH. These experim...

  16. The equivalence principle

    International Nuclear Information System (INIS)

    Smorodinskij, Ya.A.

    1980-01-01

    The prerelativistic history of the equivalence principle (EP) is presented briefly, and its role in the discovery of the general relativity theory (GRT) is elucidated. The modern view is that the ratio of inertial and gravitational masses does not differ from 1 to within at least 12 decimal places. Attention is paid to the difference between the gravitational field and the electromagnetic one: the energy of the gravitational field distributed in space is itself a source of the field, so gravitational fields always interact under superposition, whereas electromagnetic fields from different sources simply add. On the basis of the EP it is established that the Sun's field interacts with the Earth's gravitational energy in the same way as with any other energy, which proves that the gravitational field itself gravitates toward a heavy body. The problem of gyroscope motion in the Earth's gravitational field is presented as a paradox. The calculation shows that a gyroscope on a satellite undergoes a positive precession: its axis turns through an angle α during one revolution of the satellite around the Earth and, because of space curvature, through an additional angle twice as large, for a resulting turn of 3α. It is shown on the basis of the EP that the polarization plane does not turn, in any coordinate system, when a ray of light passes through the gravitational field. Along with the historical value of the EP, the necessity of taking its requirements into account in describing the physical world is noted

  17. Measuring the orbital angular momentum density for a superposition of Bessel beams

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2012-01-01

    Full Text Available and amplitude gratings,? Opt. Commun. 284(1), 48?51 (2011). [8] Mair, A., Vaziri, A., Weihs, G., and Zeilinger, A., ?Entanglement of the orbital angular momentum states of photons,? Nature 412(6844), 313?316 (2001). [9] Gibson, G., Courtial, J., Padgett, M...

  18. Fundamental principles of a new EM tool for in-situ resistivity measurement. 2; Denji yudoho ni yoru gen`ichi hiteiko sokutei sochi no kento. 2

    Energy Technology Data Exchange (ETDEWEB)

    Noguchi, K; Aoki, H [Waseda University, Tokyo (Japan). School of Science and Engineering; Saito, A [Mitsui Mineral Development Engineering Co. Ltd., Tokyo (Japan)

    1997-10-22

    In-situ resistivity measuring devices are tested for performance in relation to the principle of focusing. Numerical calculation shows that in the absence of focusing the primary magnetic field prevails, and that changes in the separate-mode component are difficult to detect in actual measurement because the in-phase component assumes a value far larger than the out-of-phase component. Concerning the transmission loop radius, the study reveals that a larger radius yields a stronger response and removes the influence of near-surface layers. Two types of devices are constructed, one applying the principle of focusing and the other not, and both are used to measure the response from a saline solution medium. The results are compared, and it is found that focusing eliminates the influence of the primary magnetic field and enables the measurement of changes in the resistivity of the medium which cannot be detected in the absence of focusing. 3 refs., 9 figs.

  19. Fundamental Principle for Quantum Theory

    OpenAIRE

    Khrennikov, Andrei

    2002-01-01

    We propose a principle, the law of statistical balance for basic physical observables, which singles out quantum statistical theory among all other statistical theories of measurement. It seems that this principle might play a role in quantum theory similar to that played by Einstein's relativity principle.

  20. Study on principle and method of measuring system for external dimensions, geometric density and appearance quality of uranium dioxide pellet

    International Nuclear Information System (INIS)

    Cao Wei; Deng Hua; Wang Tao

    2010-01-01

    To meet the needs of nuclear power development and keep pace with steadily growing nuclear fuel element production, a special measuring system for the integrated measurement, calculation and data processing of the external dimensions, geometric tolerances, geometric density and appearance quality of uranium dioxide pellets is studied and discussed. This system provides important guidance for improving the technological and tooling level. The measuring system is primarily applied to sampling tests during production and is applicable to several types of products. The successful application of this measuring method ensures the accuracy and reliability of the measured data, reduces human error and makes measurement more convenient and fast, thus achieving high precision and high efficiency in the measuring process. The measuring method approaches the advanced international level in the industry. So, based on product inspection requirements, the use of special measuring instruments together with a computer data processing system is an important approach for the present and the future. (authors)

  1. Variational principles

    CERN Document Server

    Moiseiwitsch, B L

    2004-01-01

    This graduate-level text's primary objective is to demonstrate the expression of the equations of the various branches of mathematical physics in the succinct and elegant form of variational principles (and thereby illuminate their interrelationship). Its related intentions are to show how variational principles may be employed to determine the discrete eigenvalues for stationary state problems and to illustrate how to find the values of quantities (such as the phase shifts) that arise in the theory of scattering. Chapter-by-chapter treatment consists of analytical dynamics; optics, wave mechanics...

  2. Imidazolium based ionic liquids. 1-Ethanol-3-methyl-imidazolium dicyanoamide: Thermochemical measurement and first-principles calculations

    International Nuclear Information System (INIS)

    Emel'yanenko, Vladimir N.; Zaitsau, Dzmitry H.; Verevkin, Sergey P.; Heintz, Andreas

    2011-01-01

    Highlights: → We studied the ionic liquid 1-ethanol-3-methylimidazolium dicyanamide. → Combustion calorimetry was used to derive the enthalpy of formation in the liquid state. → The composite G3(MP2) method was used to compute the enthalpy of formation in the gaseous phase. → The enthalpy of vaporization was derived as the difference. → The liquid phase enthalpy of formation presumably obeys the group additivity rules. - Abstract: The standard molar enthalpy of formation Δ_f H_m°(l) of the ionic liquid 1-ethanol-3-methylimidazolium dicyanamide has been determined at 298.15 K by means of combustion calorimetry. First-principles calculations of the enthalpy of formation in the gaseous phase have been performed for the ionic species using the composite G3(MP2) method. The combination of combustion calorimetry with the high-level quantum-chemical calculations allows estimation of the molar enthalpy of vaporization of the ionic liquid under study. It has been established that the liquid phase enthalpy of formation of this ionic liquid presumably obeys the group additivity rules.
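
    The combination described above amounts to the thermodynamic identity below, with the gas-phase value supplied by G3(MP2) and the liquid-phase value by combustion calorimetry:

      \Delta_{\mathrm{vap}} H_m^{\circ} =
      \Delta_f H_m^{\circ}(\mathrm{g}) - \Delta_f H_m^{\circ}(\mathrm{l})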

  3. Principle and application of low energy inverse photoemission spectroscopy: A new method for measuring unoccupied states of organic semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Hiroyuki, E-mail: hyoshida@chiba-u.jp

    2015-10-01

    Highlights: • Principle of low energy inverse photoemission spectroscopy is described. • Instruments including electron sources and photon detectors are shown. • Recent results on organic devices and fundamental studies are reviewed. • Electron affinities of typical organic semiconductors are compiled. - Abstract: Information about the unoccupied states is crucial to both the fundamental and applied physics of organic semiconductors. However, no available experimental method met the requirements of such research. In this review, we describe a new experimental method for examining the unoccupied states, called low-energy inverse photoemission spectroscopy (LEIPS). An electron having kinetic energy lower than the damage threshold of organic molecules is introduced to a sample film, and an emitted photon in the near-ultraviolet range is detected with high resolution and sensitivity. Unlike previous inverse photoemission spectroscopy, the sample damage is negligible and the overall resolution is improved by a factor of two, to 0.25 eV. Using LEIPS, the electron affinity of an organic semiconductor can be determined with the same precision as photoemission spectroscopy achieves for ionization energy. The instruments, including an electron source and photon detectors, as well as applications to organic semiconductors are presented.

  4. Principle and application of low energy inverse photoemission spectroscopy: A new method for measuring unoccupied states of organic semiconductors

    International Nuclear Information System (INIS)

    Yoshida, Hiroyuki

    2015-01-01

    Highlights: • Principle of low energy inverse photoemission spectroscopy is described. • Instruments including electron sources and photon detectors are shown. • Recent results on organic devices and fundamental studies are reviewed. • Electron affinities of typical organic semiconductors are compiled. - Abstract: Information about the unoccupied states is crucial to both the fundamental and applied physics of organic semiconductors. However, no available experimental method met the requirements of such research. In this review, we describe a new experimental method for examining the unoccupied states, called low-energy inverse photoemission spectroscopy (LEIPS). An electron having kinetic energy lower than the damage threshold of organic molecules is introduced to a sample film, and an emitted photon in the near-ultraviolet range is detected with high resolution and sensitivity. Unlike previous inverse photoemission spectroscopy, the sample damage is negligible and the overall resolution is improved by a factor of two, to 0.25 eV. Using LEIPS, the electron affinity of an organic semiconductor can be determined with the same precision as photoemission spectroscopy achieves for ionization energy. The instruments, including an electron source and photon detectors, as well as applications to organic semiconductors are presented.

  5. Remark on Heisenberg's principle

    International Nuclear Information System (INIS)

    Noguez, G.

    1988-01-01

    Application of Heisenberg's principle to inertial frame transformations allows a distinction between three commutative groups of reciprocal transformations along one direction: Galilean transformations, dual transformations, and Lorentz transformations. These are three conjugate groups, and for a given direction the related commutators are all proportional to a single conjugation transformation which compensates for uniform rectilinear motions. The three transformation groups correspond to three complementary ways of measuring space-time as a whole. Heisenberg's principle thus receives another explanation [fr

  6. Nucleus-nucleus collision as superposition of nucleon-nucleus collisions

    International Nuclear Information System (INIS)

    Orlova, G.I.; Adamovich, M.I.; Aggarwal, M.M.

    1999-01-01

    Angular distributions of charged particles produced in 16 O and 32 S collisions with nuclear track emulsion were studied at momenta of 4.5 and 200 A GeV/c. Comparison with the angular distributions of charged particles produced in proton-nucleus collisions at the same momentum allows the conclusion that the angular distributions in nucleus-nucleus collisions can be seen as a superposition of the angular distributions in nucleon-nucleus collisions taken at the same impact parameter b_NA, that is, the mean impact parameter between the participating projectile nucleons and the center of the target nucleus. (orig.)

  7. Constructing petal modes from the coherent superposition of Laguerre-Gaussian modes

    Science.gov (United States)

    Naidoo, Darryl; Forbes, Andrew; Ait-Ameur, Kamel; Brunel, Marc

    2011-03-01

    An experimental approach to generating petal-like transverse modes, similar to those seen in Porro-prism resonators, has been successfully demonstrated. We hypothesize that the petal-like structures are generated from a coherent superposition of Laguerre-Gaussian modes of zero radial order and opposite azimuthal order. To verify this hypothesis, visual comparisons such as the petal peak-to-peak diameter and the angle between adjacent petals are drawn between experimental and simulated data. The beam quality factor of the petal-like transverse modes and an inner-product interaction are also compared experimentally with numerical results.
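
    The hypothesized superposition LG_{0,+l} + LG_{0,-l} has intensity proportional to cos^2(l*phi), i.e. a ring of 2l bright petals at the LG ring radius. A minimal numerical check of the azimuthal structure (the order l and waist w0 are illustrative):

      import numpy as np

      # Coherent superposition LG_{0,+l} + LG_{0,-l}: intensity ~ cos^2(l*phi),
      # a ring of 2*l bright petals at radius r = w0*sqrt(l/2).
      l = 4
      w0 = 1.0
      phi = np.linspace(0, 2 * np.pi, 720, endpoint=False)

      field = np.exp(1j * l * phi) + np.exp(-1j * l * phi)   # azimuthal part only
      prof = np.abs(field) ** 2                              # = 4 cos^2(l*phi)

      # Count local maxima around the ring
      peaks = np.sum((prof > np.roll(prof, 1)) & (prof > np.roll(prof, -1)))
      print("petal ring radius:", w0 * np.sqrt(l / 2))
      print("petals counted   :", peaks)                     # expect 2*l = 8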

  8. Experimental generation and application of the superposition of higher-order Bessel beams

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2009-07-01

    Full Text Available Presentation at the 2009 South African Institute of Physics Annual Conference, University of KwaZulu-Natal, Durban, South Africa, 6-10 July 2009. The presentation covers the generation of Bessel fields by two methods, a ring-slit aperture and an axicon, and the adaptation of the ring-slit method to produce superpositions of higher-order Bessel beams.

  9. Strategies for reducing basis set superposition error (BSSE) in O/AU and O/Ni

    KAUST Repository

    Shuttleworth, I.G.

    2015-01-01

    © 2015 Elsevier Ltd. All rights reserved. The effect of basis set superposition error (BSSE) and effective strategies for its minimisation have been investigated using the SIESTA-LCAO DFT package. Variation of the energy shift parameter ΔEPAO has been shown to reduce BSSE for bulk Au and Ni and across their oxygenated surfaces. Alternative strategies based on either the expansion or contraction of the basis set have been shown to be ineffective in reducing BSSE. Binding energies for the surface systems obtained using LCAO were compared with BSSE-free plane wave energies.
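
    For reference, the standard Boys-Bernardi counterpoise estimate of a BSSE-free interaction energy (a textbook expression, distinct from the basis-parameter strategy investigated in the record; the superscript ab denotes evaluation in the full dimer basis):

      E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}^{ab} - E_{A}^{ab} - E_{B}^{ab}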

  10. Nucleus-Nucleus Collision as Superposition of Nucleon-Nucleus Collisions

    International Nuclear Information System (INIS)

    Orlova, G.I.; Adamovich, M.I.; Aggarwal, M.M.; Alexandrov, Y.A.; Andreeva, N.P.; Badyal, S.K.; Basova, E.S.; Bhalla, K.B.; Bhasin, A.; Bhatia, V.S.; Bradnova, V.; Bubnov, V.I.; Cai, X.; Chasnikov, I.Y.; Chen, G.M.; Chernova, L.P.; Chernyavsky, M.M.; Dhamija, S.; Chenawi, K.El; Felea, D.; Feng, S.Q.; Gaitinov, A.S.; Ganssauge, E.R.; Garpman, S.; Gerassimov, S.G.; Gheata, A.; Gheata, M.; Grote, J.; Gulamov, K.G.; Gupta, S.K.; Gupta, V.K.; Henjes, U.; Jakobsson, B.; Kanygina, E.K.; Karabova, M.; Kharlamov, S.P.; Kovalenko, A.D.; Krasnov, S.A.; Kumar, V.; Larionova, V.G.; Li, Y.X.; Liu, L.S.; Lokanathan, S.; Lord, J.J.; Lukicheva, N.S.; Lu, Y.; Luo, S.B.; Mangotra, L.K.; Manhas, I.; Mittra, I.S.; Musaeva, A.K.; Nasyrov, S.Z.; Navotny, V.S.; Nystrand, J.; Otterlund, I.; Peresadko, N.G.; Qian, W.Y.; Qin, Y.M.; Raniwala, R.; Rao, N.K.; Roeper, M.; Rusakova, V.V.; Saidkhanov, N.; Salmanova, N.A.; Seitimbetov, A.M.; Sethi, R.; Singh, B.; Skelding, D.; Soderstrem, K.; Stenlund, E.; Svechnikova, L.N.; Svensson, T.; Tawfik, A.M.; Tothova, M.; Tretyakova, M.I.; Trofimova, T.P.; Tuleeva, U.I.; Vashisht, Vani; Vokal, S.; Vrlakova, J.; Wang, H.Q.; Wang, X.R.; Weng, Z.Q.; Wilkes, R.J.; Yang, C.B.; Yin, Z.B.; Yu, L.Z.; Zhang, D.H.; Zheng, P.Y.; Zhokhova, S.I.; Zhou, D.C.

    1999-01-01

    Angular distributions of charged particles produced in 16 O and 32 S collisions with nuclear track emulsion were studied at momenta of 4.5 and 200 A GeV/c. Comparison with the angular distributions of charged particles produced in proton-nucleus collisions at the same momentum allows the conclusion that the angular distributions in nucleus-nucleus collisions can be seen as a superposition of the angular distributions in nucleon-nucleus collisions taken at the same impact parameter b_NA, that is, the mean impact parameter between the participating projectile nucleons and the center of the target nucleus.

  11. Nucleus-Nucleus Collision as Superposition of Nucleon-Nucleus Collisions

    Energy Technology Data Exchange (ETDEWEB)

    Orlova, G I; Adamovich, M I; Aggarwal, M M; Alexandrov, Y A; Andreeva, N P; Badyal, S K; Basova, E S; Bhalla, K B; Bhasin, A; Bhatia, V S; Bradnova, V; Bubnov, V I; Cai, X; Chasnikov, I Y; Chen, G M; Chernova, L P; Chernyavsky, M M; Dhamija, S; Chenawi, K El; Felea, D; Feng, S Q; Gaitinov, A S; Ganssauge, E R; Garpman, S; Gerassimov, S G; Gheata, A; Gheata, M; Grote, J; Gulamov, K G; Gupta, S K; Gupta, V K; Henjes, U; Jakobsson, B; Kanygina, E K; Karabova, M; Kharlamov, S P; Kovalenko, A D; Krasnov, S A; Kumar, V; Larionova, V G; Li, Y X; Liu, L S; Lokanathan, S; Lord, J J; Lukicheva, N S; Lu, Y; Luo, S B; Mangotra, L K; Manhas, I; Mittra, I S; Musaeva, A K; Nasyrov, S Z; Navotny, V S; Nystrand, J; Otterlund, I; Peresadko, N G; Qian, W Y; Qin, Y M; Raniwala, R; Rao, N K; Roeper, M; Rusakova, V V; Saidkhanov, N; Salmanova, N A; Seitimbetov, A M; Sethi, R; Singh, B; Skelding, D; Soderstrem, K; Stenlund, E; Svechnikova, L N; Svensson, T; Tawfik, A M; Tothova, M; Tretyakova, M I; Trofimova, T P; Tuleeva, U I; Vashisht, Vani; Vokal, S; Vrlakova, J; Wang, H Q; Wang, X R; Weng, Z Q; Wilkes, R J; Yang, C B; Yin, Z B; Yu, L Z; Zhang, D H; Zheng, P Y; Zhokhova, S I; Zhou, D C

    1999-03-01

    Angular distributions of charged particles produced in 16O and 32S collisions with nuclear track emulsion were studied at momenta of 4.5 and 200 A GeV/c. Comparison with the angular distributions of charged particles produced in proton-nucleus collisions at the same momentum allows the conclusion that the angular distributions in nucleus-nucleus collisions can be seen as a superposition of the angular distributions in nucleon-nucleus collisions taken at the same impact parameter b_NA, that is, the mean impact parameter between the participating projectile nucleons and the center of the target nucleus.

  12. Double-contrast examination of the gastric antrum without Duodenal superposition

    International Nuclear Information System (INIS)

    Treugut, H.; Isper, J.

    1980-01-01

    Using a modified technique of double-contrast examination of the stomach, it was possible in 75% of cases to perform a study without superposition of the duodenum and jejunum on the distal stomach, compared to 36% with the usual method. In this technique, a small amount (50 ml) of barium suspension is given to the patient in the left decubitus position via a straw or gastric tube after antiperistaltic medication. There was no difference in the quality of mucosal coating compared to the technique using larger volumes of barium. (orig.) [de

  13. Teleportation of a Coherent Superposition State Via a Nonmaximally Entangled Coherent Channel

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    We investigate the problem of teleportation of a superposition coherent state through a nonmaximally entangled coherent channel. Two strategies are considered to complete the task. The first uses entanglement concentration to purify the channel to a maximally entangled one. The second teleports the state through the nonmaximally entangled coherent channel directly. We find that the probabilities of successful teleportation for the two strategies depend on the amplitudes of the coherent states, and that the mean fidelity of teleportation using the first strategy is always less than that of the second strategy.

  14. Relativistic Inverse Scattering Problem for a Superposition of a Nonlocal Separable and a Local Quasipotential

    International Nuclear Information System (INIS)

    Chernichenko, Yu.D.

    2005-01-01

    Within the relativistic quasipotential approach to quantum field theory, the relativistic inverse scattering problem is solved for the case where the total quasipotential describing the interaction of two relativistic spinless particles having different masses is a superposition of a nonlocal separable and a local quasipotential. It is assumed that the local component of the total quasipotential is known and that there exist bound states in this local component. It is shown that the nonlocal separable component of the total interaction can be reconstructed provided that the local component, an increment of the phase shift, and the energies of bound states are known

  15. Strategies for reducing basis set superposition error (BSSE) in O/AU and O/Ni

    KAUST Repository

    Shuttleworth, I.G.

    2015-11-01

    © 2015 Elsevier Ltd. All rights reserved. The effect of basis set superposition error (BSSE) and effective strategies for its minimisation have been investigated using the SIESTA-LCAO DFT package. Variation of the energy shift parameter ΔEPAO has been shown to reduce BSSE for bulk Au and Ni and across their oxygenated surfaces. Alternative strategies based on either the expansion or contraction of the basis set have been shown to be ineffective in reducing BSSE. Binding energies for the surface systems obtained using LCAO were compared with BSSE-free plane wave energies.

  16. Multiparticle quantum superposition and stimulated entanglement by parity selective amplification of entangled states

    International Nuclear Information System (INIS)

    Martini, F. de; Giuseppe, G. di

    2001-01-01

    A multiparticle quantum superposition state has been generated by a novel phase-selective parametric amplifier of an entangled two-photon state. This realization is expected to open a new field of investigations on the persistence of the validity of the standard quantum theory for systems of increasing complexity, in a quasi decoherence-free environment. Because of its nonlocal structure the new system is expected to play a relevant role in the modern endeavor on quantum information and in the basic physics of entanglement. (orig.)

  17. Scattering of an attractive Bose-Einstein condensate from a barrier: Formation of quantum superposition states

    International Nuclear Information System (INIS)

    Streltsov, Alexej I.; Alon, Ofir E.; Cederbaum, Lorenz S.

    2009-01-01

    Scattering in one dimension of an attractive ultracold bosonic cloud from a barrier can lead to the formation of two nonoverlapping clouds. Once formed, the clouds travel with constant velocity, in general different in magnitude from that of the incoming cloud, and do not disperse. The phenomenon and its mechanism - transformation of kinetic energy to internal energy of the scattered cloud - are obtained by solving the time-dependent many-boson Schroedinger equation. The analysis of the wave function shows that the object formed corresponds to a quantum superposition state of two distinct wave packets traveling through real space.

  18. Decoherence, environment-induced superselection, and classicality of a macroscopic quantum superposition generated by quantum cloning

    International Nuclear Information System (INIS)

    De Martini, Francesco; Sciarrino, Fabio; Spagnolo, Nicolo

    2009-01-01

    The high resilience to decoherence shown by a recently discovered macroscopic quantum superposition (MQS) generated by a quantum-injected optical parametric amplifier and involving a number of photons in excess of 5×10^4 motivates the present theoretical and numerical investigation. The results are analyzed in comparison with the properties of the MQS based on |α> and N-photon maximally entangled states (NOON), in the perspective of the comprehensive theory of the subject by Zurek. In that perspective the concepts of 'pointer state' and 'environment-induced superselection' are applied to the new scheme.

  19. Safety Principles

    Directory of Open Access Journals (Sweden)

    V. A. Grinenko

    2011-06-01

    Full Text Available The material in the article is arranged so that the reader can gain a complete picture of the concept of "safety", its intrinsic characteristics, and the possibilities for its formalization. Principles and possible strategies of safety are considered. The article is intended for experts dealing with problems of safety.

  20. Maquet principle

    Energy Technology Data Exchange (ETDEWEB)

    Levine, R.B.; Stassi, J.; Karasick, D.

    1985-04-01

    Anterior displacement of the tibial tubercle is a well-accepted orthopedic procedure in the treatment of certain patellofemoral disorders. The radiologic appearance of surgical procedures utilizing the Maquet principle has not been described in the radiologic literature. Familiarity with the physiologic and biomechanical basis for the procedure and its postoperative appearance is necessary for appropriate roentgenographic evaluation and the radiographic recognition of complications.

  1. Cosmological principle

    International Nuclear Information System (INIS)

    Wesson, P.S.

    1979-01-01

    The Cosmological Principle states: the universe looks the same to all observers regardless of where they are located. To most astronomers today the Cosmological Principle means the universe looks the same to all observers because the density of galaxies is the same in all places. A new cosmological principle is proposed, called the Dimensional Cosmological Principle. It uses the properties of matter in the universe: density (ρ), pressure (p), and mass (m) within some region of space of length (l). The laws of physics require incorporation of constants for gravity (G) and the speed of light (c). After combining the six parameters into dimensionless numbers, the best choices are: 8πGl²ρ/c², 8πGl²p/c⁴, and 2Gm/c²l (the Schwarzschild factor). The Dimensional Cosmological Principle came about because old ideas conflicted with the rapidly growing body of observational evidence indicating that galaxies in the universe have a clumpy rather than uniform distribution

  2. The sensor on the principle of ionization chamber for the measurement of biological objects and of their mutual interactions

    International Nuclear Information System (INIS)

    Komarek, K.; Chrapan, J.; Herec, I.; Bucka, P.

    2012-01-01

    In this contribution the "Auro-Graph", a sensor based on the ionization chamber principle, is described; it was proposed and designed for measuring manifestations of the human aura. From the physical point of view, the aura is understood as a field with electrical charge in the surroundings of a biological or non-biological object, whose manifestations are measured through the known interactions of electric and magnetic fields. It is a field with an electric component in the human's surroundings, where the atoms of the surroundings are excited by the action of biopotentials (authors)

  3. Automatic superposition of drug molecules based on their common receptor site

    Science.gov (United States)

    Kato, Yuichi; Inoue, Atsushi; Yamada, Miho; Tomioka, Nobuo; Itai, Akiko

    1992-10-01

    We have previously developed a new rational method for superposing molecules in terms of submolecular physical and chemical properties, rather than in terms of atom positions or chemical structures as in conventional methods. The program was originally developed for interactive use on a three-dimensional graphic display, providing goodness-of-fit indices on molecular shape, hydrogen bonds, electrostatic interactions and others. Here, we report a new unbiased search method for the best superposition of molecules, covering all superposing modes and conformational freedom, as an additional function of the program. The function is based on a novel least-squares method which superposes the expected positions and orientations of hydrogen-bonding partners in the receptor, as deduced from both molecules. The method not only gives reliability and reproducibility to the superposition result, but also saves labor and time. It is demonstrated that this method is very efficient for finding the correct superposing mode in systems where hydrogen bonds play important roles.
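
    The least-squares superposition step described above is commonly solved in closed form by an SVD-based (Kabsch) fit; a minimal sketch follows (the point sets standing in for projected hydrogen-bond partner sites are hypothetical):

      import numpy as np

      def kabsch(P, Q):
          """Least-squares rotation + translation mapping points P onto Q (n x 3).

          Returns R, t such that ||(P @ R.T + t) - Q|| is minimal.
          """
          Pc, Qc = P - P.mean(0), Q - Q.mean(0)
          U, S, Vt = np.linalg.svd(Pc.T @ Qc)
          d = np.sign(np.linalg.det(U @ Vt))        # avoid improper rotation
          D = np.diag([1.0, 1.0, d])
          R = (U @ D @ Vt).T
          t = Q.mean(0) - P.mean(0) @ R.T
          return R, t

      # Hypothetical projected H-bond partner sites deduced from two ligands
      P = np.array([[0.0, 0.0, 0.0], [2.8, 0.0, 0.0], [0.0, 2.9, 0.0]])
      Q = np.array([[1.0, 1.0, 0.0], [1.0, 3.8, 0.1], [-1.9, 1.0, -0.1]])
      R, t = kabsch(P, Q)
      print("rmsd:", np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1))))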

  4. The denoising of Monte Carlo dose distributions using convolution superposition calculations

    International Nuclear Information System (INIS)

    El Naqa, I; Cui, J; Lindsay, P; Olivera, G; Deasy, J O

    2007-01-01

    Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results but by making approximations. We investigated MC denoising techniques, which use available convolution superposition results and new noise filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image. Multi-scale methods were used (wavelets or contourlets) for denoising the residual. The iterations are initialized by the CS data. In the second approach, we used a frequency splitting technique by quadrature filtering to combine low frequency components derived from MC simulations with high frequency components derived from CS components. The rationale is to take the scattering tails as well as dose levels in the high-dose region from the MC calculations, which presumably more accurately incorporates scatter; high-frequency details are taken from CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head and neck cases. The MC dose distributions were calculated by the open-source dose planning method MC code with varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction. (note)
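
    A one-dimensional toy version of the frequency-splitting idea described above (a Butterworth low pass takes low frequencies from the noisy MC data and high frequencies from the smooth CS data; the profiles, noise level and cutoff are assumptions, and the paper works with 3D quadrature filters):

      import numpy as np
      from scipy.signal import butter, filtfilt

      # 1D stand-in profiles: noisy MC dose and smooth CS dose on the same grid
      x = np.linspace(0, 1, 512)
      truth = np.exp(-((x - 0.5) / 0.08) ** 2)
      rng = np.random.default_rng(1)
      mc = truth + 0.03 * rng.standard_normal(x.size)   # noisy but unbiased
      cs = np.exp(-((x - 0.5) / 0.085) ** 2)            # smooth, slightly biased

      # Complementary Butterworth low/high pass split at one cutoff
      b_lo, a_lo = butter(3, 0.05)                      # normalized cutoff
      low_mc = filtfilt(b_lo, a_lo, mc)                 # low frequencies from MC
      high_cs = cs - filtfilt(b_lo, a_lo, cs)           # high frequencies from CS
      combined = low_mc + high_cs

      print("rms error, raw MC :", np.sqrt(np.mean((mc - truth) ** 2)))
      print("rms error, hybrid :", np.sqrt(np.mean((combined - truth) ** 2)))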

  5. NOTE: The denoising of Monte Carlo dose distributions using convolution superposition calculations

    Science.gov (United States)

    El Naqa, I.; Cui, J.; Lindsay, P.; Olivera, G.; Deasy, J. O.

    2007-09-01

    Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results but by making approximations. We investigated MC denoising techniques, which use available convolution superposition results and new noise filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image. Multi-scale methods were used (wavelets or contourlets) for denoising the residual. The iterations are initialized by the CS data. In the second approach, we used a frequency splitting technique by quadrature filtering to combine low frequency components derived from MC simulations with high frequency components derived from CS components. The rationale is to take the scattering tails as well as dose levels in the high-dose region from the MC calculations, which presumably more accurately incorporates scatter; high-frequency details are taken from CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head and neck cases. The MC dose distributions were calculated by the open-source dose planning method MC code with varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction.

  6. Noise-based logic: Binary, multi-valued, or fuzzy, with optional superposition of logic states

    Energy Technology Data Exchange (ETDEWEB)

    Kish, Laszlo B. [Texas A and M University, Department of Electrical and Computer Engineering, College Station, TX 77843-3128 (United States)], E-mail: laszlo.kish@ece.tamu.edu

    2009-03-02

    A new type of deterministic (non-probabilistic) computer logic system inspired by the stochasticity of brain signals is shown. The distinct values are represented by independent stochastic processes: independent voltage (or current) noises. The orthogonality of these processes provides a natural way to construct binary or multi-valued logic circuitry with an arbitrary number N of logic values by using analog circuitry. Moreover, the logic values on a single wire can be made a (weighted) superposition of the N distinct logic values. Fuzzy logic is also naturally represented by a two-component superposition within the binary case (N=2). Error propagation and accumulation are suppressed. Other relevant advantages are reduced energy dissipation and leakage current problems, and robustness against circuit noise and background noises such as 1/f, Johnson, shot and crosstalk noise. Variability problems are also non-existent because the logic value is an AC signal. A similar logic system can be built with orthogonal sinusoidal signals (different frequency or orthogonal phase); however, such a system suffers an extra 1/N-type slowdown relative to the noise-based system as N increases, and it is less robust against time-delay effects than the noise-based counterpart.
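
    The core mechanism, orthogonal stochastic processes as logic values and correlation-based detection of a superposition, can be sketched in a few lines (the reference-noise length and superposition weights are illustrative):

      import numpy as np

      rng = np.random.default_rng(42)
      n = 200_000

      # Reference noises: independent processes represent the distinct logic values
      basis = rng.standard_normal((4, n))      # N = 4 logic values on one wire

      # A wire carrying a weighted superposition of logic values 0 and 2
      wire = 0.8 * basis[0] + 0.6 * basis[2]

      # Detection by correlation: orthogonality singles out the components
      for v in range(4):
          c = np.mean(wire * basis[v])         # weight of value v (about 0.8/0/0.6/0)
          print(f"logic value {v}: weight ~ {c:+.2f}")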

  7. Noise-based logic: Binary, multi-valued, or fuzzy, with optional superposition of logic states

    International Nuclear Information System (INIS)

    Kish, Laszlo B.

    2009-01-01

    A new type of deterministic (non-probabilistic) computer logic system inspired by the stochasticity of brain signals is shown. The distinct values are represented by independent stochastic processes: independent voltage (or current) noises. The orthogonality of these processes provides a natural way to construct binary or multi-valued logic circuitry with an arbitrary number N of logic values by using analog circuitry. Moreover, the logic values on a single wire can be made a (weighted) superposition of the N distinct logic values. Fuzzy logic is also naturally represented by a two-component superposition within the binary case (N=2). Error propagation and accumulation are suppressed. Other relevant advantages are reduced energy dissipation and leakage current problems, and robustness against circuit noise and background noises such as 1/f, Johnson, shot and crosstalk noise. Variability problems are also non-existent because the logic value is an AC signal. A similar logic system can be built with orthogonal sinusoidal signals (different frequency or orthogonal phase); however, such a system suffers an extra 1/N-type slowdown relative to the noise-based system as N increases, and it is less robust against time-delay effects than the noise-based counterpart.

  8. Noise-based logic: Binary, multi-valued, or fuzzy, with optional superposition of logic states

    Science.gov (United States)

    Kish, Laszlo B.

    2009-03-01

    A new type of deterministic (non-probabilistic) computer logic system inspired by the stochasticity of brain signals is shown. The distinct values are represented by independent stochastic processes: independent voltage (or current) noises. The orthogonality of these processes provides a natural way to construct binary or multi-valued logic circuitry with an arbitrary number N of logic values by using analog circuitry. Moreover, the logic values on a single wire can be made a (weighted) superposition of the N distinct logic values. Fuzzy logic is also naturally represented by a two-component superposition within the binary case (N=2). Error propagation and accumulation are suppressed. Other relevant advantages are reduced energy dissipation and leakage current problems, and robustness against circuit noise and background noises such as 1/f, Johnson, shot and crosstalk noise. Variability problems are also non-existent because the logic value is an AC signal. A similar logic system can be built with orthogonal sinusoidal signals (different frequency or orthogonal phase); however, such a system suffers an extra 1/N-type slowdown relative to the noise-based system as N increases, and it is less robust against time-delay effects than the noise-based counterpart.

  9. JaSTA-2: Second version of the Java Superposition T-matrix Application

    Science.gov (United States)

    Halder, Prithish; Das, Himadri Sekhar

    2017-12-01

    In this article, we announce the development of a new version of the Java Superposition T-matrix App (JaSTA-2) to study the light scattering properties of porous aggregate particles. It has been developed using Netbeans 7.1.2, a Java integrated development environment (IDE). JaSTA uses the double-precision superposition T-matrix codes for multi-sphere clusters in random orientation developed by Mackowski and Mischenko (1996). The new version offers two options as part of the input parameters: (i) single wavelength and (ii) multiple wavelengths. The first option (which retains the applicability of the older version of JaSTA) calculates the light scattering properties of aggregates of spheres for a single wavelength at a given instant of time, whereas the second option can execute the code for multiple wavelengths in a single run. JaSTA-2 provides convenient and quicker data analysis, which can be used in diverse fields like planetary science, atmospheric physics, nanoscience, etc. This version of the software is developed for the Linux platform only, and it can be operated over all the cores of a processor using the multi-threading option.

  10. Evaluation of the Use of Home Blood Pressure Measurement Using Mobile Phone-Assisted Technology: The iVitality Proof-of-Principle Study.

    Science.gov (United States)

    Wijsman, Liselotte W; Richard, Edo; Cachucho, Ricardo; de Craen, Anton Jm; Jongstra, Susan; Mooijaart, Simon P

    2016-06-13

    Mobile phone-assisted technologies provide the opportunity to optimize the feasibility of long-term blood pressure (BP) monitoring at home, with the potential of large-scale data collection. In this proof-of-principle study, we evaluated the feasibility of home BP monitoring using mobile phone-assisted technology, by investigating (1) the association between study center and home BP measurements; (2) adherence to reminders on the mobile phone to perform home BP measurements; and (3) referrals, treatment consequences and BP reduction after a raised home BP was diagnosed. We used iVitality, a research platform that comprises a Website, a mobile phone-based app, and health sensors, to measure BP and several other health characteristics during a 6-month period. BP was measured twice at baseline at the study center. Home BP was measured on 4 days during the first week, and thereafter at semimonthly or monthly intervals, for which participants received reminders on their mobile phone. In the monthly protocol, measurements were performed on 2 consecutive days; in the semimonthly protocol, BP was measured on 1 day. We included 151 participants (mean age [standard deviation] 57.3 [5.3] years). BP measured at the study center was systematically higher than home BP measurements (mean difference [standard error] 8.72 [1.08] mm Hg for systolic BP and 5.81 [0.68] mm Hg for diastolic BP). Correlation between study center and home measurements was high (R=0.72 for systolic BP and 0.72 for diastolic BP; both correlations were statistically significant). Mobile phone-assisted technology is a reliable and promising method with good adherence for measuring BP at home during a 6-month period. This provides a possibility for implementation in large-scale studies and can potentially contribute to BP reduction.

  11. Design principles for the development of measurement systems for research and development processes (awarded with RADMA PRIZE 1997)

    NARCIS (Netherlands)

    Kerssens-van Drongelen, I.C.; Cooke, Andrew

    1997-01-01

    Based on a comprehensive literature review and the activities of numerous case study companies, it is argued in this paper that performance measurement in R&D is fundamental to quality in R&D and to overall business performance. However, it is apparent from the case companies that many

  12. Approach to first principles model prediction of measured WIPP [Waste Isolation Pilot Plant] in situ room closure in salt

    International Nuclear Information System (INIS)

    Munson, D.E.; Fossum, A.F.; Senseny, P.E.

    1989-01-01

    The discrepancies between predicted and measured WIPP in situ Room D closures are markedly reduced through the use of a Tresca flow potential, an improved small strain constitutive model, an improved set of material parameters, and a modified stratigraphy. 17 refs., 8 figs., 1 tab

  13. Fundamental Safety Principles

    International Nuclear Information System (INIS)

    Abdelmalik, W.E.Y.

    2011-01-01

    This work presents a summary of the IAEA Safety Standards Series publication No. SF-1, entitled Fundamental Safety Principles, published in 2006. This publication states the fundamental safety objective and ten associated safety principles, and briefly describes their intent and purpose. Safety measures and security measures have in common the aim of protecting human life and health and the environment. The safety principles are: 1) Responsibility for safety, 2) Role of the government, 3) Leadership and management for safety, 4) Justification of facilities and activities, 5) Optimization of protection, 6) Limitation of risks to individuals, 7) Protection of present and future generations, 8) Prevention of accidents, 9) Emergency preparedness and response, and 10) Protective action to reduce existing or unregulated radiation risks. The safety principles concern the security of facilities and activities to the extent that they apply to measures that contribute to both safety and security. Safety measures and security measures must be designed and implemented in an integrated manner, so that security measures do not compromise safety and safety measures do not compromise security.

  14. Correlation between mean transverse momentum and charged particle multiplicity based on geometrical superposition of p-Pb collisions

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jerome [Institut fuer Kernphysik, Goethe-Universitaet Frankfurt (Germany); Collaboration: ALICE-Collaboration

    2015-07-01

    The mean transverse momentum ⟨p_T⟩ as a function of the charged-particle multiplicity N_ch in pp, p-Pb and Pb-Pb collisions was recently published by ALICE. While in pp and in p-Pb collisions a strong increase of ⟨p_T⟩ with N_ch is observed, Pb-Pb collisions show a saturation at a much lower ⟨p_T⟩. Efforts to reproduce this behaviour in Pb-Pb with a superposition of nucleon-nucleon interactions do not succeed. A superposition of p-Pb collisions seems more promising, since the p-Pb data show characteristics of both pp and Pb-Pb collisions. The geometric distribution of the p-Pb impact parameters is based on the Woods-Saxon density distribution. Using the correlation of the impact parameter and the multiplicity N_ch in p-Pb collisions, a multiplicity spectrum was generated. Combining this spectrum with experimental p-Pb data, we present ⟨p_T⟩ as a function of N_ch in simulated Pb-Pb collisions and compare it to the correlation measured in Pb-Pb by ALICE.

  15. New algorithms for motion error detection of numerical control machine tool by laser tracking measurement on the basis of GPS principle

    Science.gov (United States)

    Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie

    2018-01-01

    As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle-measurement error on the overall measurement accuracy, multi-station, time-sharing measurement with a laser tracker is introduced in this paper on the basis of the global positioning system (GPS) principle. For the proposed method, accurately determining the coordinates of each measuring point from a large amount of measured data is the critical issue. Taking the detection of motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing a mathematical model of detecting the motion error of a machine tool with this method, an analytical algorithm for base-station calibration and measuring-point determination is deduced that requires no initial iterative value in the calculation. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base-station calibration is singular, which distorts the result. To overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton and Levenberg-Marquardt algorithms. An experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the influence of the different motion areas of the machine tool on the calibration accuracy of the base station, and the corresponding influence of measurement error on the calibration result as a function of the condition number of the coefficient matrix, are analyzed.
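
    The measuring-point determination at the heart of this scheme is a multilateration problem: each base station contributes one measured distance, and the point coordinates follow from minimizing the range residuals. The paper derives an analytical solution that avoids choosing an initial iterative value; the generic least-squares sketch below (with illustrative station coordinates and ranges, not data from the paper) shows only the underlying problem, not the authors' algorithm:

        import numpy as np
        from scipy.optimize import least_squares

        stations = np.array([[0.0, 0.0, 0.0],
                             [2.0, 0.0, 0.0],
                             [0.0, 2.0, 0.5],
                             [1.0, 1.0, 1.5]])   # calibrated base stations (m)
        ranges = np.array([1.2, 1.5, 1.4, 1.1])  # measured distances (m)

        def residuals(point):
            # Modelled minus measured station-to-point distances.
            return np.linalg.norm(stations - point, axis=1) - ranges

        # Four or more stations over-determine the three coordinates.
        fit = least_squares(residuals, x0=np.array([1.0, 1.0, 1.0]))
        print("estimated point:", fit.x)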

  16. Functional magnetic resonance imaging of the truncus pulmonalis. Principles of magnetic resonance flow measurement for the diagnosis of pulmonary hypertension

    International Nuclear Information System (INIS)

    Abolmaali, N.

    2006-01-01

    This book gives a detailed introduction to the use of magnetic resonance flow measurements for the examination of the pulmonary circulation. It presents the results of phantom experiments and evaluates and verifies sequence techniques optimised for the examination of the pulmonary circulation. This is followed by a description of an elegant experimental design, unique of its kind, for the quantification of pulmonary hypertension. The model can predict the consequences of acute, resistance-related pulmonary hypertension in a reproducible and reversible manner, and thus provides a means of evaluating pulmonary applications of magnetic resonance imaging. The idea for these studies and its implementation are an outstanding example of teamwork and interdisciplinary cooperation. Applying the results to the patient after the statistical analysis is only a small step. The book presents the results of extensive normal-value studies which will make it possible to use the measurement technology in paediatric cardiology. Its range of application also includes congenital heart defects, especially ventricular septal defects, and primary as well as secondary forms of pulmonary hypertension. It is suitable not only for primary diagnostics but also for post-treatment follow-up and assessment of patients' progress.

  17. Electrical Impedance Spectroscopy for Quality Assessment of Meat and Fish: A Review on Basic Principles, Measurement Methods, and Recent Advances

    Directory of Open Access Journals (Sweden)

    Xin Zhao

    2017-01-01

    Electrical impedance spectroscopy (EIS), an effective analytical technique for electrochemical systems, has recently found wide application in food quality and safety assessment. Individual differences between livestock cause high variation in the quality of raw meat and fish and of their commercialized products. Therefore, in order to obtain definite quality information and ensure the quality of each product, a fast, on-line detection technology is needed to monitor product processing. EIS has the advantages of being fast, nondestructive, inexpensive, and easily implemented, and it shows potential for the development of on-line instruments that replace traditional methods, saving time, cost, and skilled labor, and enabling quality grading. This review outlines the fundamental theories and two common measurement methods of EIS applied to biological tissue, summarizes its application specifically to quality assessment of meat and fish, and discusses challenges and future trends of EIS technology in this field.
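
    As a concrete illustration of the kind of model EIS data are fitted to, the sketch below computes the complex impedance of a minimal tissue-like equivalent circuit, a series resistance in front of a parallel RC branch; the component values are arbitrary examples, not meat- or fish-specific parameters:

        import numpy as np

        def impedance(freq_hz, r_s=50.0, r_p=500.0, c=1e-6):
            # Z = R_s + R_p / (1 + j*omega*R_p*C): extracellular path in
            # series with a membrane-like parallel RC branch.
            omega = 2.0 * np.pi * freq_hz
            return r_s + r_p / (1.0 + 1j * omega * r_p * c)

        for f in np.logspace(0, 6, 7):      # 1 Hz to 1 MHz sweep
            z = impedance(f)
            print(f"{f:10.0f} Hz  |Z| = {abs(z):7.1f} ohm  "
                  f"phase = {np.degrees(np.angle(z)):6.1f} deg")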

  18. Zymography Principles.

    Science.gov (United States)

    Wilkesman, Jeff; Kurz, Liliana

    2017-01-01

    Zymography, the detection, identification, and even quantification of enzyme activity fractionated by gel electrophoresis, has received increasing attention in recent years, as revealed by the number of articles published. A number of enzymes, especially those of clinical interest, are routinely detected by zymography. This introductory chapter reviews the major principles behind zymography. New advances in this method are mainly focused on two-dimensional zymography and transfer zymography, as explained in the remaining chapters. Some general considerations for performing the experiments are outlined, as well as the major troubleshooting and safety issues necessary for correctly running the electrophoresis.

  19. Some kinematics and dynamics from a superposition of two axisymmetric stellar systems

    International Nuclear Information System (INIS)

    Cubarsi i Morera, R.

    1990-01-01

    Some kinematic and dynamic implications of a superposition of two stellar systems are studied. For the general case of a stellar system in a nonsteady state, Chandrasekhar's axially symmetrical model has been adopted for each of the subsystems. The solution obtained for the potential function provides some kinematical constraints between the subsystems. These relationships are derived using the partial centered moments of the velocity distribution and the subcentroid velocities in order to study the velocity distribution. They are used to prove that only in a stellar system where the potential function is assumed to be stationary can the relative movement of the local subcentroids (not only in rotation), the vertex-deviation phenomenon, and the whole set of second-order centered moments be explained. A qualitative verification with three stellar samples in the solar neighborhood is carried out. 41 refs

  20. Enhancing quantum entanglement for continuous variables by a coherent superposition of photon subtraction and addition

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Su-Yong; Kim, Ho-Joon [Department of Physics, Texas A and M University at Qatar, P.O. Box 23874, Doha (Qatar); Ji, Se-Wan [School of Computational Sciences, Korea Institute for Advanced Study, Seoul 130-012 (Korea, Republic of); Nha, Hyunchul [Department of Physics, Texas A and M University at Qatar, P.O. Box 23874, Doha (Qatar); Institute fuer Quantenphysik, Universitaet Ulm, D-89069 Ulm (Germany)

    2011-07-15

    We investigate how the entanglement properties of a two-mode state can be improved by performing a coherent superposition operation ta + ra† of photon subtraction and addition, proposed by Lee and Nha [Phys. Rev. A 82, 053812 (2010)], on each mode. We show that the degree of entanglement, the Einstein-Podolsky-Rosen-type correlation, and the performance of quantum teleportation can all be enhanced for the output state when the coherent operation is applied to a two-mode squeezed state. The effects of the coherent operation are more prominent than those of mere photon subtraction a and addition a†, particularly in the small-squeezing regime, whereas the optimal operation becomes photon subtraction (the case r=0) in the large-squeezing regime.
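
    For reference, the coherent operation and the state it acts on can be written compactly; the display below uses the standard forms implied by the abstract (mode operators a, a† and squeezing parameter λ = tanh s) and is a paraphrase, not a quotation from the paper:

        \hat{O} = t\,\hat{a} + r\,\hat{a}^{\dagger}, \qquad |t|^{2} + |r|^{2} = 1,
        \qquad
        |\mathrm{TMSV}\rangle = \sqrt{1-\lambda^{2}}\,\sum_{n=0}^{\infty}\lambda^{n}\,|n,n\rangle,
        \quad \lambda = \tanh s,

    with the (unnormalized) output state of interest proportional to (\hat{O}_{1}\otimes\hat{O}_{2})\,|\mathrm{TMSV}\rangle.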

  1. Final Aperture Superposition Technique applied to fast calculation of electron output factors and depth dose curves

    International Nuclear Information System (INIS)

    Faddegon, B.A.; Villarreal-Barajas, J.E.

    2005-01-01

    The Final Aperture Superposition Technique (FAST) is described and applied to accurate, near-instantaneous calculation of the relative output factor (ROF) and central-axis percentage depth dose curve (PDD) for clinical electron beams used in radiotherapy. FAST is based on precalculation of dose at select points for the two extreme situations of a fully open final aperture and a final aperture with no opening (fully shielded). The technique differs from conventional superposition of dose-deposition kernels: the precalculated dose is differential in the position of the electron or photon at the downstream surface of the insert. The calculation for a particular aperture (x-ray jaws or MLC, insert in electron applicator) is done by superposition of the precalculated dose data, using the open-field data over the open part of the aperture and the fully shielded data over the remainder. The calculation takes explicit account of all interactions in the shielded region of the aperture except the collimator effect: particles that pass from the open part into the shielded part, or vice versa. For the clinical demonstration, FAST was compared to full Monte Carlo simulation of 10x10, 2.5x2.5, and 2x8 cm² inserts. Dose was calculated to 0.5% precision in 0.4x0.4x0.2 cm³ voxels, spaced at 0.2 cm depth intervals along the central axis, using detailed Monte Carlo simulation of the treatment head of a commercial linear accelerator for six different electron beams with energies of 6-21 MeV. Each simulation took several hours on a personal computer with a 1.7 GHz processor. The calculation for the individual inserts, done with superposition, was completed in under a second on the same PC. Since the simulations for the precalculation are performed only once, higher precision and resolution can be obtained without increasing the calculation time for individual inserts. Fully shielded contributions were largest for small fields and high beam energy, at the surface, reaching a maximum

  2. EPR, optical and superposition model study of Mn2+ doped L+ glutamic acid

    Science.gov (United States)

    Kripal, Ram; Singh, Manju

    2015-12-01

    An electron paramagnetic resonance (EPR) study of a Mn2+-doped L+ glutamic acid single crystal was performed at room temperature. Four interstitial sites are observed, and the spin Hamiltonian parameters are calculated with the help of a large number of resonant lines for various angular positions of the external magnetic field. An optical absorption study was also carried out at room temperature. The energy values for different orbital levels are calculated, and the observed bands are assigned as transitions from the 6A1g(S) ground state to various excited states. With the help of these assigned bands, the Racah inter-electronic repulsion parameters B = 869 cm-1 and C = 2080 cm-1 and the cubic crystal-field splitting parameter Dq = 730 cm-1 are calculated. The zero-field splitting (ZFS) parameters D and E are calculated by perturbation formulae from the crystal-field parameters obtained using the superposition model. The calculated values of the ZFS parameters are in good agreement with the experimental values obtained by EPR.

  3. Superposition of two optical vortices with opposite integer or non-integer orbital angular momentum

    Directory of Open Access Journals (Sweden)

    Carlos Fernando Díaz Meza

    2016-01-01

    This work develops a brief proposal to achieve the superposition of two opposite vortex beams, both with integer or non-integer mean value of the orbital angular momentum. The first part concerns the generation of this kind of spatial light distribution through a modified Brown and Lohmann hologram. The inclusion of a simple mathematical expression in the pixelated grid's transmittance function, based on Fourier-domain properties, shifts the diffraction orders counterclockwise and clockwise to the same point and allows the addition of different modes. The strategy is theoretically and experimentally validated for the case of two helical wavefronts of opposite rotation.
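
    For the integer case, the physics of the superposition is easy to reproduce numerically: two opposite vortices exp(±ilφ) interfere into a 2l-petal intensity pattern. A minimal sketch of that check (pure phase superposition on a Gaussian envelope, leaving out the hologram-generation step that is the subject of the paper):

        import numpy as np

        l = 3                                  # topological charge (example)
        x = np.linspace(-1.0, 1.0, 256)
        xx, yy = np.meshgrid(x, x)
        phi = np.arctan2(yy, xx)               # azimuthal angle
        rho = np.hypot(xx, yy)

        # Superpose two opposite vortices on a common Gaussian envelope.
        field = np.exp(-rho**2) * (np.exp(1j * l * phi) + np.exp(-1j * l * phi))
        intensity = np.abs(field) ** 2         # shows 2*l azimuthal petals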

  4. Proportional fair scheduling with superposition coding in a cellular cooperative relay system

    DEFF Research Database (Denmark)

    Kaneko, Megumi; Hayashi, Kazunori; Popovski, Petar

    2013-01-01

    Many works have tackled the problem of throughput and fairness optimization in cellular cooperative relaying systems. Considering first a two-user relay broadcast channel, we design a scheme based on superposition coding (SC) which maximizes the achievable sum-rate under a proportional fairness constraint. Unlike most relaying schemes, where users are allocated orthogonally, our scheme serves the two users simultaneously on the same time-frequency resource unit by superposing their messages into three SC layers. The optimal power allocation parameters of each SC layer are derived by analysis. Next, we consider the general multi-user case in a cellular relay system, for which we design resource allocation algorithms based on proportional fair scheduling exploiting the proposed SC-based scheme. Numerical results show that the proposed algorithms allowing simultaneous user allocation
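
    In two-user superposition coding, the transmit power is split between the users' layers: the stronger user removes the weaker user's layer by successive interference cancellation, while the weaker user decodes its own layer treating the other as noise. The sketch below uses the textbook two-layer broadcast-channel rate expressions to locate a proportionally fair power split; it illustrates the principle only, not the three-layer allocation derived in the paper:

        import numpy as np

        def sc_rates(alpha, p, g_strong, g_weak, n0=1.0):
            # alpha: fraction of power P assigned to the strong user's layer.
            r_strong = np.log2(1.0 + alpha * p * g_strong / n0)   # after SIC
            r_weak = np.log2(1.0 + (1.0 - alpha) * p * g_weak
                             / (alpha * p * g_weak + n0))         # sees interference
            return r_strong, r_weak

        alphas = np.linspace(0.01, 0.99, 99)
        r1, r2 = sc_rates(alphas, p=10.0, g_strong=1.0, g_weak=0.1)
        best = np.argmax(np.log(r1) + np.log(r2))  # proportional fairness metric
        print(alphas[best], r1[best], r2[best])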

  5. Digital coherent superposition of optical OFDM subcarrier pairs with Hermitian symmetry for phase noise mitigation.

    Science.gov (United States)

    Yi, Xingwen; Chen, Xuemei; Sharma, Dinesh; Li, Chao; Luo, Ming; Yang, Qi; Li, Zhaohui; Qiu, Kun

    2014-06-02

    Digital coherent superposition (DCS) provides an approach to combat fiber nonlinearities by trading off spectral efficiency. In analogy, we extend the concept of DCS to optical OFDM subcarrier pairs with Hermitian symmetry to combat linear and nonlinear phase noise. At the transmitter, we simply use a real-valued OFDM signal to drive a Mach-Zehnder (MZ) intensity modulator biased at the null point, and the OFDM signal so generated is Hermitian in the frequency domain. At the receiver, after conventional OFDM signal processing, we conduct DCS of the optical OFDM subcarrier pairs, which requires only conjugation and summation. We show that the inter-carrier interference (ICI) due to phase noise can be reduced because of the Hermitian symmetry. In simulation, this method improves the tolerance to laser phase noise. In a nonlinear WDM transmission experiment, it also achieves better performance under the influence of cross-phase modulation (XPM).
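
    The receiver-side superposition is literally one conjugation and one addition per subcarrier pair. The numpy sketch below illustrates the mechanism on a single OFDM symbol, assuming subcarrier k and its Hermitian partner N-k carry conjugate copies of the same data and suffer a common phase error; the rotation then cancels to first order in the pair average:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 64                                    # subcarriers (example)
        bits = rng.integers(0, 2, (2, n // 2 - 1)) * 2 - 1
        data = bits[0] + 1j * bits[1]             # QPSK-like symbols

        # Hermitian mapping: X[k] = S_k, X[n-k] = conj(S_k).
        x = np.zeros(n, dtype=complex)
        x[1:n // 2] = data
        x[n // 2 + 1:] = np.conj(data[::-1])

        y = x * np.exp(1j * 0.2)                  # common phase error

        # Digital coherent superposition: conjugate one partner, then sum.
        recovered = 0.5 * (y[1:n // 2] + np.conj(y[:n // 2:-1]))
        print(np.max(np.abs(recovered - data * np.cos(0.2))))  # ~0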

  6. Performance Analysis of Diversity-Controlled Multi-User Superposition Transmission for 5G Wireless Networks.

    Science.gov (United States)

    Yeom, Jeong Seon; Chu, Eunmi; Jung, Bang Chul; Jin, Hu

    2018-02-10

    In this paper, we propose a novel low-complexity multi-user superposition transmission (MUST) technique for 5G downlink networks, which allows multiple cell-edge users to be multiplexed with a single cell-center user. We call the proposed technique the diversity-controlled MUST technique, since the cell-center user enjoys a frequency diversity effect via signal repetition over multiple orthogonal frequency division multiplexing (OFDM) subcarriers. We assume that the base station is equipped with a single antenna while the users are equipped with multiple antennas, and that quadrature phase shift keying (QPSK) modulation is used for the users. We mathematically analyze the bit error rate (BER) of both the cell-edge and cell-center users, which, to the best of our knowledge, is the first such theoretical result in the literature. The mathematical analysis is validated through extensive link-level simulations.

  7. Strong-field effects in Rabi oscillations between a single state and a superposition of states

    International Nuclear Information System (INIS)

    Zhdanovich, S.; Milner, V.; Hepburn, J. W.

    2011-01-01

    Rabi oscillations of quantum population are known to occur in two-level systems driven by spectrally narrow laser fields. In this work we study Rabi oscillations induced by shaped broadband femtosecond laser pulses. Due to the broad spectral width of the driving field, the oscillations are initiated between a ground state and a coherent superposition of excited states, or "wave packet", rather than a single excited state. Our experiments reveal an intricate dependence of the wave-packet phase on the intensity of the laser field. We confirm numerically that the effect is associated with the strong-field nature of the interaction and provide a qualitative picture by invoking a simple theoretical model.

  8. Quantum tele-amplification with a continuous-variable superposition state

    DEFF Research Database (Denmark)

    Neergaard-Nielsen, Jonas S.; Eto, Yujiro; Lee, Chang-Woo

    2013-01-01

    Optical coherent states are classical light fields with high purity, and are essential carriers of information in optical networks. If these states could be controlled in the quantum regime, allowing for their quantum superposition (referred to as a Schrödinger-cat state), then novel quantum-enhanced functions such as coherent-state quantum computing (CSQC), quantum metrology and a quantum repeater could be realized in the networks. Optical cat states are now routinely generated in laboratories. An important next challenge is to use them for implementing the aforementioned functions. Here, we demonstrate a basic CSQC protocol, where a cat state is used as an entanglement resource for teleporting a coherent state with an amplitude gain. We also show how this can be extended to a loss-tolerant quantum relay of multi-ary phase-shift keyed coherent states. These protocols could be useful in both

  9. Application of Fermat's Principle to Calculation of the Errors of Acoustic Flow-Rate Measurements for a Three-Dimensional Fluid Flow or Gas

    Science.gov (United States)

    Petrov, A. G.; Shkundin, S. Z.

    2018-01-01

    Fermat's variational principle is used to derive a formula for the propagation time of a sonic signal between two fixed points A and B in a steady three-dimensional flow of a fluid or gas. It is shown that the fluid flow changes the time of signal reception by an amount proportional to the flow rate, independently of the velocity profile. The difference between the reception times of signals sent from B to A and from A to B is, to high accuracy, proportional to the flow rate. It is shown that the relative error of the formula does not exceed the square of the largest Mach number. This makes it possible to measure the flow rate of a fluid or gas with an arbitrary steady subsonic velocity field.
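
    The practical content of this result is the transit-time principle used in ultrasonic flow metering. A toy one-dimensional sketch (uniform mean velocity v along a path of length L, sound speed c) shows the reciprocal time difference and the Mach-number-squared error bound stated above:

        c = 343.0      # speed of sound (m/s)
        L = 0.5        # acoustic path length (m)
        v = 5.0        # mean flow velocity along the path (m/s)

        t_down = L / (c + v)           # signal travelling with the flow (A -> B)
        t_up = L / (c - v)             # signal travelling against the flow (B -> A)
        dt = t_up - t_down             # = 2*L*v / (c**2 - v**2) exactly

        v_est = dt * c**2 / (2.0 * L)  # first-order inversion used by flow meters
        print(v_est, abs(v_est - v) / v, (v / c) ** 2)
        # relative error of the linear formula ~ (v/c)**2, the Mach number squared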

  10. The Features of Moessbauer Spectra of Hemoglobins: Approximation by Superposition of Quadrupole Doublets or by Quadrupole Splitting Distribution?

    International Nuclear Information System (INIS)

    Oshtrakh, M. I.; Semionkin, V. A.

    2004-01-01

    Moessbauer spectra of hemoglobins show characteristic features at liquid nitrogen temperature: a non-Lorentzian asymmetric line shape for oxyhemoglobins and a symmetric Lorentzian line shape for deoxyhemoglobins. A comparison of the approximation of the hemoglobin Moessbauer spectra by a superposition of two quadrupole doublets and by a distribution of the quadrupole splitting demonstrates that the superposition of two quadrupole doublets is more reliable and may reflect the non-equivalent iron electronic structure and stereochemistry in the α- and β-subunits of the hemoglobin tetramers.

  11. Classification of high-resolution remote sensing images based on multi-scale superposition

    Science.gov (United States)

    Wang, Jinliang; Gao, Wenjie; Liu, Guangjie

    2017-07-01

    Landscape structures and processes show different characteristics at different scales. In the study of specific target landmarks, the most appropriate scale for images can be attained by scale conversion, which improves the accuracy and efficiency of feature identification and classification. In this paper, the authors carried out multi-scale classification experiments, taking the Shangri-La area in the north-western Yunnan province as the research area and images from SPOT5 HRG and the GF-1 satellite as data sources. First, the authors upscaled the two images by cubic convolution and calculated the optimal scale for the different ground objects in the images by variation functions. Then the authors conducted multi-scale superposition classification by maximum likelihood and evaluated the classification accuracy. The results indicate that: (1) for most ground objects, the optimal scale is coarser than the original one. Specifically, water has the coarsest optimal scale, around 25-30 m; farmland, grassland, brushwood, roads, settlements and woodland follow with 20-24 m. The optimal scale for shade and flood land is essentially the same as the original one, i.e. 8 m and 10 m respectively. (2) Regarding the classification of the multi-scale superposed images, the overall accuracy for SPOT5 HRG and GF-1 is 12.84% and 14.76% higher, respectively, than that of the original multi-spectral images, and the Kappa coefficient is 0.1306 and 0.1419 higher, respectively. Hence, the multi-scale superposition classification applied to the research area can enhance the classification accuracy of remote sensing images.
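
    The two computational steps named above are easy to sketch: cubic (order-3) resampling to a coarser grid, followed by a per-class Gaussian maximum-likelihood rule. The fragment below uses random stand-in data and a single band; in practice the class means and variances come from training samples on each band:

        import numpy as np
        from scipy.ndimage import zoom

        rng = np.random.default_rng(0)
        band = rng.random((400, 400))               # stand-in image band

        # Resample 8 m pixels to 20 m pixels with cubic interpolation,
        # approximating cubic-convolution rescaling.
        coarse = zoom(band, 8.0 / 20.0, order=3)

        def ml_classify(img, means, variances):
            # Gaussian maximum likelihood: per pixel, pick the class whose
            # (log) density is highest.
            m = np.asarray(means)[:, None, None]
            v = np.asarray(variances)[:, None, None]
            logp = -0.5 * np.log(2 * np.pi * v) - (img[None] - m) ** 2 / (2 * v)
            return np.argmax(logp, axis=0)

        labels = ml_classify(coarse, means=[0.3, 0.7], variances=[0.01, 0.02])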

  12. Itch Management: General Principles.

    Science.gov (United States)

    Misery, Laurent

    2016-01-01

    Like pain, itch is a challenging condition that needs to be managed. Within this setting, the first principle of itch management is to obtain an appropriate diagnosis in order to perform etiology-oriented therapy. In several cases it is not possible to treat the cause: the etiology is undetermined, there are several causes, or the etiological treatment is not effective enough to alleviate the itch completely. This is why symptomatic treatment is also needed. In all patients, psychological support and associated pragmatic measures may be helpful. General principles and guidelines are required, yet patient-centered individual care remains fundamental. © 2016 S. Karger AG, Basel.

  13. Green function as an integral superposition of Gaussian beams in inhomogeneous anisotropic layered structures in Cartesian coordinates

    Czech Academy of Sciences Publication Activity Database

    Červený, V.; Pšenčík, Ivan

    2016-01-01

    Roč. 26 (2016), s. 131-153 ISSN 2336-3827 R&D Projects: GA ČR(CZ) GA16-05237S Institutional support: RVO:67985530 Keywords: elastodynamic Green function * inhomogeneous anisotropic media * integral superposition of Gaussian beams Subject RIV: DC - Seismology, Volcanology, Earth Structure

  14. Measurable Changes in Pre-Post Test Scores in Iraqi 4-H Leader’s Knowledge of Animal Science Production Principles

    Directory of Open Access Journals (Sweden)

    Justen O. Smith

    2015-06-01

    The 4-H volunteer program is a new concept to the people of Iraq; for decades the country was closed to Western ideas. Iraqi culture and Arabic customs have not embraced the volunteer concept, much less the concept of scientific animal production technologies designed to increase profitability for producers. In 2011 the USAID-Inma Agribusiness program teamed with the Iraq 4-H program to create youth and community entrepreneurship opportunities for widowed families. Iraq 4-H provided the youth members and adult volunteers, and Inma provided the financial capital (livestock) and the animal science training program for the volunteers. The purpose of this study was to measure the knowledge gained through intensive animal science training by Iraqi 4-H volunteers. The researchers designed and implemented pre- and post-tests to measure the knowledge of the fifteen volunteers who participated in the three-day course. The pretest exposed a general lack of animal science knowledge among the volunteers: over 80% of the participants answered the questions incorrectly. The post-test, however, indicated a positive change in the participants' understanding of animal science production principles.

  15. Experimental observation of constructive superposition of wakefields generated by electron bunches in a dielectric-lined waveguide

    Directory of Open Access Journals (Sweden)

    S. V. Shchelkunov

    2006-01-01

    We report results from an experiment that demonstrates the successful superposition of wakefields excited by 50 MeV bunches which travel ∼50 cm along the axis of a cylindrical waveguide lined with alumina. The bunches are prepared by splitting a single laser pulse into two pulses prior to focusing it onto the cathode of an rf gun, and inserting an optical delay in the path of one of them. Wakefields from two short (5-6 psec), 0.15-0.35 nC bunches are superimposed, and the energy loss of each bunch is measured as the separation between the bunches is varied so as to encompass approximately one wakefield period (∼21 cm). A spectrum of ∼40 TM_{0m} eigenmodes is excited by the bunch. A substantial retarding wakefield (2.65 MV/m·nC for just the first bunch) develops because of the short bunch length and the narrow vacuum channel diameter (3 mm) through which the bunches move. The energy loss of the second bunch exhibits a narrow peak when the bunch spacing is varied by only 4 mm (13.5 psec). This experiment is compared with a related experiment reported by a group at the Argonne National Laboratory, in which the bunch spacing was not varied and a much weaker retarding wakefield (∼0.1 MV/m·nC for the first bunch), comprising only about 10 eigenmodes, was excited by a train of long (∼9 mm) bunches.

  16. Monte Carlo evaluation of the convolution/superposition algorithm of Hi-Art tomotherapy in heterogeneous phantoms and clinical cases

    International Nuclear Information System (INIS)

    Sterpin, E.; Salvat, F.; Olivera, G.; Vynckier, S.

    2009-01-01

    The reliability of the convolution/superposition (C/S) algorithm of the Hi-Art tomotherapy system is evaluated using the Monte Carlo model TomoPen, which has already been validated for homogeneous phantoms. The study was performed in three stages. First, measurements with EBT Gafchromic film for a 1.25x2.5 cm² field in a heterogeneous phantom consisting of two slabs of polystyrene separated by Styrofoam were compared to simulation results from TomoPen. The excellent agreement found in this comparison justifies the use of TomoPen as the reference for the remaining parts of this work. Second, to allow analysis and interpretation of the results in clinical cases, dose distributions calculated with TomoPen and C/S were compared for a similar phantom geometry with multiple slabs of various densities. Even in conditions lacking lateral electronic equilibrium, overall good agreement was obtained between C/S and TomoPen results, with deviations within 3%/2 mm, showing that the C/S algorithm accounts for modifications in secondary electron transport due to the presence of a low-density medium. Finally, dose distributions were calculated with TomoPen and C/S for various clinical cases, from large bilateral head and neck tumors to small lung tumors less than 3 cm in diameter. To ensure a fair comparison, identical dose calculation grids and dose-volume histogram calculators were used. Very good agreement was obtained for most of the cases, with no significant differences between the DVHs obtained from the two calculations. However, deviations of up to 4% in the dose received by 95% of the target volume were found for the small lung tumors. Therefore, the approximations in the C/S algorithm slightly affect the accuracy for small lung tumors, even though the C/S algorithm of the tomotherapy system shows very good overall behavior.

  17. Monte Carlo evaluation of the convolution/superposition algorithm of Hi-Art tomotherapy in heterogeneous phantoms and clinical cases

    Energy Technology Data Exchange (ETDEWEB)

    Sterpin, E.; Salvat, F.; Olivera, G.; Vynckier, S. [Department of Radiotherapy, Saint-Luc University Hospital, Universite Catholique de Louvain, 10 Avenue Hippocrate, 1200 Brussels (Belgium); Facultat de Fisica (ECM), Universitat de Barcelona, Diagonal 647, 08028 Barcelona (Spain); Tomotherapy Inc., 1240 Deming Way, Madison, Wisconsin 53717 and Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Department of Radiotherapy, Saint-Luc University Hospital, Universite Catholique de Louvain, 10 Avenue Hippocrate, 1200 Brussels (Belgium)

    2009-05-15

    The reliability of the convolution/superposition (C/S) algorithm of the Hi-Art tomotherapy system is evaluated using the Monte Carlo model TomoPen, which has already been validated for homogeneous phantoms. The study was performed in three stages. First, measurements with EBT Gafchromic film for a 1.25x2.5 cm² field in a heterogeneous phantom consisting of two slabs of polystyrene separated by Styrofoam were compared to simulation results from TomoPen. The excellent agreement found in this comparison justifies the use of TomoPen as the reference for the remaining parts of this work. Second, to allow analysis and interpretation of the results in clinical cases, dose distributions calculated with TomoPen and C/S were compared for a similar phantom geometry with multiple slabs of various densities. Even in conditions lacking lateral electronic equilibrium, overall good agreement was obtained between C/S and TomoPen results, with deviations within 3%/2 mm, showing that the C/S algorithm accounts for modifications in secondary electron transport due to the presence of a low-density medium. Finally, dose distributions were calculated with TomoPen and C/S for various clinical cases, from large bilateral head and neck tumors to small lung tumors less than 3 cm in diameter. To ensure a fair comparison, identical dose calculation grids and dose-volume histogram calculators were used. Very good agreement was obtained for most of the cases, with no significant differences between the DVHs obtained from the two calculations. However, deviations of up to 4% in the dose received by 95% of the target volume were found for the small lung tumors. Therefore, the approximations in the C/S algorithm slightly affect the accuracy for small lung tumors, even though the C/S algorithm of the tomotherapy system shows very good overall behavior.

  18. Application of the similitude principle to gamma-gamma density measurements

    Energy Technology Data Exchange (ETDEWEB)

    Czubek, J A [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires. Departement d' Electronique Generale, Service d' Electronique Industrielle; Institut de Recherches Nucleaires, Dep. VI, Cracow (Poland)

    1966-07-01

    The work presented here deals with the application of the similitude principle to rock density measurements by the gamma-gamma method. A formula is presented which makes it possible to transform the results of gamma-gamma measurements carried out on models so that they can be compared with results obtained under actual field conditions. Both the space coordinates and the densities are transformed. This transformation makes it possible to obtain a calibration curve as a function of density for a gamma-gamma probe using only a single model of given density. The influence of the chemical composition on the results of gamma-gamma measurements has also been studied. A method has been developed for estimating the equivalent Z parameter of the medium, and the possibility of completely eliminating the influence of the chemical composition of the medium on the density measurement results has been examined. (author)

  19. Noise-based logic hyperspace with the superposition of 2^N states in a single wire

    Energy Technology Data Exchange (ETDEWEB)

    Kish, Laszlo B. [Texas A and M University, Department of Electrical and Computer Engineering, College Station, TX 77843-3128 (United States)], E-mail: laszlo.kish@ece.tamu.edu; Khatri, Sunil; Sethuraman, Swaminathan [Texas A and M University, Department of Electrical and Computer Engineering, College Station, TX 77843-3128 (United States)

    2009-05-11

    In the introductory paper [L.B. Kish, Phys. Lett. A 373 (2009) 911] on noise-based logic, we showed how simple superpositions of single logic basis vectors can be achieved in a single wire. The superposition components were the N orthogonal logic basis vectors. Supposing that the different logic values have 'on/off' states only, the resultant discrete superposition state represents a single number with N-bit accuracy in a single wire, where N is the number of orthogonal logic vectors in the base. In the present Letter, we show that the logic hyperspace (product) vectors defined in the introductory paper can be generalized to provide the discrete superposition of 2^N orthogonal system states. This is equivalent to a multi-valued logic system with 2^(2^N) logic values per wire. The situation is similar to quantum informatics with N qubits, and hence we introduce the notion of the noise-bit. This system has major differences compared to quantum informatics: the noise-based logic system is deterministic, and each superposition element is instantly accessible with high digital accuracy, via real hardware parallelism, without decoherence and error correction, and without the requirement of repeating the logic operation many times to extract the probabilistic information. Moreover, the states in noise-based logic do not have to be normalized, and non-unitary operations can also be used. As an example, we introduce a string search algorithm which is O(√M) times faster than Grover's quantum algorithm (where M is the number of string entries), while it has the same hardware complexity class as the quantum algorithm.
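
    The single-wire superposition itself has a simple classical illustration: independent reference noises act as orthogonal basis vectors, a weighted sum of them travels on the wire, and each component is read out by time-averaged correlation with its reference. The toy sketch below shows only this basic mechanism from the introductory paper, not the 2^N hyperspace construction of the present one:

        import numpy as np

        rng = np.random.default_rng(7)
        n_basis, n_samples = 8, 200_000

        # Independent zero-mean, unit-variance noises: the logic basis vectors.
        refs = rng.normal(size=(n_basis, n_samples))

        bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # which basis vectors are 'on'
        wire = bits @ refs                          # superposition on one wire

        # Each component is instantly accessible via correlation.
        readout = refs @ wire / n_samples           # ~1 where 'on', ~0 where 'off'
        print(np.round(readout, 2))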

  20. Variational principle for the Pareto power law.

    Science.gov (United States)

    Chakraborti, Anirban; Patriarca, Marco

    2009-11-27

    A mechanism is proposed for the appearance of power-law distributions in various complex systems. It is shown that in a conservative mechanical system composed of subsystems with different numbers of degrees of freedom a robust power-law tail can appear in the equilibrium distribution of energy as a result of certain superpositions of the canonical equilibrium energy densities of the subsystems. The derivation only uses a variational principle based on the Boltzmann entropy, without assumptions outside the framework of canonical equilibrium statistical mechanics. Two examples are discussed, free diffusion on a complex network and a kinetic model of wealth exchange. The mechanism is illustrated in the general case through an exactly solvable mechanical model of a dimensionally heterogeneous system.

  1. Heuristic Relative Entropy Principles with Complex Measures: Large-Degree Asymptotics of a Family of Multi-variate Normal Random Polynomials

    Science.gov (United States)

    Kiessling, Michael Karl-Heinz

    2017-10-01

    Let z\\in C, let σ ^2>0 be a variance, and for N\\in N define the integrals E_N^{}(z;σ ) := {1/σ } \\int _R\\ (x^2+z^2) e^{-{1/2σ^2 x^2}}{√{2π }}/dx \\quad if N=1, {1/σ } \\int _{R^N} \\prod \\prod \\limits _{1≤ k1. These are expected values of the polynomials P_N^{}(z)=\\prod _{1≤ n≤ N}(X_n^2+z^2) whose 2 N zeros ± i X_k^{}_{k=1,\\ldots ,N} are generated by N identically distributed multi-variate mean-zero normal random variables {X_k}N_{k=1} with co-variance {Cov}_N^{}(X_k,X_l)=(1+σ ^2-1/N)δ _{k,l}+σ ^2-1/N(1-δ _{k,l}). The E_N^{}(z;σ ) are polynomials in z^2, explicitly computable for arbitrary N, yet a list of the first three E_N^{}(z;σ ) shows that the expressions become unwieldy already for moderate N—unless σ = 1, in which case E_N^{}(z;1) = (1+z^2)^N for all z\\in C and N\\in N. (Incidentally, commonly available computer algebra evaluates the integrals E_N^{}(z;σ ) only for N up to a dozen, due to memory constraints). Asymptotic evaluations are needed for the large- N regime. For general complex z these have traditionally been limited to analytic expansion techniques; several rigorous results are proved for complex z near 0. Yet if z\\in R one can also compute this "infinite-degree" limit with the help of the familiar relative entropy principle for probability measures; a rigorous proof of this fact is supplied. Computer algebra-generated evidence is presented in support of a conjecture that a generalization of the relative entropy principle to signed or complex measures governs the N→ ∞ asymptotics of the regime iz\\in R. Potential generalizations, in particular to point vortex ensembles and the prescribed Gauss curvature problem, and to random matrix ensembles, are emphasized.

  2. Quantum Experiments and Graphs: Multiparty States as Coherent Superpositions of Perfect Matchings

    Science.gov (United States)

    Krenn, Mario; Gu, Xuemei; Zeilinger, Anton

    2017-12-01

    We show a surprising link between experimental setups to realize high-dimensional multipartite quantum states and graph theory. In these setups, the paths of photons are identified such that the photon-source information is never created. We find that each of these setups corresponds to an undirected graph, and every undirected graph corresponds to an experimental setup. Every term in the emerging quantum superposition corresponds to a perfect matching in the graph. Calculating the final quantum state is in the #P-complete complexity class, thus it cannot be done efficiently. To strengthen the link further, theorems from graph theory—such as Hall's marriage problem—are rephrased in the language of pair creation in quantum experiments. We show explicitly how this link allows one to answer questions about quantum experiments (such as which classes of entangled states can be created) with graph theoretical methods, and how to potentially simulate properties of graphs and networks with quantum experiments (such as critical exponents and phase transitions).

  3. A Bethe ansatz solvable model for superpositions of Cooper pairs and condensed molecular bosons

    International Nuclear Information System (INIS)

    Hibberd, K.E.; Dunning, C.; Links, J.

    2006-01-01

    We introduce a general Hamiltonian describing coherent superpositions of Cooper pairs and condensed molecular bosons. For particular choices of the coupling parameters, the model is integrable. One integrable manifold, as well as the Bethe ansatz solution, was found by Dukelsky et al. [J. Dukelsky, G.G. Dussel, C. Esebbag, S. Pittel, Phys. Rev. Lett. 93 (2004) 050403]. Here we show that there is a second integrable manifold, established using the boundary quantum inverse scattering method. In this manner we obtain the exact solution by means of the algebraic Bethe ansatz. In the case where the Cooper pair energies are degenerate, we examine the relationship between the spectrum of these integrable Hamiltonians and the quasi-exactly solvable spectrum of particular Schrödinger operators. For the solution derived here, the potential of the Schrödinger operator is given in terms of hyperbolic functions. For the solution derived by Dukelsky et al., loc. cit., the potential is sextic and the wavefunctions obey PT-symmetric boundary conditions. This latter case provides a novel example of an integrable Hermitian Hamiltonian acting on a Fock space whose states map into a Hilbert space of PT-symmetric wavefunctions defined on a contour in the complex plane.

  4. Multimodality 3D Superposition and Automated Whole Brain Tractography: Comprehensive Printing of the Functional Brain.

    Science.gov (United States)

    Konakondla, Sanjay; Brimley, Cameron J; Sublett, Jesna Mathew; Stefanowicz, Edward; Flora, Sarah; Mongelluzzo, Gino; Schirmer, Clemens M

    2017-09-29

    Whole-brain tractography using diffusion tensor imaging (DTI) sequences can be used to map cerebral connectivity; however, this can be time-consuming because of the manual image manipulation required, calling for a standardized, automated, and accurate fiber-tracking protocol with automatic whole brain tractography (AWBT). Interpreting conventional two-dimensional (2D) images, such as computed tomography (CT) and magnetic resonance imaging (MRI), as an intraoperative three-dimensional (3D) environment is a difficult task with recognized inter-operator variability. Three-dimensional printing in neurosurgery has gained significant traction in the past decade, and as software, equipment, and practices become more refined, trainee education, surgical skills, research endeavors, innovation, patient education, and outcomes through value-based care are projected to improve. We describe a novel multimodality 3D superposition (MMTS) technique, which fuses multiple imaging sequences alongside cerebral tractography into one patient-specific 3D-printed model. Inferences on cost and improved outcomes, fueled by encouraging patient engagement, are explored.

  5. Level crossings and excess times due to a superposition of uncorrelated exponential pulses

    Science.gov (United States)

    Theodorsen, A.; Garcia, O. E.

    2018-01-01

    A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
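
    The derivation from the joint distribution of the process and its derivative follows the classical Rice approach; in its standard textbook form (this is the generic formula, not the paper's notation), the rate of upward crossings of a threshold ξ and the mean time spent above it per excursion read

        \nu(\xi) = \int_{0}^{\infty} \dot{x}\, p_{X\dot{X}}(\xi,\dot{x})\,\mathrm{d}\dot{x},
        \qquad
        \langle T_{\xi} \rangle = \frac{\Pr[X>\xi]}{\nu(\xi)},

    where p_{XẊ} is the joint density of the process and its time derivative.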

  6. Superposition of elliptic functions as solutions for a large number of nonlinear equations

    International Nuclear Information System (INIS)

    Khare, Avinash; Saxena, Avadh

    2014-01-01

    For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, it also admits solutions in terms of their sum as well as their difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λφ^4, the discrete MKdV, as well as several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn²(x, m), it also admits solutions in terms of dn²(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m) dn(x, m) is not itself a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations.

  7. Identification of distant drug off-targets by direct superposition of binding pocket surfaces.

    Science.gov (United States)

    Schumann, Marcel; Armen, Roger S

    2013-01-01

    Correctly predicting off-targets for a given molecular structure, which would have the ability to bind a large range of ligands, is both particularly difficult and particularly important when they share no significant sequence or fold similarity with the respective molecular target ("distant off-targets"). A novel approach for the identification of off-targets by direct superposition of protein binding-pocket surfaces is presented and applied to a set of well-studied and highly relevant drug targets, including representative kinases and nuclear hormone receptors. The entire Protein Data Bank is searched for similar binding pockets, and convincing distant off-target candidates are identified that share no significant sequence or fold similarity with the respective target structure. These putative target/off-target pairs are further supported by the existence of compounds that bind strongly to both with high topological similarity and, in some cases, by literature examples of individual compounds that bind to both. Our results also clearly show that binding pockets can exhibit a striking surface similarity even when the off-target shares neither significant sequence nor significant fold similarity with the respective molecular target ("distant off-target").

  8. Modeling and Simulation of Voids in Composite Tape Winding Process Based on Domain Superposition Technique

    Science.gov (United States)

    Deng, Bo; Shi, Yaoyao

    2017-11-01

    Tape winding is an effective way to fabricate rotationally symmetric composite products. Nevertheless, some inevitable defects seriously influence the performance of wound products. One of the crucial ways to assess the quality of fiber-reinforced composite products is to examine their void content, and significant improvement in the products' mechanical properties can be achieved by minimizing void defects. Two methods were applied in this study, finite element analysis and experimental testing, to investigate the mechanism of void formation in composite tape winding. Based on the theories of interlayer intimate contact and the Domain Superposition Technique (DST), a three-dimensional model of prepreg tape voids was built in SolidWorks. The ABAQUS simulation software was then used to simulate how the void content changes with pressure and temperature. Finally, a series of experiments was performed to determine the accuracy of the model-based predictions. The results showed that the model is effective for predicting the void content in the composite tape winding process.

  9. Tectonic superposition of the Kurosegawa Terrane upon the Sanbagawa metamorphic belt in eastern Shikoku, southwest Japan

    International Nuclear Information System (INIS)

    Suzuki, Hisashi; Isozaki, Yukio; Itaya, Tetsumaru.

    1990-01-01

    The weakly metamorphosed pre-Cenozoic accretionary complex in the northern part of the Chichibu Belt in Kamikatsu Town, eastern Shikoku, consists of two distinct geologic units: the Northern Unit and the Southern Unit. The Northern Unit is composed mainly of phyllitic pelites and basic tuff with allochthonous blocks of chert and limestone, and possesses mineral parageneses of the glaucophane schist facies. The Southern Unit is composed mainly of phyllitic pelites with allochthonous blocks of sandstone, limestone, massive green rocks, and chert, and possesses mineral parageneses of the pumpellyite-actinolite facies. The Southern Unit tectonically overlies the Northern Unit along the south-dipping Jiganji Fault. K-Ar ages were determined for the recrystallized white micas from 11 samples of pelites and basic tuff in the Northern Unit and from 6 samples of pelites in the Southern Unit. The K-Ar ages of the samples from the Northern Unit range from 129 to 112 Ma, and those from the Southern Unit from 225 to 194 Ma. In terms of metamorphic ages, the Northern Unit and the Southern Unit are referred to the constituents of the Sanbagawa Metamorphic Belt and of the Kurosegawa Terrane, respectively. Thus, the tectonic superposition of these two units in the study area suggests that the Kurosegawa Terrane occupies a higher structural position than the Sanbagawa Metamorphic Belt in eastern Shikoku. (author)

  10. Impact response analysis of cask for spent fuel by dimensional analysis and mode superposition method

    International Nuclear Information System (INIS)

    Kim, Y. J.; Kim, W. T.; Lee, Y. S.

    2006-01-01

    Full text: Due to the potential for accidents, the transportation safety of radioactive material has become extremely important in recent years. The most important means of ensuring safety in the transport of radioactive material is the integrity of the cask. A spent fuel cask generally consists of a cask body and two impact limiters, attached at the top and bottom of the cask body. The cask must meet general requirements and test requirements for normal transport conditions and hypothetical accident conditions in accordance with IAEA regulations. Among the test requirements for hypothetical accident conditions, the 9 m drop test, in which the cask is dropped from a height of 9 m onto an unyielding surface so as to produce maximum damage, is particularly important because it can affect the structural soundness of the cask. So far, the impact response for the 9 m drop test has been analyzed by the finite element method, a complex computational procedure. In this study, empirical equations for the impact forces in the 9 m drop test are formulated by dimensional analysis, and the characteristics of the materials used for the impact limiters are then analysed with these equations. The dynamic impact response of the cask body is also analysed using the mode superposition method, and the corresponding analysis method is proposed. The results are validated by comparison with previous experimental results and finite element analysis results. The present method is simpler than the finite element method and can be used to predict the impact response of the cask
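
    For reference, the mode superposition method expands the transient response of the cask body in its undamped mode shapes, reducing the impact problem to uncoupled single-degree-of-freedom equations; the display below is the standard structural-dynamics form, not notation taken from the paper:

        u(t) = \sum_{i} \phi_{i}\, q_{i}(t),
        \qquad
        \ddot{q}_{i} + 2\zeta_{i}\omega_{i}\dot{q}_{i} + \omega_{i}^{2} q_{i}
        = \frac{\phi_{i}^{\mathsf{T}} F(t)}{\phi_{i}^{\mathsf{T}} M \phi_{i}},

    with M the mass matrix, φ_i the mode shapes, ω_i the natural frequencies, ζ_i the modal damping ratios, and F(t) the impact load.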

  11. Motion Estimation Using the Single-row Superposition-type Planar Compound-like Eye

    Directory of Open Access Journals (Sweden)

    Gwo-Long Lin

    2007-06-01

    Full Text Available How can the compound eye of insects capture the prey so accurately andquickly? This interesting issue is explored from the perspective of computer vision insteadof from the viewpoint of biology. The focus is on performance evaluation of noiseimmunity for motion recovery using the single-row superposition-type planar compound-like eye (SPCE. The SPCE owns a special symmetrical framework with tremendousamount of ommatidia inspired by compound eye of insects. The noise simulates possibleambiguity of image patterns caused by either environmental uncertainty or low resolutionof CCD devices. Results of extensive simulations indicate that this special visualconfiguration provides excellent motion estimation performance regardless of themagnitude of the noise. Even when the noise interference is serious, the SPCE is able todramatically reduce errors of motion recovery of the ego-translation without any type offilters. In other words, symmetrical, regular, and multiple vision sensing devices of thecompound-like eye have statistical averaging advantage to suppress possible noises. Thisdiscovery lays the basic foundation in terms of engineering approaches for the secret of thecompound eye of insects.

  12. Quantum Experiments and Graphs: Multiparty States as Coherent Superpositions of Perfect Matchings.

    Science.gov (United States)

    Krenn, Mario; Gu, Xuemei; Zeilinger, Anton

    2017-12-15

    We show a surprising link between experimental setups to realize high-dimensional multipartite quantum states and graph theory. In these setups, the paths of photons are identified such that the photon-source information is never created. We find that each of these setups corresponds to an undirected graph, and every undirected graph corresponds to an experimental setup. Every term in the emerging quantum superposition corresponds to a perfect matching in the graph. Calculating the final quantum state is in the #P-complete complexity class, thus it cannot be done efficiently. To strengthen the link further, theorems from graph theory-such as Hall's marriage problem-are rephrased in the language of pair creation in quantum experiments. We show explicitly how this link allows one to answer questions about quantum experiments (such as which classes of entangled states can be created) with graph theoretical methods, and how to potentially simulate properties of graphs and networks with quantum experiments (such as critical exponents and phase transitions).

  13. Ultrafast convolution/superposition using tabulated and exponential kernels on GPU

    Energy Technology Data Exchange (ETDEWEB)

    Chen Quan; Chen Mingli; Lu Weiguo [TomoTherapy Inc., 1240 Deming Way, Madison, Wisconsin 53717 (United States)

    2011-03-15

    Purpose: Collapsed-cone convolution/superposition (CCCS) dose calculation is the workhorse for IMRT dose calculation. The authors present a novel algorithm for computing CCCS dose on the modern graphics processing unit (GPU). Methods: The GPU algorithm includes a novel TERMA calculation that has no write conflicts and has linear computational complexity. The CCCS algorithm uses either tabulated or exponential cumulative-cumulative kernels (CCKs) as reported in the literature. The authors have demonstrated that the use of exponential kernels can reduce the computational complexity by one dimension and achieve excellent accuracy. Special attention is paid to the unique architecture of the GPU, especially the memory access pattern, which increases performance by more than tenfold. Results: The tabulated kernel implementation on the GPU is two to three times faster than other GPU implementations reported in the literature. The implementation of CCCS showed significant speedup on the GPU over a single-core CPU. With tabulated CCKs, speedups as high as 70 are observed; with exponential CCKs, speedups as high as 90 are observed. Conclusions: Overall, the GPU algorithm using exponential CCKs is 1000-3000 times faster than a highly optimized single-threaded CPU implementation using tabulated CCKs, while the dose differences are within 0.5% and 0.5 mm. This ultrafast CCCS algorithm will allow many time-sensitive applications to use accurate dose calculation.

  14. A study of radiative properties of fractal soot aggregates using the superposition T-matrix method

    International Nuclear Information System (INIS)

    Liu, Li; Mishchenko, Michael I.; Arnott, W. Patrick

    2008-01-01

    We employ the numerically exact superposition T-matrix method to perform extensive computations of scattering and absorption properties of soot aggregates with varying state of compactness and size. The fractal dimension, D_f, is used to quantify the geometrical mass dispersion of the clusters. The optical properties of soot aggregates for a given fractal dimension are complex functions of the refractive index of the material m, the number of monomers N_S, and the monomer radius a. It is shown that for smaller values of a, the absorption cross section tends to be relatively constant when D_f < 2 but increases noticeably when D_f > 2. However, a systematic reduction in light absorption with increasing D_f is observed for clusters with sufficiently large N_S, m, and a. The scattering cross section and single-scattering albedo increase monotonically as fractals evolve from chain-like to more densely packed morphologies, which is a strong manifestation of the increasing importance of the scattering interaction among spherules. Overall, the results for soot fractals differ profoundly from those calculated for the respective volume-equivalent soot spheres as well as for the respective external mixtures of soot monomers under the assumption that there are no electromagnetic interactions between the monomers. The climate-research implications of our results are discussed.

  15. Testing the Underlying Chemical Principles of the Biotic Ligand Model (BLM) to Marine Copper Systems: Measuring Copper Speciation Using Fluorescence Quenching.

    Science.gov (United States)

    Tait, Tara N; McGeer, James C; Smith, D Scott

    2018-01-01

    Speciation of copper in marine systems strongly influences the ability of copper to cause toxicity. Natural organic matter (NOM) contains many binding sites which provide a protective effect against copper toxicity. The purpose of this study was to characterize copper binding with NOM using fluorescence quenching techniques. Fluorescence quenching of NOM with copper was performed on nine sea water samples. The resulting stability constants and binding capacities were consistent with literature values for marine NOM, showing strong binding with [Formula: see text] values from 7.64 to 10.2 and binding capacities ranging from 15 to 3110 nmol mg [Formula: see text] Free copper concentrations estimated at total dissolved copper concentrations corresponding to previously published rotifer effect concentrations, in the same nine samples, were statistically the same as the range of free copper calculated for the effect concentration in NOM-free artificial seawater. These data confirm the applicability of fluorescence spectroscopy techniques for NOM and copper speciation characterization in sea water and demonstrate that such measured speciation is consistent with the chemical principles underlying the biotic ligand model approach for bioavailability-based metals risk assessment.
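
    As a rough illustration of the fitting task described above, the sketch below fits a generic 1:1 copper-ligand binding model (in the spirit of Ryan-Weber-type quenching models) to synthetic fluorescence titration data. The model form, parameter values, and noise are assumptions for illustration, not the authors' data or exact procedure.

```python
# Hedged sketch: a generic 1:1 Cu-NOM binding model fitted to *synthetic*
# fluorescence-quenching data. Not the authors' data or exact procedure.
import numpy as np
from scipy.optimize import curve_fit

def quench_model(cu_total, logK, L_total, F_ML, F0=100.0):
    """Fluorescence vs total Cu for 1:1 complexation (capacity L_total, constant K)."""
    K = 10.0 ** logK
    b = cu_total + L_total + 1.0 / K
    ML = 0.5 * (b - np.sqrt(b * b - 4.0 * cu_total * L_total))  # bound Cu [M]
    return F0 + (F_ML - F0) * ML / L_total

cu = np.linspace(0.0, 2e-6, 12)                    # total Cu additions [M]
rng = np.random.default_rng(1)
F_obs = quench_model(cu, 8.0, 5e-7, 40.0) + rng.normal(0.0, 0.5, cu.size)

popt, _ = curve_fit(quench_model, cu, F_obs, p0=(7.5, 8e-7, 50.0))
print("fitted logK = %.2f, capacity = %.2e M, F_ML = %.1f" % tuple(popt))
```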

  16. Principles of fluid mechanics

    International Nuclear Information System (INIS)

    Kreider, J.F.

    1985-01-01

    This book is an introduction to fluid mechanics incorporating computer applications. Topics covered are as follows: a brief history; what is a fluid; two classes of fluids: liquids and gases; the continuum model of a fluid; methods of analyzing fluid flows; important characteristics of fluids; fundamentals and equations of motion; fluid statics; dimensional analysis and the similarity principle; laminar internal flows; ideal flow; external laminar and channel flows; turbulent flow; compressible flow; fluid flow measurements.

  17. Using the Principles of F.A.I.R Data to Improve the Measure of Value of Big Data and Big Data Repositories

    Science.gov (United States)

    Richards, C. J.; Wyborn, L. A.; Evans, B. J. K.; Wang, J.; Druken, K. A.; Smillie, J.; Pringle, S.

    2017-12-01

    In a data-intensive world, finding the right data can be time-consuming and, when found, may involve compromises on quality and often considerable extra effort to wrangle it into shape. This is particularly true as users are exploring new and innovative ways of working with data from different sources and scientific domains. It is recognised that the effort and specialist knowledge required to transform datasets to meet these requirements goes beyond the reasonable remit of a single research project or research community. Instead, Government investments in national collaborations like the Australian National University's National Computational Infrastructure (NCI) provide a sustainable way to bring together and transform disparate data collections from a range of disciplines in ways which enable new and innovative analysis and use. With these goals in mind, the NCI established a Data Quality Strategy (DQS) for managing 10 PB of reference data collections, with a particular focus on improving data use and reuse across scientific domains, making the data suitable for use in a high-end computational and data-intensive environment, and supporting programmatic access for a range of applications. Evaluating how effectively we are achieving these goals and maintaining ongoing funding requires demonstration of the value and impact of these data collections. Standard approaches to measuring data value involve basic measures of `data usage' or make an attempt to track data to `research outcomes'. While useful, these measures fail to capture the value of the level of curation or quality assurance in making the data available. To fill this gap, NCI has developed a 3-tiered approach to measuring the return on investment which broadens the concept of value to include improvements in access to and use of the data. Key to this approach was integrating the guiding principles of the Force 11 community's F.A.I.R data into the DQS, because they provide a community-driven standard.

  18. The gauge principle vs. the equivalence principle

    International Nuclear Information System (INIS)

    Gates, S.J. Jr.

    1984-01-01

    Within the context of field theory, it is argued that the role of the equivalence principle may be replaced by the principle of gauge invariance to provide a logical framework for theories of gravitation

  19. Transient change in the shape of premixed burner flame with the superposition of pulsed dielectric barrier discharge

    Science.gov (United States)

    Zaima, Kazunori; Sasaki, Koichi

    2016-08-01

    We investigated the transient phenomena in a premixed burner flame with the superposition of a pulsed dielectric barrier discharge (DBD). The length of the flame was shortened by the superposition of the DBD, indicating the activation of combustion chemical reactions with the help of the plasma. In addition, we observed the modulation of the top position of the unburned gas region and the formation of local minima in the axial distribution of the optical emission intensity of OH. These experimental results reveal an oscillation of the rates of combustion chemical reactions in response to the activation by the pulsed DBD. The cycle of the oscillation was 0.18-0.2 ms, which can be understood as the eigenfrequency of the plasma-assisted combustion reaction system.

  20. Throughput Maximization for Cognitive Radio Networks Using Active Cooperation and Superposition Coding

    KAUST Repository

    Hamza, Doha R.

    2015-02-13

    We propose a three-message superposition coding scheme in a cognitive radio relay network exploiting active cooperation between primary and secondary users. The primary user is motivated to cooperate by the substantial benefits it can reap from this access scenario. Specifically, the time resource is split into three transmission phases: the first two phases are dedicated to primary communication, while the third phase is for the secondary's transmission. We formulate two throughput maximization problems for the secondary network, subject to primary user rate constraints and per-node power constraints, with respect to the time durations of primary transmission and the transmit power of the primary and the secondary users. The first throughput maximization problem assumes a partial power constraint such that the secondary power dedicated to primary cooperation, i.e. for the first two communication phases, is fixed a priori. In the second throughput maximization problem, a total power constraint is assumed over the three phases of communication. The two problems are difficult to solve analytically when the relaying channel gains are strictly greater than each other and strictly greater than the direct link channel gain. However, mathematically tractable lower-bound and upper-bound solutions can be attained for both problems. For both problems, using only the lower-bound solution, we demonstrate significant throughput gains for both the primary and the secondary users through this active cooperation scheme. We find that most of the throughput gains come from minimizing the second-phase transmission time, since the secondary nodes assist the primary communication during this phase. Finally, we demonstrate the superiority of our proposed scheme compared to a number of reference schemes that include best relay selection, dual-hop routing, and an interference channel model.

  1. Throughput Maximization for Cognitive Radio Networks Using Active Cooperation and Superposition Coding

    KAUST Repository

    Hamza, Doha R.; Park, Kihong; Alouini, Mohamed-Slim; Aissa, Sonia

    2015-01-01

    We propose a three-message superposition coding scheme in a cognitive radio relay network exploiting active cooperation between primary and secondary users. The primary user is motivated to cooperate by the substantial benefits it can reap from this access scenario. Specifically, the time resource is split into three transmission phases: the first two phases are dedicated to primary communication, while the third phase is for the secondary's transmission. We formulate two throughput maximization problems for the secondary network, subject to primary user rate constraints and per-node power constraints, with respect to the time durations of primary transmission and the transmit power of the primary and the secondary users. The first throughput maximization problem assumes a partial power constraint such that the secondary power dedicated to primary cooperation, i.e. for the first two communication phases, is fixed a priori. In the second throughput maximization problem, a total power constraint is assumed over the three phases of communication. The two problems are difficult to solve analytically when the relaying channel gains are strictly greater than each other and strictly greater than the direct link channel gain. However, mathematically tractable lower-bound and upper-bound solutions can be attained for both problems. For both problems, using only the lower-bound solution, we demonstrate significant throughput gains for both the primary and the secondary users through this active cooperation scheme. We find that most of the throughput gains come from minimizing the second-phase transmission time, since the secondary nodes assist the primary communication during this phase. Finally, we demonstrate the superiority of our proposed scheme compared to a number of reference schemes that include best relay selection, dual-hop routing, and an interference channel model.

  2. Equivalence principles and electromagnetism

    Science.gov (United States)

    Ni, W.-T.

    1977-01-01

    The implications of the weak equivalence principles are investigated in detail for electromagnetic systems in a general framework. In particular, it is shown that the universality of free-fall trajectories (Galileo weak equivalence principle) does not imply the validity of the Einstein equivalence principle. However, the Galileo principle plus the universality of free-fall rotation states does imply the Einstein principle.

  3. PL-1 program system for generalized Patterson superpositions. [PL1GEN, SYMPL1, and ALSPL1, in PL/1 for IBM 360/65 computer

    Energy Technology Data Exchange (ETDEWEB)

    Hubbard, C.R.; Babich, M.W.; Jacobson, R.A.

    1977-01-01

    A new system of three programs written in PL/1 can calculate symmetry and Patterson superposition maps for triclinic, monoclinic, and orthorhombic space groups, as well as any space group reducible to one of these three. These programs are based on a system of FORTRAN programs developed at Ames Laboratory, but are more general and have expanded utility, especially with regard to large unit cells. The program PL1GEN calculates a direct access data set, SYMPL1 calculates a direct access symmetry map, and ALSPL1 calculates a superposition map using one or multiple superpositions. A detailed description of the use of these programs, including symbolic program listings, is included. 2 tables.

  4. The balance principle in scientific research.

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Wang, Qi

    2012-05-01

    The principles of balance, randomization, control and repetition, which are closely related, constitute the four principles of scientific research. The balance principle is the kernel of the four principles, running through the other three. However, in scientific research the balance principle is often overlooked. If the balance principle is not properly applied, the research conclusions are easily called into question, which may lead to the failure of the whole study. Therefore, it is essential to have a good command of the balance principle in scientific research. This article stresses the definition and function of the balance principle, strategies and detailed measures to improve balance in scientific research, and an analysis of the common mistakes involving the use of the balance principle in scientific research.

  5. Synthetic Elucidation of Design Principles for Molecular Qubits

    Science.gov (United States)

    Graham, Michael James

    Quantum information processing (QIP) is an emerging computational paradigm with the potential to enable a vast increase in computational power, fundamentally transforming fields from structural biology to finance. QIP employs qubits, or quantum bits, as its fundamental units of information, which can exist not just in the classical states of 0 or 1, but in a superposition of the two. In order to successfully perform QIP, this superposition state must be sufficiently long-lived. One promising paradigm for the implementation of QIP involves employing unpaired electrons in coordination complexes as qubits. This architecture is highly tunable and scalable; however, coordination complexes frequently suffer from short superposition lifetimes, or T2. In order to capitalize on the promise of molecular qubits, it is necessary to develop a set of design principles that allow the rational synthesis of complexes with sufficiently long values of T2. In this dissertation, I report efforts to use the synthesis of series of complexes to elucidate design principles for molecular qubits. Chapter 1 details previous work by our group and others in the field. Chapter 2 details the first efforts of our group to determine the impact of varying spin and spin-orbit coupling on T2. Chapter 3 examines the effect of removing nuclear spins on coherence time, and reports a series of vanadyl bis(dithiolene) complexes which exhibit extremely long coherence lifetimes, in excess of the 100 μs threshold for qubit viability. Chapters 4 and 5 form two complementary halves of a study to determine the exact relationship between electronic spin-nuclear spin distance and the effect of the nuclear spins on T2. Finally, chapter 6 suggests next directions for the field as a whole, including the potential for work in this field to impact the development of other technologies as diverse as quantum sensors and magnetic resonance imaging contrast agents.

  6. The role and production of polar/subtropical jet superpositions in two high-impact weather events over North America

    Science.gov (United States)

    Winters, Andrew C.

    Careful observational work has demonstrated that the tropopause is typically characterized by a three-step pole-to-equator structure, with each break between steps in the tropopause height associated with a jet stream. While the two jet streams, the polar and subtropical jets, typically occupy different latitude bands, their separation can occasionally vanish, resulting in a vertical superposition of the two jets. A cursory examination of a number of historical and recent high-impact weather events over North America and the North Atlantic indicates that superposed jets can be an important component of their evolution. Consequently, this dissertation examines two recent jet superposition cases, the 18--20 December 2009 Mid-Atlantic Blizzard and the 1--3 May 2010 Nashville Flood, in an effort (1) to determine the specific influence that a superposed jet can have on the development of a high-impact weather event and (2) to illuminate the processes that facilitated the production of a superposition in each case. An examination of these cases from a basic-state variable and PV inversion perspective demonstrates that elements of both the remote and local synoptic environment are important to consider while diagnosing the development of a jet superposition. Specifically, the process of jet superposition begins with the remote production of a cyclonic (anticyclonic) tropopause disturbance at high (low) latitudes. The cyclonic circulation typically originates at polar latitudes, while organized tropical convection can encourage the development of an anticyclonic circulation anomaly within the tropical upper-troposphere. The concurrent advection of both anomalies towards middle latitudes subsequently allows their individual circulations to laterally displace the location of the individual tropopause breaks. Once the two circulation anomalies position the polar and subtropical tropopause breaks in close proximity to one another, elements within the local environment, such as

  7. Measuring multiple residual-stress components using the contour method and multiple cuts

    Energy Technology Data Exchange (ETDEWEB)

    Prime, Michael B [Los Alamos National Laboratory]; Swenson, Hunter [Los Alamos National Laboratory]; Pagliaro, Pierluigi [U. Palermo]; Zuccarello, Bernardo [U. Palermo]

    2009-01-01

    The conventional contour method determines one component of stress over the cross section of a part. The part is cut into two, the contour of the exposed surface is measured, and Bueckner's superposition principle is analytically applied to calculate stresses. In this paper, the contour method is extended to the measurement of multiple stress components by making multiple cuts with subsequent applications of superposition. The theory and limitations are described. The theory is experimentally tested on a 316L stainless steel disk with residual stresses induced by plastically indenting the central portion of the disk. The stress results are validated against independent measurements using neutron diffraction. The theory has implications beyond just multiple cuts. The contour method measurements and calculations for the first cut reveal how the residual stresses have changed throughout the part. Subsequent measurements of partially relaxed stresses by other techniques, such as laboratory x-rays, hole drilling, or neutron or synchrotron diffraction, can be superimposed back to the original state of the body.
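
    Schematically, the superposition argument used by the contour method can be written as follows (a standard statement of the decomposition, not a quotation from the paper): the original residual stress state A is the sum of the partially relaxed state B after the cut and the stress change C obtained by elastically forcing the measured cut-surface contour back to a flat plane.

```latex
% Schematic statement of the superposition (standard contour-method form):
% original residual stress (A) = partially relaxed stress after the cut (B)
% + stress change due to the cut (C), where C is computed in an elastic FE
% model by forcing the measured cut-surface contour back to a flat plane.
\[
  \sigma^{(A)}(x,y,z) \;=\; \sigma^{(B)}(x,y,z) \;+\; \sigma^{(C)}(x,y,z).
\]
```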

  8. Accurate convolution/superposition for multi-resolution dose calculation using cumulative tabulated kernels

    International Nuclear Information System (INIS)

    Lu Weiguo; Olivera, Gustavo H; Chen Mingli; Reckwerdt, Paul J; Mackie, Thomas R

    2005-01-01

    Convolution/superposition (C/S) is regarded as the standard dose calculation method in most modern radiotherapy treatment planning systems. Different implementations of C/S can result in significantly different dose distributions. This paper addresses two major implementation issues associated with collapsed cone C/S: one is how to utilize tabulated kernels instead of analytical parametrizations, and the other is how to deal with voxel size effects. Three methods that utilize the tabulated kernels are presented in this paper. These methods differ in the effective kernels used: the differential kernel (DK), the cumulative kernel (CK) or the cumulative-cumulative kernel (CCK). They result in slightly different computation times but significantly different voxel size effects. Both simulated and real multi-resolution dose calculations are presented. For simulation tests, we use arbitrary kernels and various voxel sizes with a homogeneous phantom, and assume forward energy transportation only. Simulations with voxel size up to 1 cm show that the CCK algorithm has errors within 0.1% of the maximum gold standard dose. Real dose calculations use a heterogeneous slab phantom and both the 'broad' (5 x 5 cm²) and the 'narrow' (1.2 x 1.2 cm²) tomotherapy beams. Various voxel sizes (0.5 mm, 1 mm, 2 mm, 4 mm and 8 mm) are used for the dose calculations. The results show that all three algorithms have negligible differences (<0.1%) for the dose calculation at the fine resolution (0.5 mm voxels). But differences become significant when the voxel size increases. For the DK or CK algorithm in the broad (narrow) beam dose calculation, the dose differences between the 0.5 mm voxels and voxels up to 8 mm (4 mm) are around 10% (7%) of the maximum dose. For the broad (narrow) beam dose calculation using the CCK algorithm, the dose differences between the 0.5 mm voxels and voxels up to 8 mm (4 mm) are around 1% of the maximum dose. Among all three methods, the CCK algorithm is the least sensitive to voxel size and is therefore best suited to multi-resolution dose calculation.
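
    The voxel-size behaviour described above comes from how the kernel is averaged over a voxel. A one-dimensional toy comparison (arbitrary exponential kernel, not clinical data): with the cumulative kernel the voxel average is exact for any voxel size, while sampling the differential kernel at the voxel centre degrades as voxels grow; the cumulative-cumulative kernel extends the same idea to averaging over source voxels as well.

```python
# Toy 1-D comparison of differential vs cumulative kernel lookups.
# k(x) is an arbitrary exponential kernel (illustrative only);
# CK(x) = integral_0^x k. The exact mean of k over a voxel [a, a+h] is
# (CK(a+h) - CK(a)) / h for any h, while sampling k at the voxel centre
# (the differential-kernel shortcut) degrades as the voxel grows.
import numpy as np

mu = 2.0                                   # kernel decay constant [1/cm]
k  = lambda x: mu * np.exp(-mu * x)        # differential kernel (DK)
CK = lambda x: 1.0 - np.exp(-mu * x)       # cumulative kernel (CK)

a = 1.0                                    # voxel 1 cm from the interaction site
for h in (0.05, 0.1, 0.2, 0.4, 0.8):       # voxel sizes [cm]
    exact  = (CK(a + h) - CK(a)) / h       # cumulative-kernel voxel average
    center = k(a + 0.5 * h)                # centre-sampled differential kernel
    print(f"h = {h:4.2f} cm   DK-centre error = {abs(center - exact)/exact:6.2%}")
```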

  9. Update heat exchanger designing principles

    International Nuclear Information System (INIS)

    Lipets, A.U.; Yampol'skij, A.E.

    1985-01-01

    Updated heat exchanger design principles are analysed. Different coolant flow patterns in a heat exchanger are considered. It is suggested that flow-rate irregularities in the exchanger be organized rationally. Taking design measures that exploit the actually existing temperature and flow-rate irregularities will permit heat exchanger efficiency to be improved. In some cases it is expedient to produce irregularities artificially. In this connection, some heat exchanger design principles must now be reviewed.

  10. Evaluation of the Use of Home Blood Pressure Measurement Using Mobile Phone-Assisted Technology: The iVitality Proof-of-Principle Study

    NARCIS (Netherlands)

    Wijsman, L.W.; Richard, E.; Cachucho, R.; Craen, A.J. de; Jongstra, S.; Mooijaart, S.P.

    2016-01-01

    BACKGROUND: Mobile phone-assisted technologies provide the opportunity to optimize the feasibility of long-term blood pressure (BP) monitoring at home, with the potential of large-scale data collection. OBJECTIVE: In this proof-of-principle study, we evaluated the feasibility of home BP monitoring

  11. Evaluation of the Use of Home Blood Pressure Measurement Using Mobile Phone-Assisted Technology: The iVitality Proof-of-Principle Study

    NARCIS (Netherlands)

    Wijsman, Liselotte W.; Richard, Edo; Cachucho, Ricardo; de Craen, Anton J. M.; Jongstra, Susan; Mooijaart, Simon P.

    2016-01-01

    Mobile phone-assisted technologies provide the opportunity to optimize the feasibility of long-term blood pressure (BP) monitoring at home, with the potential of large-scale data collection. In this proof-of-principle study, we evaluated the feasibility of home BP monitoring using mobile

  12. Dielectric properties of agricultural products – fundamental principles, influencing factors, and measurement techniques. Chapter 4. Electrotechnologies for Food Processing: Book Series. Volume 3. Radio-Frequency Heating

    Science.gov (United States)

    In this chapter, definitions of dielectric properties, or permittivity, of materials and a brief discussion of the fundamental principles governing their behavior with respect to influencing factors are presented. The basic physics of the influence of frequency of the electric fields and temperatur...

  13. Violation of a Leggett–Garg inequality with ideal non-invasive measurements

    Science.gov (United States)

    Knee, George C.; Simmons, Stephanie; Gauger, Erik M.; Morton, John J.L.; Riemann, Helge; Abrosimov, Nikolai V.; Becker, Peter; Pohl, Hans-Joachim; Itoh, Kohei M.; Thewalt, Mike L.W.; Briggs, G. Andrew D.; Benjamin, Simon C.

    2012-01-01

    The quantum superposition principle states that an entity can exist in two different states simultaneously, counter to our 'classical' intuition. Is it possible to understand a given system's behaviour without such a concept? A test designed by Leggett and Garg can rule out this possibility. The test, originally intended for macroscopic objects, has been implemented in various systems. However to date no experiment has employed the 'ideal negative result' measurements that are required for the most robust test. Here we introduce a general protocol for these special measurements using an ancillary system, which acts as a local measuring device but which need not be perfectly prepared. We report an experimental realization using spin-bearing phosphorus impurities in silicon. The results demonstrate the necessity of a non-classical picture for this class of microscopic system. Our procedure can be applied to systems of any size, whether individually controlled or in a spatial ensemble. PMID:22215081
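
    For reference, the inequality tested in this class of experiments can be stated compactly (the standard Leggett-Garg form, not quoted from the paper):

```latex
% Standard Leggett-Garg inequality: for a dichotomic quantity Q(t) = ±1
% measured at times t_1 < t_2 < t_3, with two-time correlators
% C_ij = <Q(t_i) Q(t_j)>, macrorealism together with non-invasive
% measurability implies
\[
  K \;\equiv\; C_{21} + C_{32} - C_{31} \;\le\; 1 ,
\]
% whereas quantum mechanics allows K up to 3/2 for a two-level system.
```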

  14. Fundamental principles, measurement techniques and data analysis in an ion accelerator; Principios fundamentales, tecnicas de medicion y analisis de datos en un acelerador de iones

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez M, O. [Facultad de Ciencias, UNAM, Ciudad Universitaria, 04510 Mexico D. F. (Mexico); Gleason, C. [Facultad de Ciencias, Universidad Autonoma del Estado de Morelos, Cuernavaca, Morelos (Mexico); Hinojosa, G. [Instituto de Ciencias Fisicas, UNAM, Ciudad Universitaria, 04510 Mexico D. F. (Mexico)]. e-mail: hinojosa@fis.unam.mx

    2008-07-01

    The present work is intended to be a general reference for students and professionals interested in the field. Here, we present an introduction to the analysis techniques and fundamental principles of data processing and the operation of a typical ion accelerator operating in the low-energy range. We also present a detailed description of the apparatus and propose new methods for analysing the results. In addition, we introduce illustrative simulations of ion trajectories in the different components of the apparatus, performed with specialized software, and a new computer data acquisition and control interface. (Author)

  15. Maximum coherent superposition state achievement using a non-resonant pulse train in non-degenerate three-level atoms

    International Nuclear Information System (INIS)

    Deng, Li; Niu, Yueping; Jin, Luling; Gong, Shangqing

    2010-01-01

    The coherent superposition state of the lower two levels in non-degenerate three-level Λ atoms is investigated using the accumulative effects of non-resonant pulse trains when the repetition period is smaller than the decay time of the upper level. First, using a rectangular pulse train, the accumulative effects are re-examined in non-resonant two-level atoms and the modified constructive accumulation equation is given analytically. The equation shows that the relative phase and the repetition period are important in the accumulative effect. Next, under the modified equation in the non-degenerate three-level Λ atoms, we show that besides the constructive accumulation effect, the partial constructive accumulation effect can also achieve a steady state of the maximum coherent superposition of the lower two levels, and the latter condition is easier to manipulate. The analysis is verified by numerical calculations. The influence of the external levels in such a case is also considered, and we find that it can be avoided effectively. The above analysis is also applicable to pulse trains with arbitrary envelopes.

  16. Radiation protection principles

    International Nuclear Information System (INIS)

    Ismail Bahari

    2007-01-01

    The presentation outlines the principles of radiation protection. It discusses the following subjects: radiation hazards and risk; the objectives of radiation protection; and the three principles of the system - justification of practice, optimization of protection and safety, and dose limits.

  17. Principles of project management

    Science.gov (United States)

    1982-01-01

    The basic principles of project management as practiced by NASA management personnel are presented. These principles are given as ground rules and guidelines to be used in the performance of research, development, construction or operational assignments.

  18. On the uncertainty principle. V

    International Nuclear Information System (INIS)

    Halpern, O.

    1976-01-01

    The treatment of ideal experiments connected with the uncertainty principle is continued. The author analyzes successively measurements of momentum and position, and discusses the common reason why the results in all cases differ from the conventional ones. A similar difference exists for the measurement of field strengths. The interpretation given by Weizsaecker, who tried to interpret Bohr's complementarity principle by introducing a multi-valued logic, is analyzed. The treatment of the uncertainty principle ΔE Δt is deferred to a later paper, as is the interpretation of the method of variation of constants. Every ideal experiment discussed shows various lower limits for the value of the uncertainty product, limits which depend on the experimental arrangement and are always (considerably) larger than h. (Auth.)

  19. Principle of accelerator mass spectrometry

    International Nuclear Information System (INIS)

    Matsuzaki, Hiroyuki

    2007-01-01

    The principle of accelerator mass spectrometry (AMS) is described, mainly in its technical aspects: the hardware construction of AMS, measurement of isotope ratios, sensitivity of measurement (measurement limit), measurement accuracy, and the application of data. The content may be summarized as follows: rare isotopes (often long-lived radioactive isotopes) can be detected through various uses of the ion energy obtained by accelerating the ions; the measurable quantity is the ratio of a rare isotope to an abundant one; and a measured isotope ratio carries uncertainty with respect to the true value. These facts must be kept in mind when applying AMS data in research. (M.H.)

  20. Composite and case study analyses of the large-scale environments associated with West Pacific Polar and subtropical vertical jet superposition events

    Science.gov (United States)

    Handlos, Zachary J.

    Though considerable research attention has been devoted to examination of the Northern Hemispheric polar and subtropical jet streams, relatively little has been directed toward understanding the circumstances that conspire to produce the relatively rare vertical superposition of these usually separate features. This dissertation investigates the structure and evolution of large-scale environments associated with jet superposition events in the northwest Pacific. An objective identification scheme, using NCEP/NCAR Reanalysis 1 data, is employed to identify all jet superpositions in the west Pacific (30-40°N, 135-175°E) for boreal winters (DJF) between 1979/80 - 2009/10. The analysis reveals that environments conducive to west Pacific jet superposition share several large-scale features usually associated with East Asian Winter Monsoon (EAWM) northerly cold surges, including the presence of an enhanced Hadley Cell-like circulation within the jet entrance region. It is further demonstrated that several EAWM indices are statistically significantly correlated with jet superposition frequency in the west Pacific. The life cycle of EAWM cold surges promotes interaction between tropical convection and internal jet dynamics. Low-potential-vorticity (PV), high-θe tropical boundary-layer air, exhausted by anomalous convection in the west Pacific lower latitudes, is advected poleward towards the equatorward side of the jet in upper-tropospheric isentropic layers, resulting in anomalous anticyclonic wind shear that accelerates the jet. This, along with geostrophic cold-air advection in the left jet entrance region that drives the polar tropopause downward through the jet core, promotes the development of the deep, vertical PV wall characteristic of superposed jets. West Pacific jet superpositions preferentially form within an environment favoring the aforementioned characteristics regardless of EAWM seasonal strength. Post-superposition, it is shown that the west Pacific

  1. The certainty principle (review)

    OpenAIRE

    Arbatsky, D. A.

    2006-01-01

    The certainty principle (2005) made it possible to conceptualize, on more fundamental grounds, both the Heisenberg uncertainty principle (1927) and the Mandelshtam-Tamm relation (1945). In this review I give a detailed explanation and discussion of the certainty principle, oriented to all physicists, both theorists and experimenters.

  2. Quantum Action Principle with Generalized Uncertainty Principle

    OpenAIRE

    Gu, Jie

    2013-01-01

    One of the common features of all promising candidates for quantum gravity is the existence of a minimal length scale, which naturally emerges together with a generalized uncertainty principle, or equivalently a modified commutation relation. Schwinger's quantum action principle was modified to incorporate this change and was applied to the calculation of the kernel of a free particle, partly recovering the result previously obtained using the path integral.

  3. Dimensional cosmological principles

    International Nuclear Information System (INIS)

    Chi, L.K.

    1985-01-01

    The dimensional cosmological principles proposed by Wesson require that the density, pressure, and mass of cosmological models be functions of the dimensionless variables which are themselves combinations of the gravitational constant, the speed of light, and the spacetime coordinates. The space coordinate is not the comoving coordinate. In this paper, the dimensional cosmological principle and the dimensional perfect cosmological principle are reformulated by using the comoving coordinate. The dimensional perfect cosmological principle is further modified to allow the possibility that mass creation may occur. Self-similar spacetimes are found to be models obeying the new dimensional cosmological principle

  4. Thermionics basic principles of electronics

    CERN Document Server

    Jenkins, J; Ashhurst, W

    2013-01-01

    Basic Principles of Electronics, Volume I: Thermionics serves as a textbook for students in physics. It focuses on thermionic devices. The book covers topics on electron dynamics, electron emission, and the thermionic vacuum diode and triode. Power amplifiers, oscillators, and electronic measuring equipment are studied as well. The text will be of great use to physics and electronics students and to inventors.

  5. Demonstrating Fermat's Principle in Optics

    Science.gov (United States)

    Paleiov, Orr; Pupko, Ofir; Lipson, S. G.

    2011-01-01

    We demonstrate Fermat's principle in optics by a simple experiment using reflection from an arbitrarily shaped one-dimensional reflector. We investigated a range of possible light paths from a lamp to a fixed slit by reflection in a curved reflector and showed by direct measurement that the paths along which light is concentrated have either…

  6. Principles of modern radar systems

    CERN Document Server

    Carpentier, Michel H

    1988-01-01

    Introduction to random functions; signal and noise: the ideal receiver; performance of radar systems equipped with ideal receivers; analysis of the operating principles of some types of radar; behavior of real targets, fluctuation of targets; angle measurement using radar; data processing of radar information, radar coverage; applications of electronic scanning antennas to radar; introduction to Hilbert spaces.

  7. Principles and applications of tribology

    CERN Document Server

    Moore, Desmond F

    1975-01-01

    Principles and Applications of Tribology provides a mechanical engineering perspective of the fundamental understanding and applications of tribology. This book is organized into two parts encompassing 16 chapters that cover the principles of friction and different types of lubrication. Chapter 1 deals with the immense scope of tribology and the range of applications in the existing technology, and Chapter 2 is devoted entirely to the evaluation and measurement of surface texture. Chapters 3 to 5 present the fundamental concepts underlying the friction of metals, elastomers, and other material

  8. General principles of quantum mechanics

    International Nuclear Information System (INIS)

    Pauli, W.

    1980-01-01

    This book is a textbook for a course in quantum mechanics. Starting from complementarity and the uncertainty principle, Schroedinger's equation is introduced together with the operator calculus. Stationary states are then treated as eigenvalue problems. Furthermore, matrix mechanics is briefly discussed. Thereafter the theory of measurement is considered. As approximation methods, perturbation theory and the WKB approximation are introduced. Then identical particles, spin, and the exclusion principle are discussed. Thereafter the semiclassical theory of radiation and the relativistic one-particle problem are discussed. Finally an introduction is given to quantum electrodynamics. (HSI)

  9. Theoretical aspects of the equivalence principle

    International Nuclear Information System (INIS)

    Damour, Thibault

    2012-01-01

    We review several theoretical aspects of the equivalence principle (EP). We emphasize the unsatisfactory fact that the EP maintains the absolute character of the coupling constants of physics, while general relativity and its generalizations (Kaluza–Klein, …, string theory) suggest that all absolute structures should be replaced by dynamical entities. We discuss the EP-violation phenomenology of dilaton-like models, which is likely to be dominated by the linear superposition of two effects: a signal proportional to the nuclear Coulomb energy, related to the variation of the fine-structure constant, and a signal proportional to the surface nuclear binding energy, related to the variation of the light quark masses. We recall various theoretical arguments (including a recently proposed anthropic argument) suggesting that the EP is violated at a small, but not unmeasurably small, level. This motivates the need for improved tests of the EP. These tests probe new territories in physics that are related to deep, and mysterious, issues in fundamental physics. (paper)

  10. The 4th Thermodynamic Principle?

    International Nuclear Information System (INIS)

    Montero Garcia, Jose de la Luz; Novoa Blanco, Jesus Francisco

    2007-01-01

    It should be emphasized that the 4th Principle formulated above is a thermodynamic principle and, at the same time, quantum-mechanical and relativistic, as it inevitably should be; its absence has been one of the main theoretical limitations of physical theory until today. We show that, through the theoretical discovery of the Dimensional Primitive Octet of Matter, the 4th Thermodynamic Principle, the Quantum Hexet of Matter, the Global Hexagonal Subsystem of Fundamental Constants of Energy and the Measurement or Connected Global Scale or Universal Existential Interval of Matter, it is possible to arrive at a global formulation of the four 'forces' or fundamental interactions of nature. Einstein's golden dream thus becomes possible.

  11. Superconducting analogs of quantum optical phenomena: Macroscopic quantum superpositions and squeezing in a superconducting quantum-interference device ring

    International Nuclear Information System (INIS)

    Everitt, M.J.; Clark, T.D.; Stiffell, P.B.; Prance, R.J.; Prance, H.; Vourdas, A.; Ralph, J.F.

    2004-01-01

    In this paper we explore the quantum behavior of a superconducting quantum-interference device (SQUID) ring which has a significant Josephson coupling energy. We show that the eigenfunctions of the Hamiltonian for the ring can be used to create macroscopic quantum superposition states of the ring. We also show that the ring potential may be utilized to squeeze coherent states. With the SQUID ring as a strong contender as a device for manipulating quantum information, such properties may be of great utility in the future. However, as with all candidate systems for quantum technologies, decoherence is a fundamental problem. In this paper we apply an open systems approach to model the effect of coupling a quantum-mechanical SQUID ring to a thermal bath. We use this model to demonstrate the manner in which decoherence affects the quantum states of the ring

  12. Java application for the superposition T-matrix code to study the optical properties of cosmic dust aggregates

    Science.gov (United States)

    Halder, P.; Chakraborty, A.; Deb Roy, P.; Das, H. S.

    2014-09-01

    In this paper, we report the development of a java application for the Superposition T-matrix code, JaSTA (Java Superposition T-matrix App), to study the light scattering properties of aggregate structures. It has been developed using Netbeans 7.1.2, which is a java integrated development environment (IDE). JaSTA uses the double-precision superposition codes for multi-sphere clusters in random orientation developed by Mackowski and Mishchenko (1996). It consists of a graphical user interface (GUI) at the front end and a database of related data at the back end. Together, the interactive GUI and the database package enable a user to study the related optical properties of cosmic dust (namely, extinction, polarization, etc.) by setting the respective input parameters (namely, wavelength, complex refractive indices, grain size, etc.) instantly, i.e., with zero computational time. This increases the efficiency of the user. The database of JaSTA has so far been created for a few sets of input parameters, with a plan to create a larger database in future. The application also has an option whereby users can compile and run the scattering code directly for aggregates in the GUI environment. JaSTA aims to provide convenient and quicker data analysis of optical properties, which can be used in different fields like planetary science, atmospheric science, nano science, etc. The current version of this software is developed for the Linux and Windows platforms to study the light scattering properties of small aggregates, and will be extended to larger aggregates using parallel codes in future. Catalogue identifier: AETB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 571570 No. of bytes in distributed program

  13. Quantum properties of a superposition of squeezed displaced two-mode vacuum and single-photon states

    International Nuclear Information System (INIS)

    El-Orany, Faisal A A; Obada, A-S F; M Asker, Zafer; Perina, J

    2009-01-01

    In this paper, we study some quantum properties of a superposition of displaced squeezed two-mode vacuum and single-photon states, such as the second-order correlation function, the Cauchy-Schwarz inequality, quadrature squeezing, quasiprobability distribution functions and purity. These types of states involve two mechanisms, namely interference in phase space and entanglement. We show that these states can exhibit sub-Poissonian statistics, squeezing and a deviation from the classical Cauchy-Schwarz inequality. Moreover, the amount of entanglement in the system can be increased by increasing the squeezing mechanism. In the framework of the quasiprobability distribution functions, we show that the single-mode state can tend to the thermal state based on the correlation mechanism. A generation scheme for such states is given.

  14. Improvement of ozone yield by a multi-discharge type ozonizer using superposition of silent discharge plasma

    International Nuclear Information System (INIS)

    Song, Hyun-Jig; Chun, Byung-Joon; Lee, Kwang-Sik

    2004-01-01

    In order to improve ozone generation, we experimentally investigated the silent discharge plasma and ozone generation characteristics of a multi-discharge type ozonizer. Ozone in a multi-discharge type ozonizer is generated by superposition of silent discharge plasmas that are generated simultaneously in separate discharge spaces. A multi-discharge type ozonizer is composed of three different kinds of superposed silent discharge type ozonizers, depending on the method of applying power to each electrode. We observed that the discharge period of the current pulse for a multi-discharge type ozonizer can be longer than that of a silent discharge type ozonizer with two electrodes and one gap. Hence, ozone generation is improved, up to 17185 ppm and 783 g/kWh, in the case of the superposed silent discharge type ozonizer in which AC high voltages with a 180° phase difference were applied to the internal electrode and the external electrode, respectively, with the central electrode grounded.

  15. Calculation of media temperatures for nuclear sources in geologic depositories by a finite-length line source superposition model (FLLSSM)

    Energy Technology Data Exchange (ETDEWEB)

    Kays, W M; Hossaini-Hashemi, F [Stanford Univ., Palo Alto, CA (USA). Dept. of Mechanical Engineering; Busch, J S [Kaiser Engineers, Oakland, CA (USA)

    1982-02-01

    A linearized transient thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high-level waste or spent fuel assemblies are represented as finite-length line sources in a continuous medium. The combined effect of multiple canisters in a representative storage pattern can be established in the medium at selected points of interest by superposition of the temperature rises calculated for each canister. A mathematical solution of the calculation for each separate source is given in this article, permitting a slow hand calculation. The full report, ONWI-94, contains the details of the computer code FLLSSM and its use, yielding the total solution in one computer output.
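
    The superposition strategy described above is easy to sketch numerically. In the toy code below, each canister is approximated by a stack of continuous point sources (using the classical constant-strength point-source conduction solution in an infinite medium) and the temperature rises are summed over a canister array. All material and source parameters are invented for illustration, and the decay of the heat source over time is ignored; it is not the FLLSSM code itself.

```python
# Toy superposition of finite-length line sources (the FLLSSM idea).
# Each canister is a stack of continuous point sources, each contributing
#   dT = (q dz / (4 pi k r)) erfc(r / (2 sqrt(alpha t))),
# and the contributions of all canisters are summed at a field point.
import numpy as np
from scipy.special import erfc

k, alpha = 2.5, 1.1e-6                  # conductivity [W/m/K], diffusivity [m^2/s]

def line_source_dT(p, z0, length, q_per_m, t, n=200):
    """Temperature rise at point p from one vertical line source at the origin."""
    z = z0 + (np.arange(n) + 0.5) * (length / n)       # midpoint discretization
    r = np.sqrt(p[0]**2 + p[1]**2 + (p[2] - z)**2)
    return np.sum(q_per_m * (length / n) / (4.0 * np.pi * k * r)
                  * erfc(r / (2.0 * np.sqrt(alpha * t))))

t = 30 * 365.25 * 86400.0               # 30 years [s]
point = np.array([5.0, 5.0, 2.0])       # field point [m]
dT = sum(line_source_dT(point - np.array([10.0*i, 10.0*j, 0.0]), 0.0, 4.0, 300.0, t)
         for i in range(3) for j in range(3))   # 3x3 canister array, 10 m pitch
print(f"temperature rise at field point ~ {dT:.1f} K")
```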

  16. Biomechanics principles and practices

    CERN Document Server

    Peterson, Donald R

    2014-01-01

    Presents Current Principles and Applications. Biomedical engineering is considered to be the most expansive of all the engineering sciences. Its function involves the direct combination of core engineering sciences as well as knowledge of non-engineering disciplines such as biology and medicine. Drawing on material from the biomechanics section of The Biomedical Engineering Handbook, Fourth Edition, and utilizing the expert knowledge of respected published scientists in the application and research of biomechanics, Biomechanics: Principles and Practices discusses the latest principles and applications.

  17. Fusion research principles

    CERN Document Server

    Dolan, Thomas James

    2013-01-01

    Fusion Research, Volume I: Principles provides a general description of the methods and problems of fusion research. The book contains three main parts: Principles, Experiments, and Technology. The Principles part describes the conditions necessary for a fusion reaction, as well as the fundamentals of plasma confinement, heating, and diagnostics. The Experiments part details about forty plasma confinement schemes and experiments. The last part explores various engineering problems associated with reactor design, vacuum and magnet systems, materials, plasma purity, fueling, blankets, neutronics

  18. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance. Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programming language.

  19. Principles of ecotoxicology

    National Research Council Canada - National Science Library

    Walker, C. H

    2012-01-01

    "Now in its fourth edition, this exceptionally accessible text provides students with a multidisciplinary perspective and a grounding in the fundamental principles required for research in toxicology today...

  20. APPLYING THE PRINCIPLES OF ACCOUNTING IN

    OpenAIRE

    NAGY CRISTINA MIHAELA; SABĂU CRĂCIUN; ”Tibiscus” University of Timişoara, Faculty of Economic Science

    2015-01-01

    The application of accounting principles (accounting principle on accrual basis; principle of business continuity; method consistency principle; prudence principle; independence principle; the principle of separate valuation of assets and liabilities; intangibility principle; non-compensation principle; the principle of substance over form; the principle of threshold significance) to companies that are in bankruptcy proceedings has a number of particularities. Thus, some principl...

  1. Magnetic Field of Conductive Objects as Superposition of Elementary Eddy Currents and Eddy Current Tomography

    Science.gov (United States)

    Sukhanov, D. Ya.; Zav'yalova, K. V.

    2018-03-01

    The paper represents the induced currents in an electrically conductive object as a superposition of elementary eddy currents. The proposed scanning method requires measurement of only one component of the secondary magnetic field. Reconstruction of the current distribution is performed by deconvolution with regularization. Numerical modeling supported by field experiments shows that this approach is of direct practical relevance.
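
    The reconstruction step named above, deconvolution with regularization, can be sketched in one dimension. The example below applies a generic Tikhonov-regularized (Wiener-style) inverse filter to synthetic data; the point-spread function, current distribution, and noise level are assumptions for illustration, not the authors' measured response.

```python
# 1-D sketch of deconvolution with regularization: the measured map B is
# modeled as the convolution of an elementary-eddy-current response (psf)
# with the unknown current distribution J; J is recovered with a
# Tikhonov/Wiener-style inverse filter in the Fourier domain.
import numpy as np

n = 256
x = np.linspace(-1.0, 1.0, n)
psf = np.exp(-(x / 0.05) ** 2)             # response of one elementary current
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))      # transfer function (psf centred at 0)

J_true = np.zeros(n)
J_true[90:110], J_true[160:170] = 1.0, 0.5 # two current-carrying regions
B = np.real(np.fft.ifft(np.fft.fft(J_true) * H))
B += 1e-3 * np.random.default_rng(0).standard_normal(n)   # measurement noise

eps = 1e-3                                 # regularization parameter
J_rec = np.real(np.fft.ifft(np.fft.fft(B) * np.conj(H) / (np.abs(H)**2 + eps)))
print("relative reconstruction error:",
      np.linalg.norm(J_rec - J_true) / np.linalg.norm(J_true))
```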

  2. Realistic limits on the nonlocality of an N-partite single-photon superposition

    DEFF Research Database (Denmark)

    Laghaout, Amine; Andersen, Ulrik Lund; Björk, Gunnar

    2011-01-01

    the nonlocal behavior previously thought to be exclusive to the more complex class of Greenberger-Horne-Zeilinger states. We show that in practice, however, the slightest decoherence or inefficiency of the Bell measurements on W states will degrade any violation margin gained by scaling to higher N...

  3. The genetic difference principle.

    Science.gov (United States)

    Farrelly, Colin

    2004-01-01

    In the newly emerging debates about genetics and justice three distinct principles have begun to emerge concerning what the distributive aim of genetic interventions should be. These principles are: genetic equality, a genetic decent minimum, and the genetic difference principle. In this paper, I examine the rationale of each of these principles and argue that genetic equality and a genetic decent minimum are ill-equipped to tackle what I call the currency problem and the problem of weight. The genetic difference principle is the most promising of the three principles and I develop this principle so that it takes seriously the concerns of just health care and distributive justice in general. Given the strains on public funds for other important social programmes, the costs of pursuing genetic interventions and the nature of genetic interventions, I conclude that a more lax interpretation of the genetic difference principle is appropriate. This interpretation stipulates that genetic inequalities should be arranged so that they are to the greatest reasonable benefit of the least advantaged. Such a proposal is consistent with prioritarianism and provides some practical guidance for non-ideal societies--that is, societies that do not have the endless amount of resources needed to satisfy every requirement of justice.

  4. The principle of equivalence

    International Nuclear Information System (INIS)

    Unnikrishnan, C.S.

    1994-01-01

    The principle of equivalence was the fundamental guiding principle in the formulation of the general theory of relativity. What are its key elements? What are the empirical observations which establish it? What is its relevance to some new experiments? These questions are discussed in this article. (author). 11 refs., 5 figs

  5. The Dutch premium principle

    NARCIS (Netherlands)

    van Heerwaarden, A.E.; Kaas, R.

    1992-01-01

    A premium principle is derived, in which the loading for a risk is the reinsurance loading for an excess-of-loss cover. It is shown that the principle is well-behaved in the sense that it results in larger premiums for risks that are larger in stop-loss order or in stochastic dominance.

  6. A new computing principle

    International Nuclear Information System (INIS)

    Fatmi, H.A.; Resconi, G.

    1988-01-01

    In 1954, while reviewing the theory of communication and cybernetics, the late Professor Dennis Gabor presented a new mathematical principle for the design of advanced computers. During our work on these computers it was found that the Gabor formulation can be further advanced to include more recent developments in Lie algebras and geometric probability, giving rise to a new computing principle.

  7. The anthropic principle

    International Nuclear Information System (INIS)

    Carr, B.J.

    1982-01-01

    The anthropic principle (the conjecture that certain features of the world are determined by the existence of Man) is discussed, with a listing of the objections to it. It is stated that nearly all the constants of nature may be determined by the anthropic principle, which does not give exact values for the constants but only their orders of magnitude. (J.T.)

  8. DC-pass filter design with notch filters superposition for CPW rectenna at low power level

    Science.gov (United States)

    Rivière, J.; Douyère, A.; Alicalapa, F.; Luk, J.-D. Lan Sun

    2016-03-01

    In this paper the challenging coplanar waveguide direct current (DC) pass filter is designed, analysed, fabricated and measured. As the ground plane and the conductive line are etched on the same plane, this technology allows the connection of series and shunt elements to the active devices without via holes through the substrate. Indeed, this study presents the first step in the optimization of a complete rectenna in coplanar waveguide (CPW) technology: a key element of a radio frequency (RF) energy harvesting system. The measurement of the proposed filter shows good performance in the rejection of F0 = 2.45 GHz and F1 = 4.9 GHz. Additionally, a harmonic balance (HB) simulation of the complete rectenna is performed and shows a maximum RF-to-DC conversion efficiency of 37% with the studied DC-pass filter for an input power of 10 µW at 2.45 GHz.

  9. Mach's holographic principle

    International Nuclear Information System (INIS)

    Khoury, Justin; Parikh, Maulik

    2009-01-01

    Mach's principle is the proposition that inertial frames are determined by matter. We put forth and implement a precise correspondence between matter and geometry that realizes Mach's principle. Einstein's equations are not modified and no selection principle is applied to their solutions; Mach's principle is realized wholly within Einstein's general theory of relativity. The key insight is the observation that, in addition to bulk matter, one can also add boundary matter. Given a space-time, and thus the inertial frames, we can read off both boundary and bulk stress tensors, thereby relating matter and geometry. We consider some global conditions that are necessary for the space-time to be reconstructible, in principle, from bulk and boundary matter. Our framework is similar to that of the black hole membrane paradigm and, in asymptotically anti-de Sitter space-times, is consistent with holographic duality.

  10. Variational principles in physics

    CERN Document Server

    Basdevant, Jean-Louis

    2007-01-01

    Optimization under constraints is an essential part of everyday life. Indeed, we routinely solve problems by striking a balance between contradictory interests, individual desires and material contingencies. This notion of equilibrium was dear to thinkers of the enlightenment, as illustrated by Montesquieu’s famous formulation: "In all magistracies, the greatness of the power must be compensated by the brevity of the duration." Astonishingly, natural laws are guided by a similar principle. Variational principles have proven to be surprisingly fertile. For example, Fermat used variational methods to demonstrate that light follows the fastest route from one point to another, an idea which came to be known as Fermat’s principle, a cornerstone of geometrical optics. Variational Principles in Physics explains variational principles and charts their use throughout modern physics. The heart of the book is devoted to the analytical mechanics of Lagrange and Hamilton, the basic tools of any physicist. Prof. Basdev...

  11. Selective conversion of plasma glucose into CO2 by Saccharomyces cerevisiae for the measurement of C-13 abundance by isotope ratio mass spectrometry : proof of principle

    NARCIS (Netherlands)

    Rembacz, Krzysztof P.; Faber, Klaas Nico; Stellaard, Frans

    2007-01-01

    To study carbohydrate digestion and glucose absorption, time-dependent C-13 enrichment in plasma glucose is measured after oral administration of naturally occurring C-13-enriched carbohydrates. The isotope enrichment of the administered carbohydrate is low (APE <0.1%) and plasma C-13 glucose

  12. How to measure responses of the knee to lateral perturbations during gait? A proof-of-principle for quantification of knee instability.

    Science.gov (United States)

    van den Noort, Josien C; Sloot, Lizeth H; Bruijn, Sjoerd M; Harlaar, Jaap

    2017-08-16

    Knee instability is a major problem in patients with anterior cruciate ligament injury or knee osteoarthritis. A valid and clinically meaningful measure for functional knee instability is lacking. The concept of the gait sensitivity norm, the normalized perturbation response of a walking system to external perturbations, could be a sensible way to quantify knee instability. The aim of this study is to explore the feasibility of this concept for the measurement of knee responses, using controlled external perturbations during walking in healthy subjects. Nine young healthy participants walked on a treadmill while three-dimensional kinematics were measured. Sudden lateral translations of the treadmill were applied at five different intensities during stance. Right knee kinematic responses and spatio-temporal parameters were tracked for the perturbed stride and the following four strides, to calculate perturbation response and gait sensitivity norm values (i.e. response/perturbation) in various ways. The perturbation response values in terms of knee flexion and abduction increased with perturbation intensity and decreased with an increasing number of steps after the perturbation. For flexion and ab/adduction during midswing, the gait sensitivity norm values were shown to be constant over perturbation intensities, demonstrating the potential of the gait sensitivity norm as a robust measure of knee responses to perturbations. These results show the feasibility of using the gait sensitivity norm concept for certain gait indicators based on kinematics of the knee, as a measure of responses during perturbed gait. The current findings in healthy subjects could serve as reference data to quantify pathological knee instability.
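
    To make the "response/perturbation" construction concrete, the sketch below computes a gait sensitivity norm in one plausible way, following the general definition of Hobbelen and Wisse (the 2-norm of a gait indicator's deviations over several strides, divided by the perturbation magnitude); the indicator, stride count, and numbers are assumptions, not the authors' exact pipeline.

```python
import numpy as np

def gait_sensitivity_norm(indicator, baseline, perturbation):
    """Gait sensitivity norm: ||response|| / |perturbation|.

    indicator    : per-stride values of a gait indicator (e.g. peak knee
                   flexion) for the perturbed stride and following strides
    baseline     : mean unperturbed value of the same indicator
    perturbation : perturbation magnitude (e.g. treadmill translation, m)
    """
    response = np.asarray(indicator, dtype=float) - baseline
    return np.linalg.norm(response) / abs(perturbation)

# Illustrative numbers: peak knee flexion (deg) over five strides after
# a 0.05 m lateral treadmill translation, against a 60 deg baseline.
gsn = gait_sensitivity_norm([66.0, 63.0, 61.5, 60.5, 60.2], 60.0, 0.05)
print(f"GSN = {gsn:.0f} deg/m")
```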

  13. SU-F-T-377: Monte Carlo Re-Evaluation of Volumetric-Modulated Arc Plans of Advanced Stage Nasopharygeal Cancers Optimized with Convolution-Superposition Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, K; Leung, R; Law, G; Wong, M; Lee, V; Tung, S; Cheung, S; Chan, M [Tuen Mun Hospital, Hong Kong (Hong Kong)

    2016-06-15

    Background: The commercial treatment planning system Pinnacle3 (Philips, Fitchburg, WI, USA) employs a convolution-superposition (CS) algorithm for volumetric-modulated arc radiotherapy (VMAT) optimization and dose calculation. Study of Monte Carlo (MC) dose recalculation of VMAT plans for advanced-stage nasopharyngeal cancers (NPC) is currently limited. Methods: Twenty-nine VMAT plans prescribing 70Gy, 60Gy, and 54Gy to the planning target volumes (PTVs) were included. These clinical plans, achieved with a CS dose engine on Pinnacle3 v9.0, were recalculated by the Monaco TPS v5.0 (Elekta, Maryland Heights, MO, USA) with an XVMC-based MC dose engine. The MC virtual source model was built using the same measured beam dataset as the Pinnacle beam model. All MC recalculations were based on absorbed dose to medium in medium (Dm,m). Differences in the dose constraint parameters per our institutional protocol (Supplementary Table 1) were analyzed. Results: Only the differences in maximum dose to the left brachial plexus, the left temporal lobe and PTV54Gy were found to be statistically insignificant (p> 0.05). Dosimetric differences for the other tumor targets and normal organs are given in Supplementary Table 1. Generally, doses outside the PTV in the normal organs are lower with MC than with CS. This is also true for the PTV54-70Gy doses, but a higher dose in the nasal cavity near the bone interfaces is consistently predicted by MC, possibly due to the increased backscattering of short-range scattered photons and secondary electrons that is not properly modeled by the CS. The straight shoulders of the PTV dose-volume histograms (DVHs) initially resulting from the CS optimization are barely preserved after MC recalculation. Conclusion: Significant dosimetric differences in VMAT NPC plans were observed between CS and MC calculations. Adjustments of the planning dose constraints to incorporate the physics differences from the conventional CS algorithm should be made when VMAT optimization is carried out directly

  14. Principles of application of mechanical design measures to control severe accident phenomena, applied to the melt retention concept of the EPR

    International Nuclear Information System (INIS)

    Bittermann, D.

    2000-01-01

    To retain and stabilize a core melt within the containment, the phenomena which principally have to be dealt with are related to melt discharge, spreading, retention and cooling, plus specific phenomena like melt dispersal and ex-vessel melt-water interaction. For the elaboration of mechanical design measures provided to stabilize a melt within the containment, boundary conditions may occur which could pose extremely high thermal and mechanical loads on the structures. This paper describes an approach characterized by the idea of influencing the course of severe accident scenarios as much as possible in order to generate boundary conditions for the mitigation means ''by design'', which enables the development of a mitigation concept with maximum confidence in the effectiveness of the measures provided. (orig.)

  15. The Same Story or a Unique Novel? Within-Participant Principal Component Analysis of Training Load Measures in Professional Rugby Union Skills Training.

    Science.gov (United States)

    Weaving, Dan; Dalton, Nicholas E; Black, Christopher; Darrall-Jones, Joshua; Phibbs, Padraic J; Gray, Michael; Jones, Ben; Roe, Gregory A B

    2018-03-27

    The study aimed to identify which combinations of external and internal training load (TL) metrics capture similar or unique information for individual professional players during skills training in rugby union, using principal component analysis (PCA). TL data were collected from twenty-one male professional rugby union players across a competitive season. This included PlayerLoad™, total distance (TD), and individualised high-speed distance (HSD; >61% maximal velocity; all external TL) obtained from a micro-technology device worn by each player (Optimeye X4, Catapult Innovations, Melbourne, Australia), and the session rating of perceived exertion (sRPE; internal TL). PCA was conducted on each individual to extract the underlying combinations of the four TL measures that best describe the total information (variance) provided by the measures. TL measures with PC loadings (PCL) above 0.7 were deemed to possess well-defined relationships with the extracted PC. The findings show that, of the four TL measures, the majority of an individual's TL information (1st PC: 55 to 70%) during skills training can be explained by either sRPE (PCL: 0.72 to 0.95), TD (PCL: 0.86 to 0.98) or PlayerLoad™ (PCL: 0.71 to 0.98). HSD was the only variable to relate to the 2nd PC (PCL: 0.72 to 1.00), which captured additional TL information (+19 to 28%). The findings suggest practitioners could quantify the TL of rugby union skills training with one of PlayerLoad™, TD, or sRPE, plus HSD, whilst limiting the amount of TL information omitted.
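
    A within-participant analysis of this kind is short to reproduce. The sketch below standardizes one player's four TL measures, runs PCA, and reports the loadings (eigenvectors scaled by the square root of the explained variance, i.e. measure-component correlations), flagging those above the 0.7 threshold used in the study; the session data are random placeholders constructed so that three measures share a common "volume" signal.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 40                               # training sessions for one player
volume = rng.normal(size=(n, 1))     # shared "volume" signal
hsd = rng.normal(size=(n, 1))        # independent high-speed component
# Placeholder sessions x [PlayerLoad, TD, sRPE, HSD]: the first three
# measures share the volume signal, HSD carries its own information.
sessions = np.hstack([volume + 0.3 * rng.normal(size=(n, 3)), hsd])
measures = ["PlayerLoad", "TD", "sRPE", "HSD"]

z = StandardScaler().fit_transform(sessions)
pca = PCA().fit(z)

# Loadings: eigenvectors scaled by sqrt(eigenvalue), so each entry is the
# correlation between a TL measure and a principal component.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

for i, var in enumerate(pca.explained_variance_ratio_):
    defined = [m for m, l in zip(measures, loadings[:, i]) if abs(l) > 0.7]
    print(f"PC{i + 1}: {var:.0%} of variance; well-defined: {defined}")
```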

  16. Fundamental principles of a new EM tool for in-situ resistivity measurement; Denji yudoho ni yoru den`ichi hiteiko sokutei sochi no kento

    Energy Technology Data Exchange (ETDEWEB)

    Noguchi, K; Aoki, H [Waseda University, Tokyo (Japan). School of Science and Engineering; Saito, A [Mitsui Mineral Development Engineering Co. Ltd., Tokyo (Japan)

    1996-05-01

    For the purpose of measuring in-situ resistivity without contact with the rock, a study was made of a measuring device using electromagnetic induction. The device has two concentric transmission loops and a receiving point at the centre of the loops, and performs focusing by cancelling the primary magnetic field at the receiving point. Using this device, a trial was made to eliminate the influence of surface undulation. In the model calculation, the response was calculated after the structure with a heavily undulated ground surface was replaced by a two-layer structure in which the first layer has the higher resistivity. In the model, the first layer had a resistivity of 10000 Ohm m, and the second layer 1000 Ohm m. Using the ratio between the transmission loop radii as a parameter, the relationship with the thickness of the first layer was studied, and it was found that the sensitivity to the second-layer resistivity increases when the inner and outer loops are closer to each other in radius, which eliminates the influence near the surface layer. The ratio must nevertheless be chosen within a range that assures good reception, because the response intensity decreases as the ratio between the transmission loop radii approaches 1. 3 refs., 11 figs.
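
    The focusing condition described here (zero primary field at the central receiving point) follows from the field at the centre of a circular loop, B = μ0·I/(2a): two concentric loops cancel there when their currents satisfy I_outer/I_inner = −a_outer/a_inner. A sketch with illustrative radii:

```python
import math

MU0 = 4.0e-7 * math.pi   # vacuum permeability (T*m/A)

def center_field(current, radius):
    """Flux density at the centre of a circular loop: mu0 * I / (2a)."""
    return MU0 * current / (2.0 * radius)

a_in, a_out = 0.5, 1.0          # illustrative loop radii (m)
i_in = 1.0                      # inner loop current (A)
i_out = -i_in * a_out / a_in    # opposing current cancels the primary field

b_rx = center_field(i_in, a_in) + center_field(i_out, a_out)
print(f"residual primary field at the receiver: {b_rx:.2e} T")   # 0.0
```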

  17. Attempt to model the edge turbulence of a tokamak as a random superposition of eddies

    Energy Technology Data Exchange (ETDEWEB)

    Endler, M; Theimer, G; Weinlich, M; Carlson, A; Giannone, L.; Niedermeyer, H; Rudyj, A [Max-Planck-Institut fuer Plasmaphysik, Garching (Germany)

    1993-12-31

    Turbulence is considered to be the most likely origin of the anomalous transport in tokamaks. Although the main interest is focussed on the bulk plasma, transport in the scrape-off layer is very important for reactor design. For this reason extensive experimental investigations of the edge turbulence were performed on the ASDEX divertor tokamak. Langmuir probe arrays were used in the floating-potential mode and in the ion-saturation mode to measure the poloidal distribution of density and plasma potential fluctuations, neglecting temperature fluctuations. Density fluctuations integrated radially over the boundary layer were derived from H{sub {alpha}} measurements. Data from up to 16 channels were sampled at a frequency of 1 MHz during time windows of 1 s. Often one parameter, such as the plasma density or the radial probe position, was scanned during this interval. It is impossible to derive physical mechanisms directly from these statistical observations. We draw general conclusions about the physics involved from the entirety of the observations and propose a set of basic effects to include in a theoretical model. Being still unable to solve the complex nonlinear problem of fully developed turbulence exactly, we attempt to describe the turbulence with a simple non-self-consistent statistical model. This allows plausible physical interpretations of several features of the statistical functions to be derived and may be used as a guideline for the development of a manageable theoretical model. (author) 6 refs., 3 figs.

  18. Realistic limits on the nonlocality of an N-partite single-photon superposition

    Energy Technology Data Exchange (ETDEWEB)

    Laghaout, Amine [Department of Physics, Technical University of Denmark, Building 309, DK-2800 Lyngby (Denmark); Bjoerk, Gunnar [Department of Applied Physics, Royal Institute of Technology (KTH), AlbaNova University Center, SE-106 91 Stockholm (Sweden); NORDITA, Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden); Andersen, Ulrik L. [Department of Physics, Technical University of Denmark, Building 309, DK-2800 Lyngby (Denmark); NORDITA, Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden)

    2011-12-15

    A recent paper [L. Heaney, A. Cabello, M. F. Santos, and V. Vedral, New J. Phys. 13, 053054 (2011)] revealed that a single quantum symmetrically delocalized over N modes, namely a W state, effectively allows for all-versus-nothing proofs of nonlocality in the limit of large N. Ideally, this finding opens up the possibility of using the robustness of the W states while realizing the nonlocal behavior previously thought to be exclusive to the more complex class of Greenberger-Horne-Zeilinger states. We show that in practice, however, the slightest decoherence or inefficiency of the Bell measurements on W states will degrade any violation margin gained by scaling to higher N. The nonstatistical demonstration of nonlocality is thus proved to be impossible in any realistic experiment.

  19. Limitations of Boltzmann's principle

    International Nuclear Information System (INIS)

    Lavenda, B.H.

    1995-01-01

    The usual form of Boltzmann's principle assures that maximum entropy, or entropy reduction, occurs with maximum probability, implying a unimodal distribution. Boltzmann's principle cannot be applied to nonunimodal distributions, like the arcsine law, because the entropy may be concave only over a limited portion of the interval. The method of subordination shows that the arcsine distribution corresponds to a process with a single degree of freedom, thereby confirming the invalidation of Boltzmann's principle. The fractalization of time leads to a new distribution in which arcsine and Cauchy distributions can coexist simultaneously for nonintegral degrees of freedom between √2 and 2

  20. Biomedical engineering principles

    CERN Document Server

    Ritter, Arthur B; Valdevit, Antonio; Ascione, Alfred N

    2011-01-01

    Introduction: Modeling of Physiological Processes; Cell Physiology and Transport; Principles and Biomedical Applications of Hemodynamics; A Systems Approach to Physiology; The Cardiovascular System; Biomedical Signal Processing; Signal Acquisition and Processing; Techniques for Physiological Signal Processing; Examples of Physiological Signal Processing; Principles of Biomechanics; Practical Applications of Biomechanics; Biomaterials; Principles of Biomedical Capstone Design; Unmet Clinical Needs; Entrepreneurship: Reasons why Most Good Designs Never Get to Market; An Engineering Solution in Search of a Biomedical Problem

  1. Modern electronic maintenance principles

    CERN Document Server

    Garland, DJ

    2013-01-01

    Modern Electronic Maintenance Principles reviews the principles of maintaining modern, complex electronic equipment, with emphasis on preventive and corrective maintenance. Unfamiliar subjects such as the half-split method of fault location, functional diagrams, and fault finding guides are explained. This book consists of 12 chapters and begins by stressing the need for maintenance principles and discussing the problem of complexity as well as the requirements for a maintenance technician. The next chapter deals with the connection between reliability and maintenance and defines the terms fai

  2. [Bioethics of principles].

    Science.gov (United States)

    Pérez-Soba Díez del Corral, Juan José

    2008-01-01

    Bioethics emerged in response to the technological problems of intervening in human life. With it emerged the problem of determining moral limits, because these seem exterior to the practice itself. The bioethics of principles takes its rationality from teleological thinking and from autonomism. This divergence reveals the epistemological fragility and the great difficulty of "moral" thinking. This is evident in the formulation of the principle of autonomy, which lacks the ethical content of Kant's proposal. We need a new ethical rationality, with a renewed reflection on principles that emerge from basic ethical experiences.

  3. Principles of dynamics

    CERN Document Server

    Hill, Rodney

    2013-01-01

    Principles of Dynamics presents classical dynamics primarily as an exemplar of scientific theory and method. This book is divided into three major parts concerned with gravitational theory of planetary systems; general principles of the foundations of mechanics; and general motion of a rigid body. Some of the specific topics covered are Keplerian Laws of Planetary Motion; gravitational potential and potential energy; and fields of axisymmetric bodies. The principles of work and energy, fictitious body-forces, and inertial mass are also looked into. Other specific topics examined are kinematics

  4. Hamilton's principle for beginners

    International Nuclear Information System (INIS)

    Brun, J L

    2007-01-01

    I find that students have difficulty with Hamilton's principle, at least the first time they come into contact with it, and therefore it is worth designing some examples to help students grasp its complex meaning. This paper supplies the simplest example to consolidate the learning of the quoted principle: that of a free particle moving along a line. Next, students are challenged to add gravity to reinforce the argument and, finally, a two-dimensional motion in a vertical plane is considered. Furthermore these examples force us to be very clear about such an abstract principle
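
    For reference, the free-particle example referred to here works out as follows (a standard textbook computation, restated rather than quoted from the paper):

```latex
S[x] = \int_{t_1}^{t_2} \tfrac{1}{2}\, m \dot{x}^{2}\, dt, \qquad
\delta S = 0 \;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}}
  - \frac{\partial L}{\partial x} = m\ddot{x} = 0,
```

    so the stationary paths are precisely the uniform straight-line motions. Adding gravity, L = ½mẋ² − mgx, turns the same calculation into mẍ = −mg.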

  5. Developing principles of growth

    DEFF Research Database (Denmark)

    Neergaard, Helle; Fleck, Emma

    of the principles of growth among women-owned firms. Using an in-depth case-study methodology, data were collected from women-owned firms in Denmark and Ireland, as these countries are similar in contextual terms, e.g. in population and business composition, both dominated by micro, small and medium-sized enterprises. ... Extending principles put forward in effectuation theory, we propose that women grow their firms according to five principles which enable women’s enterprises to survive in the face of crises such as the current financial world crisis.

  6. A New Principle in Physics: the Principle 'Finiteness', and Some Consequences

    International Nuclear Information System (INIS)

    Sternlieb, Abraham

    2010-01-01

    In this paper I propose a new principle in physics: the principle of 'finiteness'. It stems from the definition of physics as a science that deals (among other things) with measurable dimensional physical quantities. Since measurement results, including their errors, are always finite, the principle of finiteness postulates that the mathematical formulation of 'legitimate' laws of physics should prevent exactly zero or infinite solutions. Some consequences of the principle of finiteness are discussed, in general, and then more specifically in the fields of special relativity, quantum mechanics, and quantum gravity. The consequences are derived independently of any other theory or principle in physics. I propose 'finiteness' as a postulate (like the constancy of the speed of light in vacuum, 'c'), as opposed to a notion whose validity has to be corroborated by, or derived theoretically or experimentally from other facts, theories, or principles.

  7. A potentiostat featuring an integrator transimpedance amplifier for the measurement of very low currents—Proof-of-principle application in microfluidic separations and voltammetry

    Science.gov (United States)

    Koutilellis, G. D.; Economou, A.; Efstathiou, C. E.

    2016-03-01

    This work reports the design and construction of a novel potentiostat which features an integrator transimpedance amplifier as a current-monitoring unit. The integration approach addresses the limitations of the feedback resistor approach used for current monitoring in conventional potentiostat designs. In the present design, measurement of the current is performed by a precision switched integrator transimpedance amplifier operated in the dual sampling mode which enables sub-pA resolution. The potentiostat is suitable for measuring very low currents (typical dynamic range: 5 pA-4.7 μA) with a 16 bit resolution, and it can support 2-, 3- and 4-electrode cell configurations. Its operation was assessed by using it as a detection module in a home-made capillary electrophoresis system for the separation and amperometric detection of paracetamol and p-aminophenol at a 3-electrode microfluidic chip. The potential and limitations of the proposed potentiostat to implement fast potential-scan voltammetric techniques were demonstrated for the case of cyclic voltammetry.
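
    The current readout described above reduces to charge counting: with a switched integrator of feedback capacitance C_f, the input current is I = C_f·dV_out/dt, and the dual sampling mode differences two samples of the output ramp to reject offset and drift. The sketch below only illustrates that relation; the capacitance and timing values are invented.

```python
def integrator_current(v1, v2, dt, c_f):
    """Current from a switched integrator: I = C_f * (V2 - V1) / dt.

    Dual sampling (the difference of two samples of the output ramp)
    rejects the integrator's offset and low-frequency drift.
    """
    return c_f * (v2 - v1) / dt

# Hypothetical: a 10 pF feedback capacitor ramping 5 mV in 100 ms.
i = integrator_current(v1=0.000, v2=0.005, dt=0.1, c_f=10e-12)
print(f"I = {i * 1e12:.1f} pA")   # 0.5 pA, i.e. sub-pA-scale resolution
```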

  9. Rapid automated superposition of shapes and macromolecular models using spherical harmonics.

    Science.gov (United States)

    Konarev, Petr V; Petoukhov, Maxim V; Svergun, Dmitri I

    2016-06-01

    A rapid algorithm to superimpose macromolecular models in Fourier space is proposed and implemented (SUPALM). The method uses a normalized integrated cross-term of the scattering amplitudes as a proximity measure between two three-dimensional objects. The reciprocal-space algorithm allows for direct matching of heterogeneous objects, including high- and low-resolution models represented by atomic coordinates, beads or dummy residue chains, as well as electron microscopy density maps and inhomogeneous multi-phase models (e.g. of protein-nucleic acid complexes). Using spherical harmonics for the computation of the amplitudes, the method is up to an order of magnitude faster than the real-space algorithm implemented in SUPCOMB by Kozin & Svergun [J. Appl. Cryst. (2001), 34, 33-41]. The utility of the new method is demonstrated in a number of test cases and compared with the results of SUPCOMB. The spherical harmonics algorithm is best suited for low-resolution shape models, e.g. those provided by solution scattering experiments, but also facilitates a rapid cross-validation against structural models obtained by other methods.
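
    The proximity measure can be sketched in the abstract: given sets of multipole coefficients A_lm and B_lm of the two objects' scattering amplitudes, a normalized cross-term is their inner product divided by the two norms. The snippet below shows only that step, with random coefficients as placeholders; it is not the SUPALM implementation, which in addition searches over relative rotations and shifts.

```python
import numpy as np

def normalized_cross_term(a_lm, b_lm):
    """Normalized cross-term <A, B> / (|A| |B|) of two coefficient sets.

    a_lm, b_lm: complex arrays of multipole coefficients, flattened
    over (l, m).  Identical objects give 1.0.
    """
    cross = np.vdot(a_lm, b_lm).real
    return cross / (np.linalg.norm(a_lm) * np.linalg.norm(b_lm))

rng = np.random.default_rng(2)
a = rng.normal(size=64) + 1j * rng.normal(size=64)
b = a + 0.3 * (rng.normal(size=64) + 1j * rng.normal(size=64))
print(f"self-overlap : {normalized_cross_term(a, a):.3f}")
print(f"vs. perturbed: {normalized_cross_term(a, b):.3f}")
```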

  10. Design and manufacture of a three-counter channel system based on the delayed coincidence principle for 223Ra and 224Ra measurements

    International Nuclear Information System (INIS)

    Tuong Thi Thu Huong; Pham Ngoc Tuan; Dang Hong Ngoc Quy; Truong Van Dat; Tran Anh Khoi; Chau Thi Nhu Quynh

    2016-01-01

    The research group has designed and fabricated a radiation detection system for measuring low activities of 223Ra and 224Ra in natural waters, based on the design of Giffin et al (1963). Samples are obtained by adsorbing 223Ra and 224Ra onto a column of MnO2-coated fiber (Mn fiber). The short-lived Rn daughters of 223Ra and 224Ra which recoil from the Mn fiber are swept into a scintillation detector, where the alpha decays of Rn and Po occur. Signals from the detector are sent to a delayed coincidence circuit which discriminates decays of the 224Ra daughters, 220Rn and 216Po, from decays of the 223Ra daughters, 219Rn and 215Po. The main product of this project is a “Low Alpha counting system” based on digital technology. The system consists of the main electronic circuits: amplifier, single-channel analyzer, counters/timers, micro-processor, and an RS232 interface. Most of the above-mentioned components have been designed and fabricated using the ISE 10.1 software toolkits from Xilinx. The application program for controlling and collecting data from the device is written in LabView. In comparison with conventional analog circuits, this design is smaller and easier to use, owing to being connected to a personal computer through the RS232 interface for data acquisition and processing. This is also a new trend in the field of nuclear equipment development, aiming at simple design, cost saving (reuse of hardware components can further reduce the system development cost), flexibility (measurement parameters can be adjusted arbitrarily from software), and a user-friendly environment (the program is directly embedded into the FPGA). (author)
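
    The discrimination rests on the very different polonium half-lives: 215Po (t1/2 ≈ 1.8 ms) follows its parent 219Rn alpha almost immediately, while 216Po (t1/2 ≈ 0.15 s) follows 220Rn much more slowly. A detected alpha pair can therefore be routed by the time separating its two events; the window bounds in the sketch below are illustrative instrument settings, not values taken from this report.

```python
def classify_pair(dt, fast=(0.0002, 0.01), slow=(0.02, 0.6)):
    """Route a Rn-Po alpha pair by its time separation dt (seconds).

    215Po (t1/2 ~ 1.8 ms) tags the 223Ra chain; 216Po (t1/2 ~ 0.15 s)
    tags the 224Ra chain.  Window bounds here are illustrative only.
    """
    if fast[0] <= dt <= fast[1]:
        return "223Ra channel (219Rn -> 215Po)"
    if slow[0] <= dt <= slow[1]:
        return "224Ra channel (220Rn -> 216Po)"
    return "uncorrelated / rejected"

for dt in (0.002, 0.15, 2.0):
    print(f"dt = {dt:6.3f} s -> {classify_pair(dt)}")
```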

  11. Downlink Cooperative Broadcast Transmission Based on Superposition Coding in a Relaying System for Future Wireless Sensor Networks.

    Science.gov (United States)

    Liu, Yang; Han, Guangjie; Shi, Sulong; Li, Zhengquan

    2018-06-20

    This study investigates the superiority of cooperative broadcast transmission over traditional orthogonal schemes when applied in a downlink relaying broadcast channel (RBC). Two proposed cooperative broadcast transmission protocols, one with an amplify-and-forward (AF) relay, and the other with a repetition-based decode-and-forward (DF) relay, are investigated. By utilizing superposition coding (SupC), the source and the relay transmit the private user messages simultaneously instead of sequentially as in traditional orthogonal schemes, which means the channel resources are reused and an increased channel degree of freedom is available to each user, hence the half-duplex penalty of relaying is alleviated. To facilitate a performance evaluation, theoretical outage probability expressions of the two broadcast transmission schemes are developed, based on which, we investigate the minimum total power consumption of each scheme for a given traffic requirement by numerical simulation. The results provide details on the overall system performance and fruitful insights on the essential characteristics of cooperative broadcast transmission in RBCs. It is observed that better overall outage performances and considerable power gains can be obtained by utilizing cooperative broadcast transmissions compared to traditional orthogonal schemes.
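
    The SupC idea itself is compact: the source transmits a power-weighted sum of both users' symbols in the same channel use, and the receiver applies successive interference cancellation, decoding the strongly weighted message first and subtracting it before decoding the other. The baseband sketch below uses BPSK and illustrative parameters; it is not the paper's relaying protocol.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, alpha = 10_000, 1.0, 0.8          # symbols, total power, power split

s1 = rng.choice([-1.0, 1.0], size=n)    # user 1 (larger power share)
s2 = rng.choice([-1.0, 1.0], size=n)    # user 2
x = np.sqrt(alpha * p) * s1 + np.sqrt((1 - alpha) * p) * s2  # superposition

y = x + 0.05 * rng.normal(size=n)       # AWGN channel

s1_hat = np.sign(y)                          # decode the strong message first
y_sic = y - np.sqrt(alpha * p) * s1_hat      # cancel it (SIC) ...
s2_hat = np.sign(y_sic)                      # ... then decode the weak message

print("user-1 symbol errors:", int(np.sum(s1_hat != s1)))
print("user-2 symbol errors:", int(np.sum(s2_hat != s2)))
```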

  12. Quantum-phase dynamics of two-component Bose-Einstein condensates: Collapse-revival of macroscopic superposition states

    International Nuclear Information System (INIS)

    Nakano, Masayoshi; Kishi, Ryohei; Ohta, Suguru; Takahashi, Hideaki; Furukawa, Shin-ichi; Yamaguchi, Kizashi

    2005-01-01

    We investigate the long-time dynamics of two-component dilute-gas Bose-Einstein condensates with relatively different two-body interactions and Josephson couplings between the two components. Although in certain parameter regimes the quantum state of the system is known to evolve into a macroscopic superposition, i.e., a Schroedinger cat state, of two states with different relative atom numbers between the two components, the Schroedinger cat state is also found to repeat the collapse and revival behavior in the long-time region. The dynamical behavior of the Pegg-Barnett phase difference between the two components is shown to be closely connected with the dynamics of the relative atom number difference for different parameters. The variation in the relative magnitude between the Josephson coupling and the intra- and inter-component two-body interaction difference turns out to significantly change not only the size of the Schroedinger cat state but also its collapse-revival period, i.e., the lifetime of the Schroedinger cat state.

  13. Improved automatic estimation of winds at the cloud top of Venus using superposition of cross-correlation surfaces

    Science.gov (United States)

    Ikegawa, Shinichi; Horinouchi, Takeshi

    2016-06-01

    Accurate wind observation is a key to studying atmospheric dynamics. A new automated cloud-tracking method for the dayside of Venus is proposed and evaluated using the ultraviolet images obtained by the Venus Monitoring Camera onboard the Venus Express orbiter. It uses multiple images obtained successively over a few hours. Cross-correlations are computed from the pair combinations of the images and are superposed to identify cloud advection. It is shown that the superposition improves the accuracy of velocity estimation and significantly reduces the false pattern matches that cause large errors. Two methods to evaluate the accuracy of each of the obtained cloud motion vectors are proposed. One relies on the confidence bounds of the cross-correlation, with consideration of anisotropic cloud morphology. The other relies on the comparison of two independent estimations obtained by separating the successive images into two groups. The two evaluations can be combined to screen the results. It is shown that the accuracy of the screened vectors is very high equatorward of 30 degrees, while it is relatively low at higher latitudes. Analysis of these vectors supports the previously reported existence of day-to-day large-scale variability at the cloud deck of Venus, and further suggests smaller-scale features. The product of this study is expected to advance the study of the dynamics of the Venusian atmosphere.
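
    The core of the method, superposing correlation surfaces from several image pairs before locating a single peak, can be sketched with FFT-based correlation: false matches decorrelate across pairs while the true advection peak adds up. This is a schematic reimplementation on synthetic data, not the authors' code.

```python
import numpy as np

def correlation_surface(a, b):
    """Circular FFT-based cross-correlation of two equally sized tiles."""
    a = (a - a.mean()) / (a.std() * a.size)
    b = (b - b.mean()) / b.std()
    return np.real(np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)))

def superposed_shift(tile_pairs):
    """Sum the correlation surfaces of all pairs, then take one peak."""
    total = sum(correlation_surface(a, b) for a, b in tile_pairs)
    dy, dx = np.unravel_index(np.argmax(total), total.shape)
    n = total.shape[0]
    unwrap = lambda d: d - n if d > n // 2 else d   # undo FFT wrap-around
    return unwrap(dy), unwrap(dx)

# Synthetic check: the same random cloud field shifted by (3, 5) pixels.
rng = np.random.default_rng(4)
base = rng.normal(size=(64, 64))
pairs = [(base, np.roll(base, (3, 5), axis=(0, 1))) for _ in range(3)]
print(superposed_shift(pairs))   # -> (3, 5)
```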

  14. Vaccinology: principles and practice

    National Research Council Canada - National Science Library

    Morrow, John

    2012-01-01

    ... principles to implementation. This is an authoritative textbook that details a comprehensive and systematic approach to the science of vaccinology focusing on not only basic science, but the many stages required to commercialize...

  15. On the invariance principle

    Energy Technology Data Exchange (ETDEWEB)

    Moller-Nielsen, Thomas [University of Oxford (United Kingdom)

    2014-07-01

    Physicists and philosophers have long claimed that the symmetries of our physical theories - roughly speaking, those transformations which map solutions of the theory into solutions - can provide us with genuine insight into what the world is really like. According to this 'Invariance Principle', only those quantities which are invariant under a theory's symmetries should be taken to be physically real, while those quantities which vary under its symmetries should not. Physicists and philosophers, however, are generally divided (or, indeed, silent) when it comes to explaining how such a principle is to be justified. In this paper, I spell out some of the problems inherent in other theorists' attempts to justify this principle, and sketch my own proposed general schema for explaining how - and when - the Invariance Principle can indeed be used as a legitimate tool of metaphysical inference.

  16. Principles of applied statistics

    National Research Council Canada - National Science Library

    Cox, D. R; Donnelly, Christl A

    2011-01-01

    .... David Cox and Christl Donnelly distil decades of scientific experience into usable principles for the successful application of statistics, showing how good statistical strategy shapes every stage of an investigation...

  17. Minimum entropy production principle

    Czech Academy of Sciences Publication Activity Database

    Maes, C.; Netočný, Karel

    2013-01-01

    Vol. 8, No. 7 (2013), pp. 9664-9677. ISSN 1941-6016. Institutional support: RVO:68378271. Keywords: MINEP. Subject RIV: BE - Theoretical Physics. http://www.scholarpedia.org/article/Minimum_entropy_production_principle

  18. Global ethics and principlism.

    Science.gov (United States)

    Gordon, John-Stewart

    2011-09-01

    This article examines the special relation between common morality and particular moralities in the four-principles approach and its use for global ethics. It is argued that the special dialectical relation between common morality and particular moralities is the key to bridging the gap between ethical universalism and relativism. The four-principles approach is a good model for a global bioethics by virtue of its ability to mediate successfully between universal demands and cultural diversity. The principle of autonomy (i.e., the idea of individual informed consent), however, does need to be revised so as to make it compatible with alternatives such as family- or community-informed consent. The upshot is that the contribution of the four-principles approach to global ethics lies in the so-called dialectical process and its power to deal with cross-cultural issues against the background of universal demands by joining them together.

  19. Microprocessors principles and applications

    CERN Document Server

    Debenham, Michael J

    1979-01-01

    Microprocessors: Principles and Applications deals with the principles and applications of microprocessors and covers topics ranging from computer architecture and programmed machines to microprocessor programming, support systems and software, and system design. A number of microprocessor applications are considered, including data processing, process control, and telephone switching. This book is comprised of 10 chapters and begins with a historical overview of computers and computing, followed by a discussion on computer architecture and programmed machines, paying particular attention to t

  20. Microwave system engineering principles

    CERN Document Server

    Raff, Samuel J

    1977-01-01

    Microwave System Engineering Principles focuses on the calculus, differential equations, and transforms of microwave systems. This book discusses the basic nature and principles that can be derived from thermal noise; statistical concepts and binomial distribution; incoherent signal processing; basic properties of antennas; and beam widths and useful approximations. The fundamentals of propagation; LaPlace's Equation and Transmission Line (TEM) waves; interfaces between homogeneous media; modulation, bandwidth, and noise; and communications satellites are also deliberated in this text. This bo

  1. Electrical and electronic principles

    CERN Document Server

    Knight, SA

    1988-01-01

    Electrical and Electronic Principles, 3 focuses on the principles involved in electrical and electronic circuits, including impedance, inductance, capacitance, and resistance.The book first deals with circuit elements and theorems, D.C. transients, and the series circuits of alternating current. Discussions focus on inductance and resistance in series, resistance and capacitance in series, power factor, impedance, circuit magnification, equation of charge, discharge of a capacitor, transfer of power, and decibels and attenuation. The manuscript then examines the parallel circuits of alternatin

  2. Fundamental safety principles. Safety fundamentals

    International Nuclear Information System (INIS)

    2006-01-01

    This publication states the fundamental safety objective and ten associated safety principles, and briefly describes their intent and purpose. The fundamental safety objective - to protect people and the environment from harmful effects of ionizing radiation - applies to all circumstances that give rise to radiation risks. The safety principles are applicable, as relevant, throughout the entire lifetime of all facilities and activities - existing and new - utilized for peaceful purposes, and to protective actions to reduce existing radiation risks. They provide the basis for requirements and measures for the protection of people and the environment against radiation risks and for the safety of facilities and activities that give rise to radiation risks, including, in particular, nuclear installations and uses of radiation and radioactive sources, the transport of radioactive material and the management of radioactive waste

  3. Fundamental safety principles. Safety fundamentals

    International Nuclear Information System (INIS)

    2007-01-01

    This publication states the fundamental safety objective and ten associated safety principles, and briefly describes their intent and purpose. The fundamental safety objective - to protect people and the environment from harmful effects of ionizing radiation - applies to all circumstances that give rise to radiation risks. The safety principles are applicable, as relevant, throughout the entire lifetime of all facilities and activities - existing and new - utilized for peaceful purposes, and to protective actions to reduce existing radiation risks. They provide the basis for requirements and measures for the protection of people and the environment against radiation risks and for the safety of facilities and activities that give rise to radiation risks, including, in particular, nuclear installations and uses of radiation and radioactive sources, the transport of radioactive material and the management of radioactive waste

  4. TU-FG-209-05: Demonstration of the Line Focus Principle Using the Generalized Measured-Relative Object Detectability (GM-ROD) Metric

    Energy Technology Data Exchange (ETDEWEB)

    Russ, M; Shankar, A; Lau, A; Bednarek, D; Rudin, S [University at Buffalo (SUNY), Buffalo, NY (United States)

    2016-06-15

    Purpose: To demonstrate and quantify the augmented resolution due to focal-spot size decrease in images acquired on the anode side of the field, for both small and medium (0.3 and 0.6mm) focal-spot sizes, using the experimental task-based GM-ROD metric. Theoretical calculations have shown that a medium focal-spot can achieve the resolution of a small focal-spot if acquired with a tilted anode, effectively providing a higher-output small focal-spot. Methods: The MAF-CMOS (micro-angiographic fluoroscopic complementary-metal-oxide-semiconductor) detector (75µm pixel pitch) imaged two copper wire segments of different diameter and a pipeline stent at the central axis and on the anode side of the beam, achieved by tilting the x-ray C-arm (Toshiba Infinix) to 6° and realigning the detector with the perpendicular ray to correct for x-ray obliquity. The relative gain in resolution was determined using the GM-ROD metric, which compares images on the basis of the Fourier transform of the image and the measured NNPS. To emphasize the geometric unsharpness, images were acquired at a magnification of two. Results: Images acquired on the anode side were compared to those acquired on the central axis with the same target-area focal-spot to consider the effect of an angled tube, and for all three objects the advantage of the smaller effective focal-spot was clear, showing a maximum improvement of 36% in GM-ROD. The images obtained with the small focal-spot at the central axis were compared to those of the medium focal-spot at the anode side and, for all objects, the relative performance was comparable. Conclusion: For three objects, the GM-ROD demonstrated the advantage of the anode-side focal-spot. The comparable performance of the medium focal-spot on the anode side will allow for a high-output small focal-spot: a necessity in endovascular image-guided interventions. Partial support from an NIH grant R01EB002873 and an equipment grant from Toshiba Medical Systems Corp.

  6. Observer-Based Fuel Control Using Oxygen Measurement. A study based on a first-principles model of a pulverized coal fired Benson Boiler

    Energy Technology Data Exchange (ETDEWEB)

    Andersen, Palle; Bendtsen, Jan Dimon; Mortensen, Jan Henrik; Just Nielsen, Rene; Soendergaard Pedersen, Tom [Aalborg Univ. (Denmark). Dept. of Control Engineering

    2005-01-01

    This report describes an attempt to improve the existing control of the coal mills used at the Danish power plant Nordjyllandsvaerket Unit 3. The coal mills pulverize raw coal to a fine-grained powder, which is injected into the furnace of the power plant. In the furnace the coal is combusted, producing heat, which is used for steam production. With better control of the coal mills, the power plant can be controlled more efficiently during load changes, thus improving the overall availability and efficiency of the plant. One of the main difficulties from a control point of view is that the coal mills are not equipped with sensors that detect how much coal is injected into the furnace. During the project, a fairly detailed, non-linear differential equation model of the furnace and the steam circuit was constructed and validated against data obtained at the plant. It was observed that this model was able to capture most of the important dynamics found in the data. Based on this model, it is possible to extract linearized models in various operating points. The report discusses this approach and illustrates how the model can be linearized and reduced to a lower-order linear model that is valid in the vicinity of an operating point, by removing states that have little influence on the overall response. A viable adaptive control strategy would then be to design controllers for each of these simplified linear models, i.e., for the control loop that sets references to the coal mills and feedwater, and to use the load as a separate input to the control. The control gains should then be scheduled according to the load. However, the variations and uncertainties in the coal mill are not addressed directly in this approach. Another control approach was taken in this project, where a Kalman filter based on measurements of the air flow blown into the furnace and the oxygen concentration in the flue gas is designed to estimate the actual coal flow injected into the furnace. With this estimate
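
    The observer idea can be reduced to a toy example: treat the unknown coal flow as a slowly varying state and let the oxygen measurement correct it through a static combustion relation (more injected coal consumes more O2 from a known air flow). The one-state Kalman filter below uses made-up coefficients and serves only to show the structure of such an estimator, not the plant model developed in the report.

```python
import numpy as np

rng = np.random.default_rng(5)

def o2_fraction(coal_flow, air_flow):
    """Crude static combustion relation (made-up coefficients): the
    flue-gas O2 fraction drops as more coal burns in a given air flow."""
    return 0.21 - 1.5 * coal_flow / air_flow

air = 100.0                  # known air flow (arbitrary units)
h = -1.5 / air               # measurement sensitivity dO2/d(coal flow)
q, r = 1e-4, 1e-6            # process / measurement noise covariances
m_hat, p = 5.0, 1.0          # initial coal-flow estimate and covariance
true_m = 6.0                 # actual, unmeasured coal flow

for _ in range(50):
    z = o2_fraction(true_m, air) + 1e-3 * rng.normal()   # noisy O2 reading
    p += q                                       # predict (random-walk state)
    k = p * h / (h * p * h + r)                  # Kalman gain
    m_hat += k * (z - o2_fraction(m_hat, air))   # correct with O2 residual
    p *= 1 - k * h

print(f"estimated coal flow: {m_hat:.2f} (true value: {true_m})")
```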

  7. Regulating food law : risk analysis and the precautionary principle as general principles of EU food law

    NARCIS (Netherlands)

    Szajkowska, A.

    2012-01-01

    In food law scientific evidence occupies a central position. This study offers a legal insight into risk analysis and the precautionary principle, positioned in the EU as general principles applicable to all food safety measures, both national and EU. It develops a new method of looking at these

  8. A Principle of Intentionality.

    Science.gov (United States)

    Turner, Charles K

    2017-01-01

    The mainstream theories and models of the physical sciences, including neuroscience, are all consistent with the principle of causality. Wholly causal explanations make sense of how things go, but are inherently value-neutral, providing no objective basis for true beliefs being better than false beliefs, nor for it being better to intend wisely than foolishly. Dennett (1987) makes a related point in calling the brain a syntactic (procedure-based) engine. He says that you cannot get to a semantic (meaning-based) engine from there. He suggests that folk psychology revolves around an intentional stance that is independent of the causal theories of the brain, and accounts for constructs such as meanings, agency, true belief, and wise desire. Dennett proposes that the intentional stance is so powerful that it can be developed into a valid intentional theory. This article expands Dennett's model into a principle of intentionality that revolves around the construct of objective wisdom. This principle provides a structure that can account for all mental processes, and for the scientific understanding of objective value. It is suggested that science can develop a far more complete worldview with a combination of the principles of causality and intentionality than would be possible with scientific theories that are consistent with the principle of causality alone.

  9. General principles of radiotherapy

    International Nuclear Information System (INIS)

    Easson, E.C.

    1985-01-01

    The daily practice of any established branch of medicine should be based on some acceptable principles. This chapter is concerned with the general principles on which the radiotherapy of the Manchester school is based. Though many radiotherapists in other centres would doubtless accept these principles, there are sufficiently wide differences in practice throughout the world to suggest that some therapists adhere to a fundamentally different philosophy. The authors believe it is important, especially for those beginning their formal training in radiotherapy, to subscribe to an internally consistent school of thought, employing methods of treatment for each type of lesion in each anatomical site that are based on accepted principles and subjected to continuous rigorous scrutiny to test their effectiveness. Not only must each therapeutic technique be evaluated, but the underlying principles too must be questioned if and when this seems indicated. It is a feature of this hospital that similar lesions are all treated by the same technique, so long as statistical evidence justifies such a policy. All members of the staff adhere to the accepted policy until or unless reliable reasons are adduced to change this policy

  10. The traveltime holographic principle

    Science.gov (United States)

    Huang, Yunsong; Schuster, Gerard T.

    2015-01-01

    Fermat's interferometric principle is used to compute interior transmission traveltimes τpq from exterior transmission traveltimes τsp and τsq. Here, the exterior traveltimes are computed for sources s on a boundary B that encloses a volume V of interior points p and q. Once the exterior traveltimes are computed, no further ray tracing is needed to calculate the interior times τpq. Therefore this interferometric approach can be more efficient than explicitly computing interior traveltimes τpq by ray tracing. Moreover, the memory requirement of the traveltimes is reduced by one dimension, because the boundary B is of one fewer dimension than the volume V. An application of this approach is demonstrated with interbed multiple (IM) elimination. Here, the IMs in the observed data are predicted from the migration image and are subsequently removed by adaptive subtraction. This prediction is enabled by the knowledge of interior transmission traveltimes τpq computed according to Fermat's interferometric principle. We denote this principle as the `traveltime holographic principle', by analogy with the holographic principle in cosmology where information in a volume is encoded on the region's boundary.

  11. Ethical principles of scientific communication

    Directory of Open Access Journals (Sweden)

    Baranov G. V.

    2017-03-01

    The article presents the principles of the ethical management of scientific communication. The author affirms the priority of the ethical principle of the social responsibility of the scientist.

  12. Coherent population transfer and superposition of atomic states via stimulated Raman adiabatic passage using an excited-doublet four-level atom

    International Nuclear Information System (INIS)

    Jin Shiqi; Gong Shangqing; Li Ruxin; Xu Zhizhan

    2004-01-01

    Coherent population transfer and superposition of atomic states via a technique of stimulated Raman adiabatic passage in an excited-doublet four-level atomic system have been analyzed. It is shown that the behavior of adiabatic passage in this system depends crucially on the detunings between the laser frequencies and the corresponding atomic transition frequencies. Particularly, if both the fields are tuned to the center of the two upper levels, the four-level system has two degenerate dark states, although one of them contains the contribution from the excited atomic states. The nonadiabatic coupling of the two degenerate dark states is intrinsic, it originates from the energy difference of the two upper levels. An arbitrary superposition of atomic states can be prepared due to such nonadiabatic coupling effect
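
    For context, in the plain three-level Λ system the STIRAP transfer proceeds through the familiar dark state (a textbook result, quoted here only as background to the four-level case discussed above):

```latex
|D(\theta)\rangle = \cos\theta\,|1\rangle - \sin\theta\,|3\rangle,
\qquad
\tan\theta = \frac{\Omega_{P}(t)}{\Omega_{S}(t)},
```

    where Ω_P and Ω_S are the pump and Stokes Rabi frequencies; the counterintuitive pulse order (Stokes before pump) rotates θ from 0 to π/2 and carries the population from |1⟩ to |3⟩ without populating the lossy intermediate state.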

  13. Nonlinear superposition of monopoles

    International Nuclear Information System (INIS)

    Forgacs, P.; Horvath, Z.; Palla, L.

    1981-04-01

    With the aid of Baecklund transformations the authors construct exact multimonopole solutions of the axially and mirror-symmetric Bogomolny equations. The explicit form of the length of the Higgs field is given and is studied both analytically and numerically. The energy density for monopoles with charges 2,3,4,5 is also calculated. (author)

  14. An Analysis of Dynamic Instability on TC-Like Vortex Using the Regularization-Based Eigenmode Linear Superposition Method

    Directory of Open Access Journals (Sweden)

    Shuang Liu

    2018-01-01

    In this paper, the eigenmode linear superposition (ELS) method based on regularization is used to discuss the distributions of all eigenmodes and the role of their instability in the intensity and structure change of a TC-like vortex. Results show that the regularization approach can overcome the ill-posed problem occurring in solving the mode weight coefficients when the ELS method is applied to analyze the impacts of dynamic instability on the intensity and structure change of the TC-like vortex. The generalized cross-validation (GCV) method and the L-curve method are used to determine the regularization parameters, and the results of the two approaches are compared. It is found that the results based on the GCV method are closer to the given initial condition in the solution of the inverse problem of the vortex system. The instability characteristics of the hollow vortex taken as the basic state are then examined based on the linear barotropic shallow-water equations. It is shown that the wavenumber distribution of the system instability obtained from the ELS method is well consistent with that of the numerical analysis based on the normal modes. On the other hand, the evolution of the hollow vortex is discussed using the product of each eigenmode and its corresponding weight coefficient. Results show that the intensity and structure change of the system are mainly affected by the dynamic instability in the early stage of disturbance development, and the most unstable mode has a dominant role in the growth rate and the horizontal distribution of intense disturbance in the near-core region. Moreover, the wave structure of the most unstable mode possesses the typical characteristics of mixed vortex Rossby-inertio-gravity waves (VRIGWs).
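
    The regularization step referred to here is, in its generic form, a Tikhonov-damped least-squares solve for the mode-weight coefficients, with the damping parameter chosen by GCV. The sketch below implements that generic scheme through the SVD; it is not the authors' code, and the mode matrix and data are random placeholders.

```python
import numpy as np

def tikhonov_gcv(g, d, lams):
    """Solve min ||G w - d||^2 + lam^2 ||w||^2, picking lam by GCV."""
    u, s, vt = np.linalg.svd(g, full_matrices=False)
    beta = u.T @ d
    best = (np.inf, None, None)
    for lam in lams:
        f = s**2 / (s**2 + lam**2)          # Tikhonov filter factors
        w = vt.T @ (f * beta / s)
        resid = np.linalg.norm(g @ w - d) ** 2
        dof = g.shape[0] - f.sum()          # effective residual dof
        gcv = resid / dof**2                # GCV function to minimize
        if gcv < best[0]:
            best = (gcv, lam, w)
    return best[1], best[2]

rng = np.random.default_rng(6)
g = rng.normal(size=(50, 20))               # placeholder mode matrix
w_true = rng.normal(size=20)
d = g @ w_true + 0.05 * rng.normal(size=50) # noisy observation

lam, w = tikhonov_gcv(g, d, np.logspace(-4, 1, 60))
print(f"GCV-chosen lambda = {lam:.3g}, "
      f"weight error = {np.linalg.norm(w - w_true):.3f}")
```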

  15. Geometrical correction for the inter- and intramolecular basis set superposition error in periodic density functional theory calculations.

    Science.gov (United States)

    Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan

    2013-09-26

    We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.
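
    For orientation, the Boys-Bernardi counterpoise scheme that gCP approximates geometrically is, for a dimer AB (a standard definition, restated here):

```latex
E_{\mathrm{int}}^{\mathrm{CP}}
  = E_{AB}^{a \cup b} - E_{A}^{a \cup b} - E_{B}^{a \cup b},
```

    where the superscript denotes the basis: each monomer is recomputed in the full dimer basis, with the partner's atoms as ghost sites carrying only basis functions. gCP replaces these extra ghost-basis calculations with a pairwise, geometry-dependent estimate of the same error.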

  16. North-south asymmetry of solar activity as a superposition of two realizations - the sign and absolute value

    Science.gov (United States)

    Badalyan, O. G.; Obridko, V. N.

    2017-07-01

    Context. Since the occurrence of north-south asymmetry (NSA) of alternating sign may be determined by different mechanisms, the frequency and amplitude characteristics of this phenomenon should be considered separately. Aims: We propose a new approach to the description of the NSA of solar activity. Methods: The asymmetry, defined as A = (N - S)/(N + S) (where N and S are, respectively, the activity indices of the northern and southern hemispheres), is treated as a superposition of two functions: the sign of the asymmetry (signature) and its absolute value (modulus). This approach is applied to the analysis of the NSA of sunspot group areas for the period 1874-2013. Results: We show that the sign of the asymmetry provides information on the behavior of the asymmetry. In particular, it displays quasi-periodic variation with a period of 12 yr and quasi-biennial oscillations, as the asymmetry itself does. The statistics of the so-called monochrome intervals (long periods of positive or negative asymmetry) are considered, and it is shown that the distribution of these intervals follows a random distribution law. This means that the dynamo mechanisms governing the cyclic variation of solar activity must involve random processes. At the same time, the asymmetry modulus has completely different statistical properties and is probably associated with processes that determine the amplitude of the cycle. One can reliably isolate an 11-yr cycle in the behavior of the asymmetry absolute value, shifted by half a period with respect to the Wolf numbers. It is shown that the asymmetry modulus has a significant prognostic value: the higher the maximum of the asymmetry modulus, the lower the following Wolf number maximum. Conclusions: The fundamental nature of this concept of NSA is discussed in the context of the general methodology of cognizing the world. It is expected that the proposed description of the NSA will help clarify the nature of this phenomenon.
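
    The decomposition used in the record, and the "monochrome interval" statistic, are straightforward to compute. This sketch assumes N and S are given as arrays of monthly activity indices; it is illustrative only.

        import numpy as np

        def decompose_asymmetry(n_index, s_index):
            """Split A = (N - S)/(N + S) into sign (signature) and modulus."""
            a = (np.asarray(n_index) - np.asarray(s_index)) / (np.asarray(n_index) + np.asarray(s_index))
            return np.sign(a), np.abs(a)

        def monochrome_intervals(signs):
            """Lengths of runs of constant asymmetry sign ('monochrome intervals')."""
            runs, length = [], 1
            for prev, cur in zip(signs[:-1], signs[1:]):
                if cur == prev:
                    length += 1
                else:
                    runs.append(length)
                    length = 1
            runs.append(length)
            return runs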

  17. Ethical principles and theories.

    Science.gov (United States)

    Schultz, R C

    1993-01-01

    Ethical theory about what is right and good in human conduct lies behind the issues practitioners face and the codes they turn to for guidance; it also provides guidance for actions, practices, and policies. Principles of obligation, such as egoism, utilitarianism, and deontology, offer general answers to the question, "Which acts/practices are morally right?" A re-emerging alternative to using such principles to assess individual conduct is to center normative theory on personal virtues. For structuring society's institutions, principles of social justice offer alternative answers to the question, "How should social benefits and burdens be distributed?" But human concerns about right and good call for more than just theoretical responses. Some critics (eg, the postmodernists and the feminists) charge that normative ethical theorizing is a misguided enterprise. However, that charge should be taken as a caution and not as a refutation of normative ethical theorizing.

  18. Principles of musical acoustics

    CERN Document Server

    Hartmann, William M

    2013-01-01

    Principles of Musical Acoustics focuses on the basic principles in the science and technology of music. Musical examples and specific musical instruments demonstrate the principles. The book begins with a study of vibrations and waves, in that order. These topics constitute the basic physical properties of sound, one of two pillars supporting the science of musical acoustics. The second pillar is the human element, the physiological and psychological aspects of acoustical science. The perceptual topics include loudness, pitch, tone color, and localization of sound. With these two pillars in place, it is possible to go in a variety of directions. The book treats in turn, the topics of room acoustics, audio both analog and digital, broadcasting, and speech. It ends with chapters on the traditional musical instruments, organized by family. The mathematical level of this book assumes that the reader is familiar with elementary algebra. Trigonometric functions, logarithms and powers also appear in the book, but co...

  19. The precautionary principle in international environmental law and international jurisprudence

    OpenAIRE

    Tubić, Bojan

    2014-01-01

    This paper analyses the international regulation of the precautionary principle as one of the environmental principles. This principle envisages that when there are threats of serious and irreparable harm, as a consequence of certain economic activity, the lack of scientific evidence and full certainty cannot be used as a reason for postponing efficient measures for preventing environmental harm. From the economic point of view, the application of the precautionary principle is problematic, because it create...

  20. Mechanical engineering principles

    CERN Document Server

    Bird, John

    2014-01-01

    A student-friendly introduction to core engineering topicsThis book introduces mechanical principles and technology through examples and applications, enabling students to develop a sound understanding of both engineering principles and their use in practice. These theoretical concepts are supported by 400 fully worked problems, 700 further problems with answers, and 300 multiple-choice questions, all of which add up to give the reader a firm grounding on each topic.The new edition is up to date with the latest BTEC National specifications and can also be used on undergraduate courses in mecha

  1. Principles of Optics

    Science.gov (United States)

    Born, Max; Wolf, Emil

    1999-10-01

    Principles of Optics is one of the classic science books of the twentieth century, and probably the most influential book in optics published in the past forty years. This edition has been thoroughly revised and updated, with new material covering the CAT scan, interference with broad-band light and the so-called Rayleigh-Sommerfeld diffraction theory. This edition also details scattering from inhomogeneous media and presents an account of the principles of diffraction tomography to which Emil Wolf has made a basic contribution. Several new appendices are also included. This new edition will be invaluable to advanced undergraduates, graduate students and researchers working in most areas of optics.

  2. Electrical principles 3 checkbook

    CERN Document Server

    Bird, J O

    2013-01-01

    Electrical Principles 3 Checkbook aims to introduce students to the basic electrical principles needed by technicians in electrical engineering, electronics, and telecommunications.The book first tackles circuit theorems, single-phase series A.C. circuits, and single-phase parallel A.C. circuits. Discussions focus on worked problems on parallel A.C. circuits, worked problems on series A.C. circuits, main points concerned with D.C. circuit analysis, worked problems on circuit theorems, and further problems on circuit theorems. The manuscript then examines three-phase systems and D.C. transients

  3. Principles of statistics

    CERN Document Server

    Bulmer, M G

    1979-01-01

    There are many textbooks which describe current methods of statistical analysis, while neglecting related theory. There are equally many advanced textbooks which delve into the far reaches of statistical theory, while bypassing practical applications. But between these two approaches is an unfilled gap, in which theory and practice merge at an intermediate level. Professor M. G. Bulmer's Principles of Statistics, originally published in 1965, was created to fill that need. The new, corrected Dover edition of Principles of Statistics makes this invaluable mid-level text available once again fo

  4. Difference Principle and Black-hole Thermodynamics

    OpenAIRE

    Martin, Pete

    2009-01-01

    The heuristic principle that constructive dynamics may arise wherever there exists a difference, or gradient, is discussed. Consideration of black-hole entropy appears to provide a clue for setting a lower bound on any extensive measure of such collective system difference, or potential to give rise to constructive dynamics. It is seen that the second-power dependence of black-hole entropy on mass is consistent with the difference principle, while consideration of Hawking radiation forces one...

  5. Dark matter and the equivalence principle

    Science.gov (United States)

    Frieman, Joshua A.; Gradwohl, Ben-Ami

    1993-01-01

    A survey is presented of the current understanding of dark matter invoked by astrophysical theory and cosmology. Einstein's equivalence principle asserts that local measurements cannot distinguish a system at rest in a gravitational field from one that is in uniform acceleration in empty space. Recent test-methods for the equivalence principle are presently discussed as bases for testing of dark matter scenarios involving the long-range forces between either baryonic or nonbaryonic dark matter and ordinary matter.

  6. Measuring library performance principles and techniques

    CERN Document Server

    Brophy, Peter

    2006-01-01

    Provides an account of thinking and research on the evaluation of library services. Illustrated throughout with examples across the different library sectors, this book is structured to focus on the intended service user, then to look at service management and the building blocks of services, and finally to draw together these strands.

  7. PRINCIPLES AND PROCEDURES ON FISCAL

    Directory of Open Access Journals (Sweden)

    Morar Ioan Dan

    2011-07-01

    Full Text Available Fiscal science appears in most analytical treatments of taxation, and the principles underlying taxes are reiterated by specialists in the field in numerous specialized works. The two components of taxation, the theoretical tax system and the practical procedures of taxation, are marked by frequent references to, and invocations of, these underlying principles. This paper attempts to revisit fiscal equity in a general vision, as a principle often invoked to justify tax policies and yet just as often violated by tax legislation. It also seeks to emphasize the importance of devising fiscal procedures that ensure equitable treatment of taxpayers. The specific approach of this paper rests on the notion that tax equity is based on equality before the tax, and that social policies of the executive, pursued directly, would be more effective than the use of tax instruments for the same ends. Where the scientific justification of unequal treatment under tax law rests on the diverse social problems of taxpayers, it departs from tax fairness and instead argues for promoting social policies, which are usually more attractive to taxpayers. Modern tax techniques are believed to be promoted chiefly to secure ever higher collection efficiency at the expense of the taxpayers' equality before the tax law. Tax inequities, on the other hand, provoke reactions from many quarters, beginning with the budget plan, yet the outcomes of unfair measures cannot be quantified and the timing of the reaction is usually unknown, even though statistics show fluctuations in budgetary revenues and the literature offers reviews and analyses pointing to a connection between changes in government policy, budget execution and outcomes. The effects of tax inequity on tax procedures and budgetary revenues are difficult to quantify and are among the subjects of this work, as is the provision of tax equity without conflating it with the principles of discrimination and neutrality.

  8. Experimental tests of a superposition hypothesis to explain the relationship between the vestibuloocular reflex and smooth pursuit during horizontal combined eye-head tracking in humans

    Science.gov (United States)

    Huebner, W. P.; Leigh, R. J.; Seidman, S. H.; Thomas, C. W.; Billian, C.; DiScenna, A. O.; Dell'Osso, L. F.

    1992-01-01

    1. We used a modeling approach to test the hypothesis that, in humans, the smooth pursuit (SP) system provides the primary signal for cancelling the vestibuloocular reflex (VOR) during combined eye-head tracking (CEHT) of a target moving smoothly in the horizontal plane. Separate models for SP and the VOR were developed. The optimal values of parameters of the two models were calculated using measured responses of four subjects to trials of SP and the visually enhanced VOR. After optimal parameter values were specified, each model generated waveforms that accurately reflected the subjects' responses to SP and vestibular stimuli. The models were then combined into a CEHT model wherein the final eye movement command signal was generated as the linear summation of the signals from the SP and VOR pathways. 2. The SP-VOR superposition hypothesis was tested using two types of CEHT stimuli, both of which involved passive rotation of subjects in a vestibular chair. The first stimulus consisted of a "chair brake" or sudden stop of the subject's head during CEHT; the visual target continued to move. The second stimulus consisted of a sudden change from the visually enhanced VOR to CEHT ("delayed target onset" paradigm); as the vestibular chair rotated past the angular position of the stationary visual stimulus, the latter started to move in synchrony with the chair. Data collected during experiments that employed these stimuli were compared quantitatively with predictions made by the CEHT model. 3. During CEHT, when the chair was suddenly and unexpectedly stopped, the eye promptly began to move in the orbit to track the moving target. Initially, gaze velocity did not completely match target velocity, however; this finally occurred approximately 100 ms after the brake onset. The model did predict the prompt onset of eye-in-orbit motion after the brake, but it did not predict that gaze velocity would initially be only approximately 70% of target velocity. One possible
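
    A toy discrete-time rendering of the linear-summation (superposition) hypothesis tested above: the eye-in-orbit command is the sum of a smooth-pursuit pathway and a VOR pathway. The first-order pursuit dynamics, the gains, and the chair-brake stimulus below are illustrative assumptions, not the fitted models of the study.

        import numpy as np

        def simulate_ceht(target_vel, head_vel, dt=0.001, k_sp=30.0, vor_gain=1.0):
            """Eye-in-orbit velocity = SP output + VOR output (linear summation).
            SP is modeled as an integrator of retinal slip; VOR as -gain * head velocity."""
            n = len(target_vel)
            eye = np.zeros(n)      # eye-in-orbit velocity
            sp = 0.0               # smooth-pursuit pathway state
            for i in range(1, n):
                gaze = eye[i - 1] + head_vel[i - 1]   # gaze = eye-in-orbit + head
                slip = target_vel[i - 1] - gaze       # retinal slip drives pursuit
                sp += dt * k_sp * slip
                eye[i] = sp - vor_gain * head_vel[i]  # superposition of the two pathways
            return eye

        # Chair-brake demo: head and target move together at 20 deg/s; head stops at t = 1 s.
        t = np.arange(0.0, 2.0, 0.001)
        target = np.full_like(t, 20.0)
        head = np.where(t < 1.0, 20.0, 0.0)
        eye = simulate_ceht(target, head)  # eye velocity rises toward 20 deg/s after the brake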

  9. Electrical contacts principles and applications

    CERN Document Server

    Slade, Paul G

    2013-01-01

    Covering the theory, application, and testing of contact materials, Electrical Contacts: Principles and Applications, Second Edition introduces a thorough discussion on making electric contact and contact interface conduction; presents a general outline of, and measurement techniques for, important corrosion mechanisms; considers the results of contact wear when plug-in connections are made and broken; investigates the effect of thin noble metal plating on electronic connections; and relates crucial considerations for making high- and low-power contact joints. It examines contact use in switch

  10. The Principles of Readability

    Science.gov (United States)

    DuBay, William H.

    2004-01-01

    The principles of readability are in every style manual. Readability formulas are in every writing aid. What is missing is the research and theory on which they stand. This short review of readability research spans 100 years. The first part covers the history of adult literacy studies in the U.S., establishing the stratified nature of the adult…

  11. Principles of electrodynamics

    CERN Document Server

    Schwartz, Melvin

    1972-01-01

    This advanced undergraduate- and graduate-level text by the 1988 Nobel Prize winner establishes the subject's mathematical background, reviews the principles of electrostatics, then introduces Einstein's special theory of relativity and applies it throughout the book in topics ranging from Gauss' theorem and Coulomb's law to electric and magnetic susceptibility.

  12. Principles of Bridge Reliability

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, Andrzej S.

    The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated, and as an example the reliability profile and a sensitivity analysis for a corroded reinforced concrete bridge are shown.

  13. The Idiom Principle Revisited

    Science.gov (United States)

    Siyanova-Chanturia, Anna; Martinez, Ron

    2015-01-01

    John Sinclair's Idiom Principle famously posited that most texts are largely composed of multi-word expressions that "constitute single choices" in the mental lexicon. At the time that assertion was made, little actual psycholinguistic evidence existed in support of that holistic, "single choice," view of formulaic language. In…

  14. The Pauli Exclusion Principle

    Indian Academy of Sciences (India)

    his exclusion principle, the quantum theory was a mess. Moreover, it could ... This is a function of all the coordinates and 'internal variables' such as spin, of all the ... must remain basically the same (ie change by a phase factor at most) if we ...

  15. The traveltime holographic principle

    KAUST Repository

    Huang, Y.; Schuster, Gerard T.

    2014-01-01

    Fermat's interferometric principle is used to compute interior transmission traveltimes τpq from exterior transmission traveltimes τsp and τsq. Here, the exterior traveltimes are computed for sources s on a boundary B that encloses a volume V of interior points p and q. Once the exterior traveltimes are computed, no further ray tracing is needed to calculate the interior times τpq. Therefore this interferometric approach can be more efficient than explicitly computing interior traveltimes τpq by ray tracing. Moreover, the memory requirement of the traveltimes is reduced by one dimension, because the boundary B is of one fewer dimension than the volume V. An application of this approach is demonstrated with interbed multiple (IM) elimination. Here, the IMs in the observed data are predicted from the migration image and are subsequently removed by adaptive subtraction. This prediction is enabled by the knowledge of interior transmission traveltimes τpq computed according to Fermat's interferometric principle. We denote this principle as the ‘traveltime holographic principle’, by analogy with the holographic principle in cosmology where information in a volume is encoded on the region's boundary.
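
    One plausible numerical reading of the construction (an assumption on our part, based on the triangle inequality for traveltimes rather than on the authors' code): since τsp ≤ τsq + τqp for any boundary source s, with equality when q lies on the ray from s to p, the interior time is the tightest such bound over the boundary.

        import numpy as np

        def interior_traveltime(tau_sp, tau_sq):
            """Estimate tau_pq from exterior traveltime tables tau_sp, tau_sq
            (one entry per boundary source s). The triangle inequality gives
            |tau_sp - tau_sq| <= tau_pq, with equality for a source whose ray
            through one interior point passes through the other."""
            return float(np.max(np.abs(np.asarray(tau_sp) - np.asarray(tau_sq))))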

  16. The Bohr Correspondence Principle

    Indian Academy of Sciences (India)

    IAS Admin

    Deepak Dhar. Keywords: correspondence principle, hydrogen atom, Kepler orbit. Deepak Dhar works at the Tata Institute of Fundamental Research, Mumbai. His research interests are mainly in the area of statistical physics. We consider the quantum-mechanical non-relativistic hydrogen atom. We show that for bound.

  17. Principles of Protocol Design

    DEFF Research Database (Denmark)

    Sharp, Robin

    This is a new and updated edition of a book first published in 1994. The book introduces the reader to the principles used in the construction of a large range of modern data communication protocols, as used in distributed computer systems of all kinds. The approach taken is rather a formal one...

  19. Fermat's Principle Revisited.

    Science.gov (United States)

    Kamat, R. V.

    1991-01-01

    A principle is presented to show that, if the time of passage of light is expressible as a function of discrete variables, one may dispense with the more general method of the calculus of variations. The calculus of variations and the alternative are described. The phenomenon of mirage is discussed. (Author/KR)

  20. Principles of economics textbooks

    DEFF Research Database (Denmark)

    Madsen, Poul Thøis

    2012-01-01

    Has the financial crisis already changed US principles of economics textbooks? Rather little has changed in individual textbooks, but taken as a whole, the ten best-selling textbooks suggest rather encompassing changes to the core curriculum. A critical analysis of these changes shows how individual

  1. Measurement of bone mineral density in the tunnel regions for anterior cruciate ligament reconstruction by dual-energy X-ray absorptiometry, computed tomography scan, and the immersion technique based on Archimedes' principle.

    Science.gov (United States)

    Tie, Kai; Wang, Hua; Wang, Xin; Chen, Liaobin

    2012-10-01

    To determine, for anterior cruciate ligament (ACL) reconstruction, whether the bone mineral density (BMD) of the femoral tunnel is higher than that of the tibial tunnel, to provide objective evidence for choosing the appropriate diameter of interference screws. Two groups were enrolled: 30 normal volunteers and 9 patients with ACL rupture. Dual-energy X-ray absorptiometry was used to measure the BMD of the femoral and tibial tunnel regions of the volunteers' right knees, using a circular area covering the screw fixation region. The knees were also scanned by spiral computed tomography (CT), and the 3-dimensional reconstruction technique was used to determine the circular sections passing through the longitudinal axes of the femoral and tibial tunnels. Grayscale CT values of the cross-sectional areas were measured. Cylindrical cancellous bone blocks were removed from the femoral and tibial tunnels during ACL reconstruction in the patients, and their volumetric BMD was measured using a standardized immersion technique based on Archimedes' principle. As measured by dual-energy X-ray absorptiometry, the BMD of the femoral and tibial tunnel regions was 1.162 ± 0.034 g/cm² and 0.814 ± 0.038 g/cm², respectively, a significant difference between the femoral and tibial tunnel regions. For ACL reconstruction, the BMD of the femoral tunnel is higher than that of the tibial tunnel. This implies that a proportionally larger-diameter interference screw should be used for fixation in the proximal tibia than in the distal femur. Level IV, therapeutic case series. Copyright © 2012 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
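
    The immersion measurement rests on a one-line application of Archimedes' principle; the sketch below shows the arithmetic. The water density value and the use of sample mass (rather than ash mass) are assumptions, since the abstract does not give the protocol details.

        WATER_DENSITY = 0.998  # g/cm^3 near room temperature (assumed)

        def immersion_volume(mass_air_g, mass_submerged_g, rho_water=WATER_DENSITY):
            """Archimedes' principle: the apparent mass loss under water equals the
            mass of displaced water, so V = (m_air - m_submerged) / rho_water."""
            return (mass_air_g - mass_submerged_g) / rho_water

        def volumetric_bmd(mineral_mass_g, mass_air_g, mass_submerged_g):
            """Volumetric BMD in g/cm^3: mineral mass over immersion volume."""
            return mineral_mass_g / immersion_volume(mass_air_g, mass_submerged_g)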

  2. Reciprocity principle for scattered fields from discontinuities in waveguides.

    Science.gov (United States)

    Pau, Annamaria; Capecchi, Danilo; Vestroni, Fabrizio

    2015-01-01

    This study investigates the scattering of guided waves from a discontinuity exploiting the principle of reciprocity in elastodynamics, written in a form that applies to waveguides. The coefficients of reflection and transmission for an arbitrary mode can be derived as long as the principle of reciprocity is satisfied at the discontinuity. Two elastodynamic states are related by the reciprocity. One is the response of the waveguide in the presence of the discontinuity, with the scattered fields expressed as a superposition of wave modes. The other state is the response of the waveguide in the absence of the discontinuity oscillating according to an arbitrary mode. The semi-analytical finite element method is applied to derive the needed dispersion relation and wave mode shapes. An application to a solid cylinder with a symmetric double change of cross-section is presented. This model is assumed to be representative of a damaged rod. The coefficients of reflection and transmission of longitudinal waves are investigated for selected values of notch length and varying depth. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. The action uncertainty principle and quantum gravity

    Science.gov (United States)

    Mensky, Michael B.

    1992-02-01

    Results of the path-integral approach to the quantum theory of continuous measurements were formulated in a preceding paper in the form of an inequality of the type of the uncertainty principle. The new inequality was called the action uncertainty principle (AUP). It was shown that the AUP allows one to find, in a simple way, which outputs of a continuous measurement will occur with high probability. Here a simpler form of the AUP is formulated: δS ≳ ħ. When applied to quantum gravity, it leads in a very simple way to the Rosenfeld inequality for the measurability of the average curvature.
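
    In display form, the inequality quoted above and the selection rule the abstract attributes to it read as follows (a paraphrase of the abstract's statement, not a derivation):

        \delta S \gtrsim \hbar ,
        \qquad \text{i.e., an output } a(t) \text{ of a continuous measurement occurs with high probability when } \bigl|\delta S[a]\bigr| \lesssim \hbar .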

  4. Is the Precautionary Principle Really Incoherent?

    Science.gov (United States)

    Boyer-Kassem, Thomas

    2017-11-01

    The Precautionary Principle has been an increasingly important principle in international treaties since the 1980s. Through varying formulations, it states that when an activity can lead to a catastrophe for human health or the environment, measures should be taken to prevent it even if the cause-and-effect relationship is not fully established scientifically. The Precautionary Principle has been critically discussed from many sides. This article concentrates on a theoretical argument by Peterson (2006) according to which the Precautionary Principle is incoherent with other desiderata of rational decision making, and thus cannot be used as a decision rule that selects an action among several ones. I claim here that Peterson's argument fails to establish the incoherence of the Precautionary Principle, by attacking three of its premises. I argue (i) that Peterson's treatment of uncertainties lacks generality, (ii) that his Archimedian condition is problematic for incommensurability reasons, and (iii) that his explication of the Precautionary Principle is not adequate. This leads me to conjecture that the Precautionary Principle can be envisaged as a coherent decision rule, again. © 2017 Society for Risk Analysis.

  5. Extremum principles for irreversible processes

    International Nuclear Information System (INIS)

    Hillert, M.; Agren, J.

    2006-01-01

    Hamilton's extremum principle is a powerful mathematical tool in classical mechanics. Onsager's extremum principle may play a similar role in irreversible thermodynamics and may also become a valuable tool. His principle may formally be regarded as a principle of maximum rate of entropy production but does not have a clear physical interpretation. Prigogine's principle of minimum rate of entropy production has a physical interpretation when it applies, but is not strictly valid except for a very special case

  6. Principles of development of the industry of technogenic waste processing

    Directory of Open Access Journals (Sweden)

    Maria A. Bayeva

    2014-01-01

    Full Text Available Objective: to identify and substantiate the principles of development of the industry of technogenic waste processing. Methods: systemic analysis, synthesis, and the method of analogy. Results: based on an analysis of Russian and foreign experience in the field of waste management and environmental protection, the basic principles of developing activities for technogenic waste processing are formulated: the principle of legal regulation, the principle of efficient technologies, the principle of ecological safety, and the principle of economic support. The importance of each principle is substantiated by describing the situation in this area and identifying the main problems and the ways of solving them. Scientific novelty: the fundamental principles of development of the technogenic waste processing industry are revealed, and measures of state support are proposed. Practical value: the presented theoretical conclusions and proposals are aimed primarily at the theoretical and methodological substantiation and practical solution of modern problems in the development of the technogenic waste processing industry.

  7. Principles of geodynamics

    CERN Document Server

    Scheidegger, Adrian E

    1982-01-01

    Geodynamics is commonly thought to be one of the subjects which provide the basis for understanding the origin of the visible surface features of the Earth: the latter are usually assumed as having been built up by geodynamic forces originating inside the Earth ("endogenetic" processes) and then as having been degraded by geomorphological agents originating in the atmosphere and ocean ("exogenetic" agents). The modern view holds that the sequence of events is not as neat as it was once thought to be, and that, in effect, both geodynamic and geomorphological processes act simultaneously ("Principle of Antagonism"); however, the division of theoretical geology into the principles of geodynamics and those of theoretical geomorphology seems to be useful for didactic purposes. It has therefore been maintained in the present writer's works. This present treatise on geodynamics is the first part of the author's treatment of theoretical geology, the treatise on Theoretical Geomorphology (also published by the Sprin...

  8. Principles of systems science

    CERN Document Server

    Mobus, George E

    2015-01-01

    This pioneering text provides a comprehensive introduction to systems structure, function, and modeling as applied in all fields of science and engineering. Systems understanding is increasingly recognized as a key to a more holistic education and greater problem solving skills, and is also reflected in the trend toward interdisciplinary approaches to research on complex phenomena. The subject of systems science, as a basis for understanding the components and drivers of phenomena at all scales, should be viewed with the same importance as a traditional liberal arts education. Principles of Systems Science contains many graphs, illustrations, side bars, examples, and problems to enhance understanding. From basic principles of organization, complexity, abstract representations, and behavior (dynamics) to deeper aspects such as the relations between information, knowledge, computation, and system control, to higher order aspects such as auto-organization, emergence and evolution, the book provides an integrated...

  9. Common principles and multiculturalism.

    Science.gov (United States)

    Zahedi, Farzaneh; Larijani, Bagher

    2009-01-01

    Judgment on the rightness and wrongness of beliefs and behaviors is a main issue in bioethics. Over the centuries, great philosophers and ethicists have been discussing the suitable tools for determining which acts and practices are morally sound and which are not. The emergence of contemporary bioethics in the West has resulted in the misconception that absolute westernized principles would be appropriate tools for ethical decision making in different cultures. We discuss this issue by introducing a clinical case. Considering the various cultural beliefs around the world, though it is not logical to consider all of them ethically acceptable, we can agree on some general fundamental principles instead of going to the extremes of relativism and absolutism. Islamic teachings, according to the evidence presented in this paper, fall in with this idea.

  10. Principles of Mobile Communication

    CERN Document Server

    Stüber, Gordon L

    2012-01-01

    This mathematically rigorous overview of physical layer wireless communications is now in a third, fully revised and updated edition. Along with coverage of basic principles sufficient for novice students, the volume includes plenty of finer details that will satisfy the requirements of graduate students aiming to research the topic in depth. It also has a role as a handy reference for wireless engineers. The content stresses core principles that are applicable to a broad range of wireless standards. Beginning with a survey of the field that introduces an array of issues relevant to wireless communications and which traces the historical development of today’s accepted wireless standards, the book moves on to cover all the relevant discrete subjects, from radio propagation to error probability performance and cellular radio resource management. A valuable appendix provides a succinct and focused tutorial on probability and random processes, concepts widely used throughout the book. This new edition, revised...

  11. Principles of mathematical modeling

    CERN Document Server

    Dym, Clive

    2004-01-01

    Science and engineering students depend heavily on concepts of mathematical modeling. In an age where almost everything is done on a computer, author Clive Dym believes that students need to understand and "own" the underlying mathematics that computers are doing on their behalf. His goal for Principles of Mathematical Modeling, Second Edition, is to engage the student reader in developing a foundational understanding of the subject that will serve them well into their careers. The first half of the book begins with a clearly defined set of modeling principles, and then introduces a set of foundational tools including dimensional analysis, scaling techniques, and approximation and validation techniques. The second half demonstrates the latest applications for these tools to a broad variety of subjects, including exponential growth and decay in fields ranging from biology to economics, traffic flow, free and forced vibration of mechanical and other systems, and optimization problems in biology, structures, an...

  12. Principles of Stellar Interferometry

    CERN Document Server

    Glindemann, Andreas

    2011-01-01

    Over the last decade, stellar interferometry has developed from a specialist tool to a mainstream observing technique, attracting scientists whose research benefits from milliarcsecond angular resolution. Stellar interferometry has become part of the astronomer's toolbox, complementing single-telescope observations by providing unique capabilities that will advance astronomical research. This carefully written book is intended to provide a solid understanding of the principles of stellar interferometry to students starting an astronomical research project in this field or to develop instruments, and to astronomers using interferometry but who are not interferometrists per se. Illustrated by excellent drawings and calculated graphs, the imaging process in stellar interferometers is explained starting from first principles on light propagation and diffraction; wave propagation through turbulence is described in detail using Kolmogorov statistics; the impact of turbulence on the imaging process is discussed both f...

  13. Principles of Fourier analysis

    CERN Document Server

    Howell, Kenneth B

    2001-01-01

    Fourier analysis is one of the most useful and widely employed sets of tools for the engineer, the scientist, and the applied mathematician. As such, students and practitioners in these disciplines need a practical and mathematically solid introduction to its principles. They need straightforward verifications of its results and formulas, and they need clear indications of the limitations of those results and formulas. Principles of Fourier Analysis furnishes all this and more. It provides a comprehensive overview of the mathematical theory of Fourier analysis, including the development of Fourier series, "classical" Fourier transforms, generalized Fourier transforms and analysis, and the discrete theory. Much of the author's development is strikingly different from typical presentations. His approach to defining the classical Fourier transform results in a much cleaner, more coherent theory that leads naturally to a starting point for the generalized theory. He also introduces a new generalized theory based ...

  14. Principles of mobile communication

    CERN Document Server

    Stüber, Gordon L

    2017-01-01

    This mathematically rigorous overview of physical layer wireless communications is now in a 4th, fully revised and updated edition. The new edition features new content on 4G cellular systems, 5G cellular outlook, bandpass signals and systems, and polarization, among many other topics, in addition to a new chapters on channel assignment techniques. Along with coverage of fundamentals and basic principles sufficient for novice students, the volume includes finer details that satisfy the requirements of graduate students aiming to conduct in-depth research. The book begins with a survey of the field, introducing issues relevant to wireless communications. The book moves on to cover relevant discrete subjects, from radio propagation, to error probability performance, and cellular radio resource management. An appendix provides a tutorial on probability and random processes. The content stresses core principles that are applicable to a broad range of wireless standards. New examples are provided throughout the bo...

  15. Principles of photonics

    CERN Document Server

    Liu, Jia-Ming

    2016-01-01

    With this self-contained and comprehensive text, students will gain a detailed understanding of the fundamental concepts and major principles of photonics. Assuming only a basic background in optics, readers are guided through key topics such as the nature of optical fields, the properties of optical materials, and the principles of major photonic functions regarding the generation, propagation, coupling, interference, amplification, modulation, and detection of optical waves or signals. Numerous examples and problems are provided throughout to enhance understanding, and a solutions manual containing detailed solutions and explanations is available online for instructors. This is the ideal resource for electrical engineering and physics undergraduates taking introductory, single-semester or single-quarter courses in photonics, providing them with the knowledge and skills needed to progress to more advanced courses on photonic devices, systems and applications.

  17. Principles of (Behavioral) Economics

    OpenAIRE

    David Laibson; John A. List

    2015-01-01

    Behavioral economics has become an important and integrated component of modern economics. Behavioral economists embrace the core principles of economics—optimization and equilibrium—and seek to develop and extend those ideas to make them more empirically accurate. Behavioral models assume that economic actors try to pick the best feasible option and those actors sometimes make mistakes. Behavioral ideas should be incorporated throughout the first-year undergraduate course. Instructors should...

  18. Principles of electrical safety

    CERN Document Server

    Sutherland, Peter E

    2015-01-01

    Principles of Electrical Safety discusses current issues in electrical safety, accompanied by a series of practical applications that can be used by practicing professionals, graduate students, and researchers. It provides extensive introductions to important topics in electrical safety, gives a comprehensive overview of inductance, resistance, and capacitance as applied to the human body, and serves as a preparatory guide for today's practicing engineers.

  19. The uncertainty principle

    International Nuclear Information System (INIS)

    Martens, Hans.

    1991-01-01

    The subject of this thesis is the uncertainty principle (UP). The UP is one of the most characteristic points of difference between quantum and classical mechanics. The starting point of this thesis is the work of Niels Bohr, which is both discussed and analyzed. For the discussion of the different aspects of the UP, the formalism of Davies and Ludwig is used instead of the more commonly used formalism of von Neumann and Dirac. (author). 214 refs.; 23 figs

  20. PREFERENCE, PRINCIPLE AND PRACTICE

    DEFF Research Database (Denmark)

    Skovsgaard, Morten; Bro, Peter

    2011-01-01

    Legitimacy has become a central issue in journalism, since the understanding of what journalism is and who journalists are has been challenged by developments both within and outside the newsrooms. Nonetheless, little scholarly work has been conducted to aid conceptual clarification as to how jou... distinct, but interconnected categories: preference, principle, and practice. Through this framework, historical attempts to justify journalism and journalists are described and discussed in the light of the present challenges for the profession.

  1. Advertisement without Ethical Principles?

    OpenAIRE

    Wojciech Słomski

    2007-01-01

    The article replies to the question whether advertisement can exist without ethical principles, or whether ethics should be the basis of advertisement. One can say that the ethical assessment of an advertisement does not depend exclusively on the content and form of the advertising message, but also on the recipient's consciousness. The advertisement appeals to the emotions more than to the intellect, thus restricting the area of conscious choice based on rational premises, so it is morally bad. It...

  2. General Principles Governing Liability

    International Nuclear Information System (INIS)

    Reyners, P.

    1998-01-01

    This paper contains a brief review of the basic principles which govern the special regime of liability and compensation for nuclear damage originating on nuclear installations, in particular the strict and exclusive liability of the nuclear operator, the provision of a financial security to cover this liability and the limits applicable both in amount and in time. The paper also reviews the most important international agreements currently in force which constitute the foundation of this special regime. (author)

  3. The Principle of Proportionality

    DEFF Research Database (Denmark)

    Bennedsen, Morten; Meisner Nielsen, Kasper

    2005-01-01

    Recent policy initiatives within the harmonization of European company laws have promoted a so-called "principle of proportionality" through proposals that regulate mechanisms opposing a proportional distribution of ownership and control. We scrutinize the foundation for these initiatives in relation to the process of harmonization of the European capital markets. JEL classifications: G30, G32, G34 and G38. Keywords: Ownership Structure, Dual Class Shares, Pyramids, EU company laws.

  5. The Maquet principle

    International Nuclear Information System (INIS)

    Levine, R.B.; Stassi, J.; Karasick, D.

    1985-01-01

    Anterior displacement of the tibial tubercle is a well-accepted orthopedic procedure in the treatment of certain patellofemoral disorders. The radiologic appearance of surgical procedures utilizing the Maquet principle has not been described in the radiologic literature. Familiarity with the physiologic and biomechanical basis for the procedure and its postoperative appearance is necessary for appropriate roentgenographic evaluation and the radiographic recognition of complications. (orig.)

  6. Principles of lake sedimentology

    International Nuclear Information System (INIS)

    Janasson, L.

    1983-01-01

    This book presents a comprehensive outline of the basic sedimentological principles for lakes, and focuses on environmental aspects and matters related to lake management and control: on lake ecology rather than lake geology. This is a guide for those who plan, perform and evaluate lake sedimentological investigations. Contents abridged: Lake types and sediment types. Sedimentation in lakes and water dynamics. Lake bottom dynamics. Sediment dynamics and sediment age. Sediments in aquatic pollution control programmes. Subject index

  7. Principles of artificial intelligence

    CERN Document Server

    Nilsson, Nils J

    1980-01-01

    A classic introduction to artificial intelligence intended to bridge the gap between theory and practice, Principles of Artificial Intelligence describes fundamental AI ideas that underlie applications such as natural language processing, automatic programming, robotics, machine vision, automatic theorem proving, and intelligent data retrieval. Rather than focusing on the subject matter of the applications, the book is organized around general computational concepts involving the kinds of data structures used, the types of operations performed on the data structures, and the properties of th

  8. Economic uncertainty principle?

    OpenAIRE

    Alexander Harin

    2006-01-01

    The economic principle of (hidden) uncertainty is presented. New probability formulas are offered. Examples of solutions of three types of fundamental problems are reviewed.

  9. Recent status of numerical simulation studies for zeolites as highly-selective cesium adsorbents by first-principles calculation and Monte Carlo method

    International Nuclear Information System (INIS)

    Nakamura, Hiroki; Okumura, Masahiko; Machida, Masahiko

    2015-01-01

    The authors examined, by first-principles calculation, the mechanism by which mordenite, a species of zeolite, shows high adsorption selectivity for Cs, with a focus on the pores serving as adsorption sites. To increase the adsorption selectivity for Cs, the following three conditions for mordenite were proposed: (1) many pores with a radius of about 3 Å, (2) a relatively small Al-to-Si ratio, and (3) a uniform distribution of Al atoms around the pores that adsorb Cs. The superposition of the interactions felt by a positive ion embraced by the whole pore was found to be important, which demonstrated the value of computational science. The authors also succeeded in reproducing, with the Monte Carlo method, ion-exchange isotherms at the thermodynamic level, quantities that serve as engineering metrics once actually measured. The method was able to reproduce the differences in properties shown by different zeolites, and to explain changes in adsorption performance depending on the Al-to-Si ratio, until now known only empirically, by relating them to microscopic factors. Based on these results, this paper discusses how far material development can be led by computational science, and what kinds of research and development will be required in the future. (A.O)
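
    A heavily simplified Metropolis sketch of the kind of ion-exchange sampling alluded to above: Cs+/Na+ swaps between framework sites and a solution reservoir. The site energies, temperature and solution fractions are placeholder values, not results from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        E_SITE = {"Cs": -0.60, "Na": -0.35}  # placeholder binding energies, eV
        kT = 0.025                           # ~room temperature, eV

        def metropolis_exchange(n_sites=200, x_cs=0.5, n_steps=100_000):
            """Equilibrium Cs fraction on the sites for a solution with Cs mole
            fraction x_cs (0 < x_cs < 1); the log term is the ideal-solution
            chemical-potential contribution to the exchange cost."""
            occ = ["Na"] * n_sites
            x = {"Cs": x_cs, "Na": 1.0 - x_cs}
            for _ in range(n_steps):
                i = rng.integers(n_sites)
                old, new = occ[i], ("Na" if occ[i] == "Cs" else "Cs")
                d_cost = (E_SITE[new] - E_SITE[old]) - kT * np.log(x[new] / x[old])
                if d_cost <= 0 or rng.random() < np.exp(-d_cost / kT):
                    occ[i] = new
            return occ.count("Cs") / n_sites

        # Sweeping x_cs from 0 to 1 traces out a toy ion-exchange isotherm.
        print(metropolis_exchange(x_cs=0.1))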

  10. Principled Missing Data Treatments.

    Science.gov (United States)

    Lang, Kyle M; Little, Todd D

    2018-04-01

    We review a number of issues regarding missing data treatments for intervention and prevention researchers. Many of the common missing data practices in prevention research are still, unfortunately, ill-advised (e.g., use of listwise and pairwise deletion, insufficient use of auxiliary variables). Our goal is to promote better practice in the handling of missing data. We review the current state of missing data methodology and recent missing data reporting in prevention research. We describe antiquated, ad hoc missing data treatments and discuss their limitations. We discuss two modern, principled missing data treatments: multiple imputation and full information maximum likelihood, and we offer practical tips on how to best employ these methods in prevention research. The principled missing data treatments that we discuss are couched in terms of how they improve causal and statistical inference in the prevention sciences. Our recommendations are firmly grounded in missing data theory and well-validated statistical principles for handling the missing data issues that are ubiquitous in biosocial and prevention research. We augment our broad survey of missing data analysis with references to more exhaustive resources.
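
    As a minimal sketch of one of the two treatments named above, multiple imputation, the following uses scikit-learn's IterativeImputer with posterior sampling and pools point estimates across imputations; FIML, by contrast, is handled inside an SEM or mixed-model package rather than by code like this. X is assumed to be a numeric array with np.nan for missing entries, and only the between-imputation variance is shown; Rubin's rules also add the average within-imputation variance.

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        def multiply_impute_means(X, m=20):
            """Create m imputed data sets and pool the column means: the pooled
            point estimate is the average of the per-imputation estimates."""
            estimates = []
            for seed in range(m):
                imp = IterativeImputer(sample_posterior=True, random_state=seed)
                estimates.append(imp.fit_transform(X).mean(axis=0))
            estimates = np.asarray(estimates)
            pooled = estimates.mean(axis=0)
            between_var = estimates.var(axis=0, ddof=1)  # between-imputation variance
            return pooled, between_var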

  11. Quantum retrodiction and causality principle

    International Nuclear Information System (INIS)

    Shirokov, M.I.

    1994-01-01

    Quantum mechanics is factually a predictive science. But quantum retrodiction may also be needed, e.g., for the experimental verification of the validity of the Schroedinger equation for the wave function in the past if the present state is given. It is shown that in the retrodictive analog of the prediction the measurement must be replaced by another physical process called the retromeasurement. In this process, the reduction of a state vector into eigenvectors of a measured observable must proceed in the opposite direction of time as compared to the usual reduction. Examples of such processes are unknown. Moreover, they are shown to be forbidden by the causality principle stating that the later event cannot influence the earlier one. So quantum retrodiction seems to be unrealizable. It is demonstrated that the approach to the retrodiction given by S.Watanabe and F.Belinfante must be considered as an unsatisfactory ersatz of retrodicting. 20 refs., 3 figs

  12. Neural Network Molecule: a Solution of the Inverse Biometry Problem through Software Support of Quantum Superposition on Outputs of the Network of Artificial Neurons

    Directory of Open Access Journals (Sweden)

    Vladimir I. Volchikhin

    2017-12-01

    Full Text Available Introduction: The aim of the study is to accelerate the solution of the neural-network biometrics inverse problem on an ordinary desktop computer. Materials and Methods: To speed up the calculations, the artificial neural network is placed into a dynamic mode of "jittering" of the states of all 256 output bits. At the same time, the enormous number of output states of the neural network is logarithmically folded by transitioning to the space of Hamming distances between the code of the image "Own" and the codes of the images "Alien". From the database of "Alien" images, the 2.5 % most similar images are selected. In the next generation, the 97.5 % of discarded images are restored with the procedures of GOST R 52633.2-2010 by crossing parent images and obtaining descendant images from them. Results: Over a period of about 10 minutes, 60 generations of directed search for the solution of the inverse problem can be realized, which allows inverting matrices of neural network functionals of dimension 416 inputs to 256 outputs with restoration of up to 97 % of the information on the unknown biometric parameters of the image "Own". Discussion and Conclusions: Supporting a 256-qubit quantum superposition for 10 minutes of computer time allows an ordinary computer to bypass a practical infinity of analyzed states, 50^50 (50 to the power of 50) times more than the same computer could process with ordinary calculations. An increase in the length of the supported quantum superposition by 40 qubits is equivalent to increasing the processor clock speed by about a billion times. It is for this reason that it is more profitable to increase the number of quantum superposition states supported by the software emulator than to create a more powerful processor.
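
    The directed search described in the abstract reduces to a simple evolutionary loop. The sketch below keeps the closest 2.5 % of 256-bit codes by Hamming distance and refills the population by bitwise crossing of random parents; the crossing is a simplified stand-in for the GOST R 52633.2-2010 procedures, and distances are computed against a known target code only for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        N_BITS = 256

        def hamming(a, b):
            return int(np.count_nonzero(a != b))

        def next_generation(pop, own_code, keep_frac=0.025):
            """One generation: keep the closest 2.5 % of codes, then refill by
            crossing random parent pairs with a random bit mask."""
            pop = sorted(pop, key=lambda c: hamming(c, own_code))
            parents = pop[: max(2, int(len(pop) * keep_frac))]
            children = []
            while len(parents) + len(children) < len(pop):
                p, q = rng.choice(len(parents), 2, replace=False)
                mask = rng.integers(0, 2, N_BITS, dtype=np.uint8).astype(bool)
                children.append(np.where(mask, parents[p], parents[q]))
            return parents + children

        # 60 generations of directed search toward a 256-bit target code.
        own = rng.integers(0, 2, N_BITS, dtype=np.uint8)
        pop = [rng.integers(0, 2, N_BITS, dtype=np.uint8) for _ in range(1000)]
        for _ in range(60):
            pop = next_generation(pop, own)
        print(min(hamming(c, own) for c in pop))  # best distance after the search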

  13. Metaphysics of the principle of least action

    Science.gov (United States)

    Terekhovich, Vladislav

    2018-05-01

    Despite the importance of the variational principles of physics, there have been relatively few attempts to consider them for a realistic framework. In addition to the old teleological question, this paper continues the recent discussion regarding the modal involvement of the principle of least action and its relations with the Humean view of the laws of nature. The reality of possible paths in the principle of least action is examined from the perspectives of the contemporary metaphysics of modality and Leibniz's concept of essences or possibles striving for existence. I elaborate a modal interpretation of the principle of least action that replaces a classical representation of a system's motion along a single history in the actual modality by simultaneous motions along an infinite set of all possible histories in the possible modality. This model is based on an intuition that deep ontological connections exist between the possible paths in the principle of least action and possible quantum histories in the Feynman path integral. I interpret the action as a physical measure of the essence of every possible history. Therefore only one actual history has the highest degree of the essence and minimal action. To address the issue of necessity, I assume that the principle of least action has a general physical necessity and lies between the laws of motion with a limited physical necessity and certain laws with a metaphysical necessity.

  14. Adaptive phase measurements in linear optical quantum computation

    International Nuclear Information System (INIS)

    Ralph, T C; Lund, A P; Wiseman, H M

    2005-01-01

    Photon counting induces an effective non-linear optical phase shift in certain states derived by linear optics from single photons. Although this non-linearity is non-deterministic, it is sufficient in principle to allow scalable linear optics quantum computation (LOQC). The most obvious way to encode a qubit optically is as a superposition of the vacuum and a single photon in one mode, so-called 'single-rail' logic. Until now this approach was thought to be prohibitively expensive (in resources) compared to 'dual-rail' logic, where a qubit is stored by a photon across two modes. Here we attack this problem with real-time feedback control, which can realize a quantum-limited phase measurement on a single mode, as has been recently demonstrated experimentally. We show that with this added measurement resource, the resource requirements for single-rail LOQC are not substantially different from those of dual-rail LOQC. In particular, with adaptive phase measurements an arbitrary qubit state α|0> + β|1> can be prepared deterministically.
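
    For reference, the two encodings compared in the abstract are, in standard notation (added here for context, not taken verbatim from the paper):

        % Single-rail: the qubit lives in the photon number of one mode
        |\psi\rangle_{\text{single}} = \alpha\,|0\rangle + \beta\,|1\rangle
        % Dual-rail: one photon shared between two modes a and b
        |\psi\rangle_{\text{dual}} = \alpha\,|0\rangle_a|1\rangle_b + \beta\,|1\rangle_a|0\rangle_b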

  15. Efficiency principles of consulting entrepreneurship

    OpenAIRE

    Moroz Yustina S.; Drozdov Igor N.

    2015-01-01

    The article reviews the primary goals and problems of consulting entrepreneurship. The principles defining efficiency of entrepreneurship in the field of consulting are generalized. The special attention is given to the importance of ethical principles of conducting consulting entrepreneurship activity.

  16. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  17. The Playtime Principle

    DEFF Research Database (Denmark)

    Sifa, Rafet; Bauckhage, Christian; Drachen, Anders

    2014-01-01

    be derived from this large-scale analysis, notably that playtime as a function of time, across the thousands of games in the dataset, and irrespective of local differences in the playtime frequency distribution, can be modeled using the same model: the Weibull distribution. This suggests that there are fundamental properties governing player engagement as it evolves over time, which we here refer to as the Playtime Principle. Additionally, the analysis shows that there are distinct clusters, or archetypes, in the playtime frequency distributions of the investigated games. These archetypal groups correspond...
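
    The Weibull claim is easy to test on playtime telemetry; a minimal sketch with SciPy, using synthetic data in place of the (proprietary) game data set:

        import numpy as np
        from scipy import stats

        # Synthetic playtimes (hours) standing in for one game's telemetry.
        playtimes = stats.weibull_min.rvs(c=0.8, scale=20.0, size=5000, random_state=7)

        # Fit shape and scale with the location pinned at zero, as usual for durations.
        shape, loc, scale = stats.weibull_min.fit(playtimes, floc=0)
        print(f"shape={shape:.2f} scale={scale:.1f}")  # shape < 1 => heavy early drop-off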

  18. Complex Correspondence Principle

    International Nuclear Information System (INIS)

    Bender, Carl M.; Meisinger, Peter N.; Hook, Daniel W.; Wang Qinghai

    2010-01-01

    Quantum mechanics and classical mechanics are distinctly different theories, but the correspondence principle states that quantum particles behave classically in the limit of high quantum number. In recent years much research has been done on extending both quantum and classical mechanics into the complex domain. These complex extensions continue to exhibit a correspondence, and this correspondence becomes more pronounced in the complex domain. The association between complex quantum mechanics and complex classical mechanics is subtle and demonstrating this relationship requires the use of asymptotics beyond all orders.

  19. Principles of chemical kinetics

    CERN Document Server

    House, James E

    2007-01-01

    James House's revised Principles of Chemical Kinetics provides a clear and logical description of chemical kinetics in a manner unlike any other book of its kind. Clearly written with detailed derivations, the text allows students to move rapidly from theoretical concepts of rates of reaction to concrete applications. Unlike other texts, House presents a balanced treatment of kinetic reactions in gas, solution, and solid states. The entire text has been revised and includes many new sections and an additional chapter on applications of kinetics. The topics covered include quantitative rela

  20. RFID design principles

    CERN Document Server

    Lehpamer, Harvey

    2012-01-01

    This revised edition of the Artech House bestseller, RFID Design Principles, serves as an up-to-date and comprehensive introduction to the subject. The second edition features numerous updates and brand new and expanded material on emerging topics such as the medical applications of RFID and new ethical challenges in the field. This practical book offers you a detailed understanding of RFID design essentials, key applications, and important management issues. The book explores the role of RFID technology in supply chain management, intelligent building design, transportation systems, military