WorldWideScience

Sample records for maximally general correct

  1. Optimal quantum error correcting codes from absolutely maximally entangled states

    Science.gov (United States)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multipartite generalizations of the bipartite maximally entangled states, with the property that all reduced states of at most half the system size are maximally mixed. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics, in toy models realizing the AdS/CFT correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed-form expressions for AME states of n parties with local dimension …
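
    For orientation, the standard definition used in this literature: an AME state of n parties with local dimension d is a pure state all of whose reductions to at most half the parties are maximally mixed,

```latex
% AME(n,d): every reduced state on |S| <= floor(n/2) parties is maximally mixed
\[
  |\psi\rangle \in (\mathbb{C}^d)^{\otimes n}, \qquad
  \operatorname{Tr}_{S^c} |\psi\rangle\langle\psi| = \frac{\mathbb{1}}{d^{|S|}}
  \quad \text{for every subset } S \text{ with } |S| \le \lfloor n/2 \rfloor .
\]
```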

  2. Maximally Localized States and Quantum Corrections of Black Hole Thermodynamics in the Framework of a New Generalized Uncertainty Principle

    International Nuclear Information System (INIS)

    Zhang, Shao-Jun; Miao, Yan-Gang; Zhao, Ying-Jie

    2015-01-01

    As a generalized uncertainty principle (GUP) leads to the effects of a minimal length of the order of the Planck scale and of UV/IR mixing, some significant physical concepts and quantities are modified or corrected correspondingly. On the one hand, we derive the maximally localized states, i.e. the physical states displaying the minimal length uncertainty associated with a new GUP proposed in our previous work. On the other hand, in the framework of this new GUP we calculate quantum corrections to the thermodynamic quantities of the Schwarzschild black hole, such as the Hawking temperature, the entropy, and the heat capacity, and give a remnant mass of the black hole at the end of the evaporation process. Moreover, we compare our results with those obtained in the frameworks of several other GUPs. In particular, we observe a significant difference between the situations with and without the consideration of the UV/IR mixing effect in the quantum corrections to the evaporation rate and the decay time: the decay time can be greatly prolonged in the former case, which implies that the quantum correction from the UV/IR mixing effect may exert a radical rather than a tiny influence on the Hawking radiation.
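
    The record does not reproduce the authors' new GUP; for orientation, the most common one-parameter GUP (the Kempf–Mangano–Mann form, not necessarily the one proposed in the paper) already exhibits the minimal-length effect mentioned above:

```latex
% Minimal length from a quadratic-in-momentum GUP (illustrative, standard form)
\[
  \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^2\right)
  \quad\Longrightarrow\quad
  (\Delta x)_{\min} = \hbar\sqrt{\beta}.
\]
```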

  3. A method of bias correction for maximal reliability with dichotomous measures.

    Science.gov (United States)

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
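
    The bias-correction idea can be illustrated with a generic bootstrap sketch. This illustrates bootstrap bias correction in general, not the authors' analytic correction for the maximal reliability estimator; the estimator and data below are invented:

```python
import random
import statistics

def bootstrap_bias_correct(sample, estimator, n_boot=2000, seed=0):
    """Generic bootstrap bias correction: t_corrected = t - (mean of bootstrap t* - t)."""
    rng = random.Random(seed)
    t = estimator(sample)
    boot_stats = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in sample]
        boot_stats.append(estimator(resample))
    bias_estimate = statistics.fmean(boot_stats) - t
    return t - bias_estimate  # equivalently 2*t - mean(boot_stats)

# Example: (sample mean)^2 is an upward-biased estimator of (population mean)^2.
rng = random.Random(1)
data = [rng.gauss(0.0, 1.0) for _ in range(50)]
naive = statistics.fmean(data) ** 2
corrected = bootstrap_bias_correct(data, lambda s: statistics.fmean(s) ** 2)
```

    The positive bootstrap bias estimate (roughly the variance of the sample mean) is subtracted from the naive plug-in estimate.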

  4. On the way towards a generalized entropy maximization procedure

    International Nuclear Information System (INIS)

    Bagci, G. Baris; Tirnakli, Ugur

    2009-01-01

    We propose a generalized entropy maximization procedure, which takes into account the generalized averaging procedures and information gain definitions underlying the generalized entropies. This novel generalized procedure is then applied to Renyi and Tsallis entropies. The generalized entropy maximization procedure for Renyi entropies results in the exponential stationary distribution asymptotically for q ∈ (0, 1], in contrast to the inverse power-law stationary distribution obtained through the ordinary entropy maximization procedure. Another result of the generalized entropy maximization procedure is that one can naturally obtain all the possible stationary distributions associated with the Tsallis entropies by employing either ordinary or q-generalized Fourier transforms in the averaging procedure.
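
    For reference, the two entropy families involved are defined, for a probability distribution {p_i}, by the standard expressions (both recover the Shannon entropy as q → 1):

```latex
\[
  S_q^{\text{R\'enyi}} = \frac{1}{1-q}\,\ln \sum_i p_i^{\,q},
  \qquad
  S_q^{\text{Tsallis}} = \frac{1}{q-1}\left(1 - \sum_i p_i^{\,q}\right).
\]
```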

  5. Generalized Yosida Approximations Based on Relatively A-Maximal m-Relaxed Monotonicity Frameworks

    Directory of Open Access Journals (Sweden)

    Heng-you Lan

    2013-01-01

    We introduce and study a new notion of relatively A-maximal m-relaxed monotonicity framework and discuss some properties of a new class of generalized relatively resolvent operators associated with relatively A-maximal m-relaxed monotone operators, together with the new generalized Yosida approximations based on this framework. Furthermore, we give some remarks to show that the theory of the new generalized relatively resolvent operator and the Yosida approximations associated with relatively A-maximal m-relaxed monotone operators generalizes most of the existing notions of (relatively) maximal monotone mappings in Hilbert as well as Banach spaces and can be applied to study variational inclusion problems and first-order evolution equations as well as evolution inclusions.

  6. Maximally Generalized Yang-Mills Model and Dynamical Breaking of Gauge Symmetry

    International Nuclear Information System (INIS)

    Wang Dianfu; Song Heshan

    2006-01-01

    A maximally generalized Yang-Mills model, which contains, besides the vector part V μ , also an axial-vector part A μ , a scalar part S, a pseudoscalar part P, and a tensor part T μν , is constructed and the dynamical breaking of gauge symmetry in the model is also discussed. It is shown, in terms of the Nambu-Jona-Lasinio mechanism, that the gauge symmetry breaking can be realized dynamically in the maximally generalized Yang-Mills model. The combination of the maximally generalized Yang-Mills model and the NJL mechanism provides a way to overcome the difficulties related to the Higgs field and the Higgs mechanism in the usual spontaneous symmetry breaking theory.

  7. Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing

    International Nuclear Information System (INIS)

    King, Stephen F.; Zhang, Jue; Zhou, Shun

    2016-01-01

    The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix are successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos, both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example, we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, θ₂₃ = 45° ± 1°, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.

  8. Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing

    Energy Technology Data Exchange (ETDEWEB)

    King, Stephen F. [School of Physics and Astronomy, University of Southampton,SO17 1BJ Southampton (United Kingdom); Zhang, Jue [Center for High Energy Physics, Peking University,Beijing 100871 (China); Zhou, Shun [Center for High Energy Physics, Peking University,Beijing 100871 (China); Institute of High Energy Physics, Chinese Academy of Sciences,Beijing 100049 (China)

    2016-12-06

    The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix are successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos, both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example, we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, θ₂₃ = 45° ± 1°, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.

  9. Solutions to the maximal spacelike hypersurface equation in generalized Robertson-Walker spacetimes

    Directory of Open Access Journals (Sweden)

    Henrique F. de Lima

    2018-03-01

    We apply some generalized maximum principles to establish uniqueness and nonexistence results concerning maximal spacelike hypersurfaces immersed in a generalized Robertson-Walker (GRW) spacetime, which is supposed to obey the so-called timelike convergence condition (TCC). As an application, we study the uniqueness and nonexistence of entire solutions of a suitable maximal spacelike hypersurface equation in GRW spacetimes obeying the TCC.

  10. Maximal hypersurfaces and foliations of constant mean curvature in general relativity

    International Nuclear Information System (INIS)

    Marsden, J.E.; Tipler, F.J.; Texas Univ., Austin

    1980-01-01

    We prove theorems on existence, uniqueness and smoothness of maximal and constant mean curvature compact spacelike hypersurfaces in globally hyperbolic spacetimes. The uniqueness theorem for maximal hypersurfaces of Brill and Flaherty, which assumed matter everywhere, is extended to spacetimes that are vacuum and non-flat or that satisfy a generic-type condition. In this connection we show that under general hypotheses, a spatially closed universe with a maximal hypersurface must be a Wheeler universe, i.e. be closed in time as well. The existence of Lipschitz achronal maximal volume hypersurfaces under the hypothesis that candidate hypersurfaces are bounded away from the singularity is proved. This hypothesis is shown to be valid in two cases of interest: when the singularities are of strong curvature type, and when the singularity is a single ideal point. Some properties of these maximal volume hypersurfaces and difficulties with Avez' original arguments are discussed. The difficulties involve the possibility that the maximal volume hypersurface can be null on certain portions; we present an incomplete argument which suggests that these hypersurfaces are always smooth, but prove that an a priori bound on the second fundamental form does imply smoothness. An extension of the perturbation theorem of Choquet-Bruhat, Fischer and Marsden is given, and conditions under which local foliations by constant mean curvature hypersurfaces can be extended to global ones are obtained. (orig.)

  11. General spectral flow formula for fixed maximal domain

    DEFF Research Database (Denmark)

    Booss-Bavnbek, Bernhelm; Zhu, Chaofeng

    2005-01-01

    We express the spectral flow of the resulting continuous family of (unbounded) self-adjoint Fredholm operators in terms of the Maslov index of two related curves of Lagrangian spaces. One curve is given by the varying domains, the other by the Cauchy data spaces. We provide rigorous definitions of the underlying concepts of spectral theory and symplectic analysis and give a full (and surprisingly short) proof of our General Spectral Flow Formula for the case of fixed maximal domain. As a side result, we establish local stability of weak inner unique continuation property (UCP) and explain its role for parameter-dependent spectral theory.

  12. The Effects of Minimal Length, Maximal Momentum, and Minimal Momentum in Entropic Force

    Directory of Open Access Journals (Sweden)

    Zhong-Wen Feng

    2016-01-01

    The modified entropic force law is studied by using a new kind of generalized uncertainty principle which contains a minimal length, a minimal momentum, and a maximal momentum. Firstly, the quantum corrections to the thermodynamics of a black hole are investigated. Then, according to Verlinde's theory, the generalized uncertainty principle (GUP) corrected entropic force is obtained. The result shows that the GUP-corrected entropic force is related not only to the properties of the black holes but also to the Planck length and the dimensionless constants α₀ and β₀. Moreover, based on the GUP-corrected entropic force, we also derive the modified Einstein field equation (EFE) and the modified Friedmann equation.

  13. An O(n²) maximal planarization algorithm based on PQ-trees

    NARCIS (Netherlands)

    Kant, G.

    1992-01-01

    In this paper we investigate the problem of how to delete a number of edges from a nonplanar graph G such that the resulting graph G′ is maximal planar, i.e., such that we cannot add an edge e ∈ G − G′ to G′ without destroying planarity. Actually, our algorithm is a corrected and more generalized …

  14. Equivalence of norms of Riesz potential and fractional maximal function in generalized Morrey spaces

    Czech Academy of Sciences Publication Activity Database

    Gogatishvili, Amiran; Mustafayev, R.Ch.

    2012-01-01

    Vol. 63, No. 1 (2012), pp. 11-28. ISSN 0010-0757. R&D Projects: GA ČR GA201/08/0383. Institutional research plan: CEZ:AV0Z10190503. Keywords: generalized Morrey spaces; Riesz potential; fractional maximal operator. Subject RIV: BA - General Mathematics. Impact factor: 0.786, year: 2012. http://www.springerlink.com/content/w71502055j266878/

  15. Automated general temperature correction method for dielectric soil moisture sensors

    Science.gov (United States)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

    An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks make extensive use of highly temperature-sensitive dielectric sensors because of their low cost, ease of use and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective in soil moisture monitoring networks with different sensor setups and in networks that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors which can be used regardless of sensor type, climatic conditions and soil type, and without rainfall data. In this work an automated general temperature correction method was developed by adapting previously developed temperature correction algorithms, based on time domain reflectometry (TDR) measurements, to ThetaProbe ML2X, Stevens Hydra Probe II and Decagon Devices EC-TM sensor measurements. The procedure for removing rainy-day effects from SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements even without on-site rainfall data. Furthermore, it has been found that the actual daily average of SWC has been changed due to temperature effects of dielectric sensors with a …
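
    As a concrete (and deliberately simplistic) illustration of what a temperature correction does, a linear adjustment toward a reference temperature can be sketched as follows. The linear form and the sensitivity `k` are assumptions for illustration only, not the calibration-free algorithm developed in the study:

```python
def correct_swc(swc_raw, temp_c, t_ref=25.0, k=0.002):
    """Hypothetical linear temperature correction for dielectric SWC readings.

    swc_raw : measured soil water content values (m^3/m^3)
    temp_c  : soil temperature at each measurement (degrees C)
    t_ref   : reference temperature the readings are normalized to
    k       : assumed sensor temperature sensitivity (m^3/m^3 per degree C)
    """
    return [s + k * (t_ref - t) for s, t in zip(swc_raw, temp_c)]

# A reading taken at the reference temperature is unchanged; warmer-than-
# reference readings are adjusted downward (for this sign convention of k).
corrected = correct_swc([0.30, 0.30], [25.0, 35.0])
```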

  16. General spectral flow formula for fixed maximal domain

    DEFF Research Database (Denmark)

    Booss-Bavnbek, Bernhelm; Zhu, Chaofeng

    2005-01-01

    We consider a continuous curve of linear elliptic formally self-adjoint differential operators of first order with smooth coefficients over a compact Riemannian manifold with boundary together with a continuous curve of global elliptic boundary value problems. We express the spectral flow of the resulting continuous family of (unbounded) self-adjoint Fredholm operators in terms of the Maslov index of two related curves of Lagrangian spaces. One curve is given by the varying domains, the other by the Cauchy data spaces. We provide rigorous definitions of the underlying concepts of spectral theory and symplectic analysis and give a full (and surprisingly short) proof of our General Spectral Flow Formula for the case of fixed maximal domain. As a side result, we establish local stability of weak inner unique continuation property (UCP) and explain its role for parameter-dependent spectral theory.

  17. Maximal Abelian gauge and a generalized BRST transformation

    Directory of Open Access Journals (Sweden)

    Shinichi Deguchi

    2016-05-01

    We apply a generalized Becchi–Rouet–Stora–Tyutin (BRST) formulation to establish a connection between the gauge-fixed SU(2) Yang–Mills (YM) theories formulated in the Lorenz gauge and in the Maximal Abelian (MA) gauge. It is shown that the generating functional corresponding to the Faddeev–Popov (FP) effective action in the MA gauge can be obtained from that in the Lorenz gauge by carrying out an appropriate finite and field-dependent BRST (FFBRST) transformation. In this procedure, the FP effective action in the MA gauge is found from that in the Lorenz gauge by incorporating the contribution of the non-trivial Jacobian due to the FFBRST transformation of the path-integral measure. The present FFBRST formulation might be useful to see how Abelian dominance in the MA gauge is realized in the Lorenz gauge.

  18. Analysis of elliptically polarized maximally entangled states for bell inequality tests

    Science.gov (United States)

    Martin, A.; Smirr, J.-L.; Kaiser, F.; Diamanti, E.; Issautier, A.; Alibart, O.; Frey, R.; Zaquine, I.; Tanzilli, S.

    2012-06-01

    When elliptically polarized maximally entangled states are considered, i.e., states having a non-random phase factor between the two bipartite polarization components, the standard settings used for optimal violation of Bell inequalities are no longer appropriate. One way to retrieve the maximal amount of violation is to compensate for this phase while keeping the standard Bell inequality analysis settings. We propose in this paper a general theoretical approach that allows determining and adjusting the phase of elliptically polarized maximally entangled states in order to optimize the violation of Bell inequalities. The formalism is also applied to several suggested experimental phase compensation schemes. In order to emphasize the simplicity and relevance of our approach, we also describe an experimental implementation using a standard Soleil-Babinet phase compensator. This device is employed to correct the phase that appears in the maximally entangled state generated from a type-II nonlinear photon-pair source after the photons are created and distributed over fiber channels.

  19. AUC-Maximizing Ensembles through Metalearning.

    Science.gov (United States)

    LeDell, Erin; van der Laan, Mark J; Petersen, Maya

    2016-05-01

    Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
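
    The metalearning step can be illustrated with a minimal sketch: a rank-based AUC and a grid search for the convex combination of two base learners' scores that maximizes it. The actual Super Learner implementation uses cross-validation and a nonlinear optimizer; the data and function names here are invented:

```python
def auc(labels, scores):
    """Rank-based AUC: probability a positive outscores a negative (ties count 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

def best_convex_weight(labels, s1, s2, grid=101):
    """Grid-search w in [0, 1] maximizing the AUC of the blend w*s1 + (1-w)*s2."""
    candidates = [i / (grid - 1) for i in range(grid)]
    return max(
        candidates,
        key=lambda w: auc(labels, [w * a + (1 - w) * b for a, b in zip(s1, s2)]),
    )

# Toy data: s1 ranks fairly well, s2 is noisier; the blend can beat either alone.
labels = [0, 0, 1, 0, 1, 1]
s1 = [0.1, 0.4, 0.35, 0.8, 0.7, 0.9]
s2 = [0.2, 0.1, 0.6, 0.3, 0.2, 0.5]
w = best_convex_weight(labels, s1, s2)
```

    Because the pure base learners (w = 0 and w = 1) are in the search grid, the blend's AUC can never fall below the better base learner's AUC.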

  20. Profit maximization mitigates competition

    DEFF Research Database (Denmark)

    Dierker, Egbert; Grodal, Birgit

    1996-01-01

    We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution of profits among consumers fully into account, and partial equilibrium analysis suffices.

  1. Generalized Tellegen Principle and Physical Correctness of System Representations

    Directory of Open Access Journals (Sweden)

    Vaclav Cerny

    2006-06-01

    The paper deals with a new problem of physical correctness detection in the area of strictly causal system representations. The proposed approach to the problem's solution is based on a generalization of Tellegen's theorem, well known from electrical engineering. Consequently, mathematically as well as physically correct results are obtained. In addition, some known and often used system representation structures are discussed from the developed point of view.

  2. General conditions for maximal violation of non-contextuality in discrete and continuous variables

    International Nuclear Information System (INIS)

    Laversanne-Finot, A; Ketterer, A; Coudreau, T; Keller, A; Milman, P; Barros, M R; Walborn, S P

    2017-01-01

    The contextuality of quantum mechanics can be shown by the violation of inequalities based on measurements of well-chosen observables. An important property of such observables is that their expectation value can be expressed in terms of probabilities for obtaining two exclusive outcomes. Examples of such inequalities have been constructed using either observables with a dichotomic spectrum or periodic functions obtained from displacement operators in phase space. Here we identify the general conditions on the spectral decomposition of observables demonstrating state-independent contextuality of quantum mechanics. Our results not only unify existing strategies for maximal violation of state-independent non-contextuality inequalities but also lead to new scenarios enabling such violations. Among the consequences of our results is the impossibility of having a state-independent maximal violation of non-contextuality in the Peres–Mermin scenario with discrete observables of odd dimensions. (paper)

  3. Value maximizing maintenance policies under general repair

    International Nuclear Information System (INIS)

    Marais, Karen B.

    2013-01-01

    One class of maintenance optimization problems considers the notion of general repair maintenance policies where systems are repaired or replaced on failure. In each case the optimality is based on minimizing the total maintenance cost of the system. These cost-centric optimizations ignore the value dimension of maintenance and can lead to maintenance strategies that do not maximize system value. This paper applies these ideas to the general repair optimization problem using a semi-Markov decision process, discounted cash flow techniques, and dynamic programming to identify the value-optimal actions for any given time and system condition. The impact of several parameters on maintenance strategy, such as operating cost and revenue, system failure characteristics, repair and replacement costs, and the planning time horizon, is explored. This approach provides a quantitative basis on which to base maintenance strategy decisions that contribute to system value. These decisions are different from those suggested by traditional cost-based approaches. The results show (1) how the optimal action for a given time and condition changes as replacement and repair costs change, and identifies the point at which these costs become too high for profitable system operation; (2) that for shorter planning horizons it is better to repair, since there is no time to reap the benefits of increased operating profit and reliability; (3) how the value-optimal maintenance policy is affected by the system's failure characteristics, and hence whether it is worthwhile to invest in higher reliability; and (4) the impact of the repair level on the optimal maintenance policy.

    Highlights:
    • Provides a quantitative basis for maintenance strategy decisions that contribute to system value.
    • Shows how the optimal action for a given condition changes as replacement and repair costs change.
    • Shows how the optimal policy is affected by the system's failure characteristics.
    • Shows when it is …
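
    A toy dynamic program conveys the flavor of a value-centric repair-versus-replace decision. All numbers are invented for illustration, and this finite-horizon sketch is far simpler than the paper's semi-Markov decision process:

```python
# Toy value-maximizing repair/replace model. States are machine conditions
# 0 (new) .. 3 (failed); an operating machine earns condition-dependent
# revenue and degrades one step per period; on failure we pick the action
# with the highest discounted future value, not the lowest cost.
REVENUE = [10.0, 8.0, 5.0, 0.0]       # per-period revenue by condition (assumed)
REPAIR_COST, REPLACE_COST = 4.0, 9.0  # assumed costs
GAMMA = 0.9                           # per-period discount factor

def value(horizon):
    """Return (V, policy): V[s] is the best discounted value with `horizon`
    periods left; policy[t] is the action chosen in the failed state when
    the backward induction has t+1 periods remaining."""
    V = [0.0] * 4
    policy = {}
    for t in range(horizon):
        nv = [0.0] * 4
        for s in range(3):                       # operating: earn, then degrade
            nv[s] = REVENUE[s] + GAMMA * V[s + 1]
        repair = -REPAIR_COST + GAMMA * V[2]     # repair -> condition 2
        replace = -REPLACE_COST + GAMMA * V[0]   # replace -> condition 0
        nv[3] = max(repair, replace)
        policy[t] = "repair" if repair >= replace else "replace"
        V = nv
    return V, policy

V1, p1 = value(1)     # one period left: the cheaper repair wins
V20, p20 = value(20)  # long horizon: replacement's higher revenue stream wins
```

    With one period left the cheap repair is optimal, but over a long horizon the model replaces, since replacement buys back the high-revenue early conditions; this horizon effect mirrors finding (2) above.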

  4. Geometrical optics in general relativity: A study of the higher order corrections

    International Nuclear Information System (INIS)

    Anile, A.M.

    1976-01-01

    The higher-order corrections to geometrical optics are studied in general relativity for an electromagnetic test wave. An explicit expression is found for the average energy-momentum tensor which takes into account the first-order corrections. Finally, the first-order corrections to the well-known area-intensity law of geometrical optics are derived.

  5. Dynamical Symmetry Breaking of Maximally Generalized Yang-Mills Model and Its Restoration at Finite Temperatures

    International Nuclear Information System (INIS)

    Wang Dianfu

    2008-01-01

    In terms of the Nambu-Jona-Lasinio mechanism, dynamical breaking of gauge symmetry for the maximally generalized Yang-Mills model is investigated. The gauge symmetry behavior at finite temperature is also investigated and it is shown that the gauge symmetry broken dynamically at zero temperature can be restored at finite temperatures

  6. String Loop Threshold Corrections for N=1 Generalized Coxeter Orbifolds

    OpenAIRE

    Kokorelis, Christos

    2000-01-01

    We discuss the calculation of threshold corrections to gauge coupling constants for the only non-decomposable class of abelian (2,2)-symmetric N=1 four-dimensional heterotic orbifold models, where the internal twist is realized as a generalized Coxeter automorphism. The latter orbifold was singled out in earlier work as the only N=1 heterotic $Z_N$ orbifold that satisfies the phenomenological criteria of correct minimal gauge coupling unification and cancellation of target space modular anomalies.

  7. Maximally multipartite entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Parisi, Giorgio; Pascazio, Saverio

    2008-06-01

    We introduce the notion of maximally multipartite entangled states of n qubits as a generalization of the bipartite case. These pure states have a bipartite entanglement that does not depend on the bipartition and is maximal for all possible bipartitions. They are solutions of a minimization problem. Examples for small n are investigated, both analytically and numerically.
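
    In this formulation the minimized quantity is the potential of multipartite entanglement: the purity of the reduced state averaged over all balanced bipartitions (sketched here from the standard definition in this line of work),

```latex
% Average purity over balanced bipartitions; MMES minimize this quantity.
\[
  \pi_{\mathrm{ME}} = \binom{n}{n_A}^{-1} \sum_{|A| = n_A} \operatorname{Tr} \rho_A^2 ,
  \qquad n_A = \left\lfloor n/2 \right\rfloor .
\]
% For qubits pi_ME >= 2^{-n_A}; saturating the bound for every bipartition
% gives a perfect MMES, attainable only for certain n.
```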

  8. Some applications of the most general form of the higher-order GUP with minimal length uncertainty and maximal momentum

    Science.gov (United States)

    Shababi, Homa; Chung, Won Sang

    2018-04-01

    In this paper, using the new type of D-dimensional nonperturbative Generalized Uncertainty Principle (GUP) which predicts both a minimal length uncertainty and a maximal observable momentum, we first obtain the maximally localized states and express their connections to [P. Pedram, Phys. Lett. B 714, 317 (2012)]. Then, in the context of our proposed GUP and using the generalized Schrödinger equation, we solve some important problems including the particle in a box and the one-dimensional hydrogen atom. Next, applying the modified Bohr-Sommerfeld quantization, we obtain the energy spectra of the quantum harmonic oscillator and the quantum bouncer. Finally, as an example, we investigate some statistical properties of a free particle, including the partition function and internal energy, in the presence of the mentioned GUP.

  9. Task-oriented maximally entangled states

    International Nuclear Information System (INIS)

    Agrawal, Pankaj; Pradhan, B

    2010-01-01

    We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.

  10. Generalized Second Law of Thermodynamics in Wormhole Geometry with Logarithmic Correction

    International Nuclear Information System (INIS)

    Faiz-ur-Rahman; Salahuddin; Akbar, M.

    2011-01-01

    We construct various cases for the validity of the generalized second law (GSL) of thermodynamics by assuming a logarithmic correction to the horizon entropy of an evolving wormhole. It is shown that the GSL is always respected for α_0 ≤ 0, whereas for α_0 > 0 the GSL is respected only if π r_{A+}^2 / ℏ < α_0. (general)

  11. On maximal massive 3D supergravity

    OpenAIRE

    Bergshoeff , Eric A; Hohm , Olaf; Rosseel , Jan; Townsend , Paul K

    2010-01-01

    We construct, at the linearized level, the three-dimensional (3D) N = 4 supersymmetric "general massive supergravity" and the maximally supersymmetric N = 8 "new massive supergravity". We also construct the maximally supersymmetric linearized N = 7 topologically massive supergravity, although we expect N = 6 to be maximal at the non-linear level.

  12. Maximal superintegrability of the generalized Kepler-Coulomb system on N-dimensional curved spaces

    International Nuclear Information System (INIS)

    Ballesteros, Angel; Herranz, Francisco J

    2009-01-01

    The superposition of the Kepler-Coulomb potential on the 3D Euclidean space with three centrifugal terms has recently been shown to be maximally superintegrable (Verrier and Evans 2008 J. Math. Phys. 49 022902) by finding an additional (hidden) integral of motion which is quartic in the momenta. In this paper, we present the generalization of this result to the N-dimensional spherical, hyperbolic and Euclidean spaces by making use of a unified symmetry approach based on the curvature parameter. The resulting Hamiltonian, formed by the (curved) Kepler-Coulomb potential together with N centrifugal terms, is shown to be endowed with 2N - 1 functionally independent integrals of the motion: one of them is quartic and the remaining ones are quadratic. The transition from the proper Kepler-Coulomb potential, with its associated quadratic Laplace-Runge-Lenz N-vector, to the generalized system is fully described. The role of spherical, nonlinear (cubic) and coalgebra symmetries in all these systems is highlighted.
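
    For orientation, in the Euclidean case the Hamiltonian in question is the Verrier-Evans superposition of the Kepler-Coulomb potential with N centrifugal terms (conventions for the centrifugal coefficients vary between papers):

```latex
\[
  H = \frac{1}{2}\sum_{i=1}^{N} p_i^2
      \;-\; \frac{k}{\sqrt{\sum_{j=1}^{N} x_j^2}}
      \;+\; \sum_{i=1}^{N} \frac{b_i}{x_i^2} .
\]
% On spherical/hyperbolic spaces the potential is deformed through the
% curvature parameter; the flat limit recovers the expression above.
```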

  13. Strong Coupling Corrections in Quantum Thermodynamics

    Science.gov (United States)

    Perarnau-Llobet, M.; Wilming, H.; Riera, A.; Gallego, R.; Eisert, J.

    2018-03-01

    Quantum systems strongly coupled to many-body systems equilibrate to the reduced state of a global thermal state, deviating from the local thermal state of the system as it occurs in the weak-coupling limit. Taking this insight as a starting point, we study the thermodynamics of systems strongly coupled to thermal baths. First, we provide strong-coupling corrections to the second law applicable to general systems in three of its different readings: As a statement of maximal extractable work, on heat dissipation, and bound to the Carnot efficiency. These corrections become relevant for small quantum systems and vanish in first order in the interaction strength. We then move to the question of power of heat engines, obtaining a bound on the power enhancement due to strong coupling. Our results are exemplified on the paradigmatic non-Markovian quantum Brownian motion.

  14. Corrections to the General (2,4) and (4,4) FDTD Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Meierbachtol, Collin S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Smith, William S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shao, Xuan-Min [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-29

The sampling weights associated with two general higher-order FDTD schemes were derived by Smith et al. and published in an IEEE Transactions on Antennas and Propagation article in 2012. Inconsistencies between governing equations and their resulting solutions were discovered within the article. In an effort to track down the root cause of these inconsistencies, the full three-dimensional, higher-order FDTD dispersion relation was re-derived using Mathematica™. During this process, two errors were identified in the article. Both errors are highlighted in this document. The corrected sampling weights are also provided. Finally, the original stability limits provided for both schemes are corrected and presented in a more precise form. It is recommended that any future implementations of the two general higher-order schemes from the Smith et al. 2012 article instead use the sampling weights and stability conditions listed in this document.
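    As background for the scheme family discussed above, the sketch below illustrates the standard fourth-order staggered-grid stencil that (2,4) FDTD schemes are built on — it is not the corrected sampling weights from this report, and the test signal is purely illustrative.

```python
import numpy as np

def staggered_d1(f, h):
    """Fourth-order staggered-grid first derivative (the (2,4) stencil).

    Returns estimates of f' at the midpoints between samples, using the
    classic weights 9/8 and -1/24.
    """
    return ((9.0 / 8.0) * (f[2:-1] - f[1:-2])
            - (1.0 / 24.0) * (f[3:] - f[:-3])) / h

h = 0.1
x = np.arange(0.0, 2.0 * np.pi, h)
mid = x[1:-2] + h / 2.0                       # where the estimates live
err = np.max(np.abs(staggered_d1(np.sin(x), h) - np.cos(mid)))
```

    Halving h should shrink the error by roughly a factor of 16, consistent with fourth-order accuracy.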

  15. Using individual differences to predict job performance: correcting for direct and indirect restriction of range.

    Science.gov (United States)

    Sjöberg, Sofia; Sjöberg, Anders; Näswall, Katharina; Sverke, Magnus

    2012-08-01

The present study investigates the relationship between individual differences, indicated by personality (FFM) and general mental ability (GMA), and job performance, applying two different methods of correction for range restriction. The results, derived by analyzing meta-analytic correlations, show that the more accurate method of correcting for indirect range restriction increased the operational validity of individual differences in predicting job performance, and that this increase was primarily due to general mental ability being a stronger predictor than any of the personality traits. The estimates for single traits can be applied in practice to maximize prediction of job performance. Further, differences in the relative importance of general mental ability in relation to overall personality assessment methods were substantial, and the estimates provided enable practitioners to perform a correct utility analysis of their overall selection procedure. © 2012 The Authors. Scandinavian Journal of Psychology © 2012 The Scandinavian Psychological Associations.
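    For context, the classic Thorndike Case II formula for *direct* range restriction can be sketched as follows; the indirect-restriction correction applied in the study additionally requires reliability information not modeled here, and the numbers below are illustrative, not taken from the study.

```python
import math

def correct_direct_restriction(r_restricted, sd_ratio):
    """Thorndike Case II correction for direct range restriction.

    sd_ratio = SD(unrestricted) / SD(restricted) on the predictor.
    Shown for context only; the study applies a correction for *indirect*
    range restriction, which also needs reliability estimates.
    """
    u = sd_ratio
    return u * r_restricted / math.sqrt(1.0 + (u * u - 1.0) * r_restricted ** 2)

# Illustrative numbers: an observed validity of .30 in a group whose
# predictor SD is half that of the applicant pool corrects to about .53.
r_corrected = correct_direct_restriction(0.30, 2.0)
```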

  16. Geometric correction of radiographic images using general purpose image processing program

    International Nuclear Information System (INIS)

    Kim, Eun Kyung; Cheong, Ji Seong; Lee, Sang Hoon

    1994-01-01

The present study was undertaken to compare geometrically corrected images produced by general-purpose image processing programs for the Apple Macintosh II computer (NIH Image, Adobe Photoshop) with standardized images produced by an individualized, custom-fabricated alignment instrument. Two non-standardized periapical films with an XCP film holder only were taken at the lower molar region of 19 volunteers. Two standardized periapical films with a customized XCP film holder with impression material on the bite-block were taken for each person. Geometric correction was performed with the Adobe Photoshop and NIH Image programs; specifically, the arbitrary image rotation function of Adobe Photoshop and the subtraction-with-transparency function of NIH Image were utilized. The standard deviations of grey values of subtracted images were used to measure image similarity. The average standard deviation of grey values of subtracted images of the standardized group was slightly lower than that of the corrected group. However, the difference was found to be statistically insignificant (p>0.05). It is considered that the NIH Image and Adobe Photoshop programs can be used for correction of non-standardized films taken with an XCP film holder in the lower molar region.

  17. Utility Maximization in Nonconvex Wireless Systems

    CERN Document Server

    Brehmer, Johannes

    2012-01-01

    This monograph formulates a framework for modeling and solving utility maximization problems in nonconvex wireless systems. First, a model for utility optimization in wireless systems is defined. The model is general enough to encompass a wide array of system configurations and performance objectives. Based on the general model, a set of methods for solving utility maximization problems is developed. The development is based on a careful examination of the properties that are required for the application of each method. The focus is on problems whose initial formulation does not allow for a solution by standard convex methods. Solution approaches that take into account the nonconvexities inherent to wireless systems are discussed in detail. The monograph concludes with two case studies that demonstrate the application of the proposed framework to utility maximization in multi-antenna broadcast channels.

  18. A Criterion to Identify Maximally Entangled Four-Qubit State

    International Nuclear Information System (INIS)

    Zha Xinwei; Song Haiyang; Feng Feng

    2011-01-01

Paolo Facchi et al. [Phys. Rev. A 77 (2008) 060304(R)] presented a maximally multipartite entangled state (MMES). Here, we give a criterion for the identification of maximally entangled four-qubit states. Using this criterion, we not only identify some existing maximally entangled four-qubit states in the literature, but also find several new ones. (general)

  19. Tri-maximal vs. bi-maximal neutrino mixing

    International Nuclear Information System (INIS)

    Scott, W.G

    2000-01-01

    It is argued that data from atmospheric and solar neutrino experiments point strongly to tri-maximal or bi-maximal lepton mixing. While ('optimised') bi-maximal mixing gives an excellent a posteriori fit to the data, tri-maximal mixing is an a priori hypothesis, which is not excluded, taking account of terrestrial matter effects

  20. A method for partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function

    International Nuclear Information System (INIS)

    Barbee, David L; Holden, James E; Nickles, Robert J; Jeraj, Robert; Flynn, Ryan T

    2010-01-01

Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects which may affect treatment prognosis, assessment or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated ...
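    The iterative correction described above can be sketched, in much simplified form, as a 1D Richardson-Lucy (EM) deconvolution with a spatially *invariant* Gaussian PSF; the paper's method additionally varies the PSF width with radial position and stops when the net change in the EM correction matrix falls below about 35%. All data here are synthetic.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def em_pvc(observed, psf, n_iter=200):
    """Richardson-Lucy / EM partial volume correction (1D, invariant PSF)."""
    x = observed.copy()
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        correction = np.convolve(ratio, psf, mode="same")  # symmetric PSF
        x *= correction
        # The paper stops when the net change in `correction` drops below
        # ~35%; here we simply run a fixed number of iterations.
    return x

psf = gaussian_kernel(sigma=2.0, radius=8)
truth = np.zeros(64)
truth[30:34] = 1.0                                  # small "hot" structure
observed = np.convolve(truth, psf, mode="same")     # partial-volume-degraded
restored = em_pvc(observed, psf)
```

    On noiseless data the iteration progressively restores the peak activity that partial volume effects suppressed.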

  1. A Movable Phantom Design for Quantitative Evaluation of Motion Correction Studies on High Resolution PET Scanners

    DEFF Research Database (Denmark)

    Olesen, Oline Vinter; Svarer, C.; Sibomana, M.

    2010-01-01

Head movements during brain imaging using high resolution positron emission tomography (PET) impair the image quality which, along with the improvement of the spatial resolution of PET scanners, in general, raises the importance of motion correction. Here, we present a new design for an automatic, movable phantom. Scans were reconstructed with a 3D ordered-subset expectation maximization algorithm with modeling of the point spread function (3DOSEM-PSF), and they were corrected for motions based on external tracking information using the Polaris Vicra real-time stereo motion-tracking system. The new automatic, movable phantom has a robust design and is a potential quality ...

  2. Self-consistent collective-coordinate method for ''maximally-decoupled'' collective subspace and its boson mapping: Quantum theory of ''maximally-decoupled'' collective motion

    International Nuclear Information System (INIS)

    Marumori, T.; Sakata, F.; Maskawa, T.; Une, T.; Hashimoto, Y.

    1983-01-01

    The main purpose of this paper is to develop a full quantum theory, which is capable by itself of determining a ''maximally-decoupled'' collective motion. The paper is divided into two parts. In the first part, the motivation and basic idea of the theory are explained, and the ''maximal-decoupling condition'' on the collective motion is formulated within the framework of the time-dependent Hartree-Fock theory, in a general form called the invariance principle of the (time-dependent) Schrodinger equation. In the second part, it is shown that when the author positively utilize the invariance principle, we can construct a full quantum theory of the ''maximally-decoupled'' collective motion. This quantum theory is shown to be a generalization of the kinematical boson-mapping theories so far developed, in such a way that the dynamical ''maximal-decoupling condition'' on the collective motion is automatically satisfied

  3. Food systems in correctional settings

    DEFF Research Database (Denmark)

    Smoyer, Amy; Kjær Minke, Linda

Food is a central component of life in correctional institutions and plays a critical role in the physical and mental health of incarcerated people and the construction of prisoners' identities and relationships. An understanding of the role of food in correctional settings and the effective management of food systems may improve outcomes for incarcerated people and help correctional administrators to maximize their health and safety. This report summarizes existing research on food systems in correctional settings and provides examples of food programmes in prison and remand facilities, including a case study of food-related innovation in the Danish correctional system. It offers specific conclusions for policy-makers, administrators of correctional institutions and prison-food-service professionals, and makes proposals for future research.

  4. A Simple General Solution for Maximal Horizontal Range of Projectile Motion

    OpenAIRE

    Busic, Boris

    2005-01-01

    A convenient change of variables in the problem of maximizing the horizontal range of the projectile motion, with an arbitrary initial vertical position of the projectile, provides a simple, straightforward solution.

  5. Comprehensive strategy for corrective actions at the Savannah River Site General Separations Area

    International Nuclear Information System (INIS)

    Ebra, M.A.; Lewis, C.M.; Amidon, M.B.; McClain, L.K.

    1991-01-01

    The Savannah River Site (SRS), operated by the Westinghouse Savannah River Company for the United States Department of Energy, contains a number of waste disposal units that are currently in various stages of corrective action investigations, closures, and postclosure corrective actions. Many of these sites are located within a 40-square-kilometer area called the General Separations Area (GSA). The SRS has proposed to the regulatory agencies, the United States Environmental Protection Agency (EPA) and the South Carolina Department of Health and Environmental Control (SCDHEC), that groundwater investigations and corrective actions in this area be conducted under a comprehensive plan. The proposed plan would address the continuous nature of the hydrogeologic regime below the GSA and the potential for multiple sources of contamination. This paper describes the proposed approach

  6. Generalized INverse imaging (GIN): ultrafast fMRI with physiological noise correction.

    Science.gov (United States)

    Boyacioğlu, Rasim; Barth, Markus

    2013-10-01

An ultrafast functional magnetic resonance imaging (fMRI) technique, called generalized inverse imaging (GIN), is proposed, which combines inverse imaging with a phase constraint, leading to a less underdetermined reconstruction, and with physiological noise correction. A single 3D echo planar imaging (EPI) prescan is sufficient to obtain the necessary coil sensitivity information and reference images that are used to reconstruct standard images, so that standard analysis methods are applicable. A moving-dots stimulus paradigm was chosen to assess the performance of GIN. We find that the spatial localization of activation for GIN is comparable to an EPI protocol and that maximum z-scores increase significantly. The high temporal resolution of GIN (50 ms) and the acquisition of the phase information enable unaliased sampling and regression of physiological signals. Using the phase time courses obtained from the 32 channels of the receiver coils as nuisance regressors in a general linear model results in significant improvement of the functional activation, rendering the acquisition of external physiological signals unnecessary. The proposed physiological noise correction can in principle be used for other fMRI protocols, such as simultaneous multislice acquisitions, which acquire the phase information sufficiently fast and sample physiological signals unaliased. Copyright © 2012 Wiley Periodicals, Inc.
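    The benefit of regressing out unaliased physiological signals can be illustrated with a toy general linear model; the synthetic "cardiac" regressor below stands in for the coil-phase time courses used by GIN, and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400                                   # time points (e.g. 50 ms sampling)
t = np.arange(n)
task = (np.sin(2 * np.pi * t / 80) > 0).astype(float)      # block design
cardiac = np.sin(2 * np.pi * 1.1 * t * 0.05)               # ~1.1 Hz, unaliased
y = 1.0 * task + 2.0 * cardiac + 0.1 * rng.standard_normal(n)

def glm_tstat(y, regressors):
    """OLS general linear model; t-statistic of the first regressor."""
    X = np.column_stack(regressors + [np.ones(len(y))])
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = float(rss[0]) / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[0] / np.sqrt(cov[0, 0])

t_plain = glm_tstat(y, [task])
t_denoised = glm_tstat(y, [task, cardiac])   # nuisance regressor included
```

    Adding the nuisance regressor removes the physiological variance from the residual, so the task t-statistic rises sharply, mirroring the z-score improvement reported above.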

  7. Maximization of Tsallis entropy in the combinatorial formulation

    International Nuclear Information System (INIS)

    Suyari, Hiroki

    2010-01-01

This paper presents the mathematical reformulation for maximization of the Tsallis entropy S_q in the combinatorial sense. More concretely, we generalize the original derivation of the Maxwell-Boltzmann distribution law to Tsallis statistics by means of the corresponding generalized multinomial coefficient. Our results reveal that maximization of S_{2−q} under the usual expectation, or of S_q under the q-average using the escort expectation, is naturally derived from the combinatorial formulations for Tsallis statistics with the respective combinatorial dualities, that is, one for the additive duality and the other for the multiplicative duality.
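    For reference, the quantities maximized above are the standard Tsallis entropy and the escort (q-)average; these are the standard textbook definitions, not this paper's combinatorial construction:

```latex
S_q = \frac{1 - \sum_i p_i^q}{q - 1},
\qquad
\lim_{q \to 1} S_q = -\sum_i p_i \ln p_i \ \text{(Boltzmann-Gibbs-Shannon)},
\qquad
\langle A \rangle_q = \frac{\sum_i p_i^q A_i}{\sum_j p_j^q}.
```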

  8. Generalized radiative corrections for hadronic targets

    International Nuclear Information System (INIS)

    Calan, C. de; Navelet, H.; Picard, J.

    1990-02-01

Besides the theory of radiative corrections at order α² for reactions involving an arbitrary number of particles, this report gives the complete formula for the correction factor δ in dσ = dσ_Born (1 + δ). The only approximation made here - unavoidable in this formulation - is to assume that the Born amplitude can be factorized. This calculation is valid for spin-zero bosons. In the spin-1/2 fermion case, an extra contribution appears, which has been computed analytically using a minor approximation. Special care has been devoted to the 1/v divergence of the amplitude near thresholds. [fr]

  9. A definition of maximal CP-violation

    International Nuclear Information System (INIS)

    Roos, M.

    1985-01-01

    The unitary matrix of quark flavour mixing is parametrized in a general way, permitting a mathematically natural definition of maximal CP violation. Present data turn out to violate this definition by 2-3 standard deviations. (orig.)

  10. A comparison of different experimental methods for general recombination correction for liquid ionization chambers

    DEFF Research Database (Denmark)

    Andersson, Jonas; Kaiser, Franz-Joachim; Gomez, Faustino

    2012-01-01

Radiation dosimetry of highly modulated dose distributions requires a detector with a high spatial resolution. Liquid filled ionization chambers (LICs) have the potential to become a valuable tool for the characterization of such radiation fields. However, the effect of an increased recombination of the charge carriers, as compared to using air as the sensitive medium, has to be corrected for. Due to the presence of initial recombination in LICs, the correction for general recombination losses is more complicated than for air-filled ionization chambers. In the present work, recently published ...

  11. A general X-ray fluorescence spectrometric technique based on simple corrections for matrix effects

    International Nuclear Information System (INIS)

    Kruidhof, H.

    1978-01-01

    The method reported, which is relatively simple and generally applicable for most materials, involves a combination of borax fusion with matrix effect corrections. The latter are done with algorithms, which are derived from the intensity formulae, together with empirical coefficients. (Auth.)

  12. A Generalized Correction for Attenuation.

    Science.gov (United States)

    Petersen, Anne C.; Bock, R. Darrell

    Use of the usual bivariate correction for attenuation with more than two variables presents two statistical problems. This pairwise method may produce a covariance matrix which is not at least positive semi-definite, and the bivariate procedure does not consider the possible influences of correlated errors among the variables. The method described…

  13. Bipartite Bell Inequality and Maximal Violation

    International Nuclear Information System (INIS)

    Li Ming; Fei Shaoming; Li-Jost Xian-Qing

    2011-01-01

We present new Bell inequalities for arbitrary dimensional bipartite quantum systems. The maximal violation of the inequalities is computed. These Bell inequalities are capable of detecting quantum entanglement of both pure and mixed quantum states more effectively. (general)

  14. Phenomenology of maximal and near-maximal lepton mixing

    International Nuclear Information System (INIS)

    Gonzalez-Garcia, M. C.; Pena-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.

    2001-01-01

The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x = tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ε ≡ 1 − 2 sin²θ_ex and quantify the present experimental status for |ε|. The most significant information on ν_e mixing comes from solar neutrino experiments. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for 10⁻⁸ eV² ≲ Δm² ≲ 10⁻⁷ eV². In the mass ranges Δm² ≳ 1.5×10⁻⁵ eV² and 4×10⁻¹⁰ eV² ≲ Δm² ≲ 10⁻⁷ eV², the full interval of |ε| is allowed. We also discuss the effects of maximal ν_e mixing in atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay.

  15. A multilevel search algorithm for the maximization of submodular functions applied to the quadratic cost partition problem

    NARCIS (Netherlands)

    Goldengorin, B.; Ghosh, D.

Maximization of submodular functions on a ground set is an NP-hard combinatorial optimization problem. Data-correcting algorithms are among the several algorithms suggested for solving this problem exactly and approximately. From the point of view of Hasse diagrams, data-correcting algorithms use ...

  16. Softly Broken Lepton Numbers: an Approach to Maximal Neutrino Mixing

    International Nuclear Information System (INIS)

    Grimus, W.; Lavoura, L.

    2001-01-01

We discuss models where the U(1) symmetries of lepton numbers are responsible for maximal neutrino mixing. We pay particular attention to an extension of the Standard Model (SM) with three right-handed neutrino singlets, in which we require that the three lepton numbers L_e, L_μ, and L_τ be separately conserved in the Yukawa couplings, but assume that they are softly broken by the Majorana mass matrix M_R of the neutrino singlets. In this framework, where lepton-number breaking occurs at a scale much higher than the electroweak scale, deviations from family lepton number conservation are calculable, i.e., finite, and lepton mixing stems exclusively from M_R. We show that in this framework either maximal atmospheric neutrino mixing or maximal solar neutrino mixing or both can be imposed by invoking symmetries. In this way those maximal mixings are stable against radiative corrections. The model which achieves maximal (or nearly maximal) solar neutrino mixing assumes that there are two different scales in M_R and that the lepton number L̄ = L_e − L_μ − L_τ is conserved in between them. We work out the difference between this model and the conventional scenario where (approximate) L̄ invariance is imposed directly on the mass matrix of the light neutrinos. (author)

  17. Color Fringe Correction by the Color Difference Prediction Using the Logistic Function.

    Science.gov (United States)

    Jang, Dong-Won; Park, Rae-Hong

    2017-05-01

    This paper proposes a new color fringe correction method that preserves the object color well by the color difference prediction using the logistic function. We observe two characteristics between normal edge (NE) and degraded edge (DE) due to color fringe: 1) the DE has relatively smaller R-G and B-G correlations than the NE and 2) the color difference in the NE can be fitted by the logistic function. The proposed method adjusts the color difference of the DE to the logistic function by maximizing the R-G and B-G correlations in the corrected color fringe image. The generalized logistic function with four parameters requires a high computational load to select the optimal parameters. In experiments, a one-parameter optimization can correct color fringe gracefully with a reduced computational load. Experimental results show that the proposed method restores well the original object color in the DE, whereas existing methods give monochromatic or distorted color.
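    The logistic fit of the color difference can be illustrated on synthetic data; the profile, parameter values, and the simple grid search below are all hypothetical stand-ins for the paper's one-parameter optimization.

```python
import numpy as np

def logistic(x, lo, hi, k, x0):
    """Generalized logistic: levels lo/hi, slope k, center x0."""
    return lo + (hi - lo) / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical R-G color-difference profile across an edge: a clean edge
# follows a logistic; fringing adds an overshoot near the transition.
x = np.arange(-10.0, 11.0)
clean = logistic(x, -20.0, 30.0, 1.3, 0.0)
fringed = clean + 15.0 * np.exp(-0.5 * (x / 2.0) ** 2)

def fit_slope(profile, x, lo, hi, x0, ks):
    """One-parameter fit (slope only), mirroring the reduced search."""
    errs = [np.sum((logistic(x, lo, hi, k, x0) - profile) ** 2) for k in ks]
    return float(ks[int(np.argmin(errs))])

k_hat = fit_slope(fringed, x, -20.0, 30.0, 0.0, np.linspace(0.2, 3.0, 141))
corrected = logistic(x, -20.0, 30.0, k_hat, 0.0)   # fringe replaced by fit
```

    Replacing the degraded-edge profile with the fitted logistic removes the fringe overshoot while keeping the underlying edge transition.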

  18. A cosmological problem for maximally symmetric supergravity

    International Nuclear Information System (INIS)

    German, G.; Ross, G.G.

    1986-01-01

    Under very general considerations it is shown that inflationary models of the universe based on maximally symmetric supergravity with flat potentials are unable to resolve the cosmological energy density (Polonyi) problem. (orig.)

  19. Oblique rotation in canonical correlation analysis reformulated as maximizing the generalized coefficient of determination.

    Science.gov (United States)

    Satomura, Hironori; Adachi, Kohei

    2013-07-01

    To facilitate the interpretation of canonical correlation analysis (CCA) solutions, procedures have been proposed in which CCA solutions are orthogonally rotated to a simple structure. In this paper, we consider oblique rotation for CCA to provide solutions that are much easier to interpret, though only orthogonal rotation is allowed in the existing formulations of CCA. Our task is thus to reformulate CCA so that its solutions have the freedom of oblique rotation. Such a task can be achieved using Yanai's (Jpn. J. Behaviormetrics 1:46-54, 1974; J. Jpn. Stat. Soc. 11:43-53, 1981) generalized coefficient of determination for the objective function to be maximized in CCA. The resulting solutions are proved to include the existing orthogonal ones as special cases and to be rotated obliquely without affecting the objective function value, where ten Berge's (Psychometrika 48:519-523, 1983) theorems on suborthonormal matrices are used. A real data example demonstrates that the proposed oblique rotation can provide simple, easily interpreted CCA solutions.

  20. An information maximization model of eye movements

    Science.gov (United States)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
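    The fixation rule described above (fixate next where the expected uncertainty reduction is greatest, with resolution falling off from the fovea) can be sketched on a toy uncertainty map; the Gaussian pickup function and all parameters are illustrative assumptions, not the paper's model.

```python
import numpy as np

def next_fixation(uncertainty, sigma=2.0):
    """Greedy rule: fixate where the foveated information pickup is largest."""
    h, w = uncertainty.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best, best_gain = (0, 0), -1.0
    for fy in range(h):
        for fx in range(w):
            # resolution (information pickup) falls off with eccentricity
            pickup = np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2) / (2 * sigma ** 2))
            gain = float(np.sum(uncertainty * pickup))
            if gain > best_gain:
                best, best_gain = (fy, fx), gain
    return best

def scan(uncertainty, n_fix, sigma=2.0):
    u = uncertainty.copy()
    ys, xs = np.mgrid[0:u.shape[0], 0:u.shape[1]]
    fixations = []
    for _ in range(n_fix):
        fy, fx = next_fixation(u, sigma)
        fixations.append((fy, fx))
        # information gathered at the fixation reduces local uncertainty
        u *= 1.0 - np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2) / (2 * sigma ** 2))
    return fixations, u

u0 = np.zeros((16, 16))
u0[3, 3] = 1.0
u0[12, 10] = 1.0            # two equally uncertain spots
fixes, residual = scan(u0, 2)
```

    The greedy scan visits the two uncertain locations in turn, after which essentially no uncertainty remains.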

  1. Quantum correction and ordering parameter for systems connected by a general point canonical transformation

    International Nuclear Information System (INIS)

    Yeon, Kyu Hwang; Hong, Suc Kyoung; Um, Chung In; George, Thomas F.

    2006-01-01

Using quantum operators corresponding to functions of the canonical variables, Schroedinger equations are constructed for quantum systems whose classical counterparts are connected by a general point canonical transformation. Using the operator connecting quantum states of the systems before and after the transformation, the quantum correction term and ordering parameter are obtained.

  2. Non-common path aberration correction in an adaptive optics scanning ophthalmoscope.

    Science.gov (United States)

    Sulai, Yusufu N; Dubra, Alfredo

    2014-09-01

The correction of non-common path aberrations (NCPAs) between the imaging and wavefront sensing channel in a confocal scanning adaptive optics ophthalmoscope is demonstrated. NCPA correction is achieved by maximizing an image sharpness metric while the confocal detection aperture is temporarily removed, effectively minimizing the monochromatic aberrations in the illumination path of the imaging channel. Comparison of NCPA estimated using zonal and modal orthogonal wavefront corrector bases provided wavefronts that differ by ~λ/20 root-mean-square (~λ/30 standard deviation). Sequential insertion of a cylindrical lens in the illumination and light collection paths of the imaging channel was used to compare image resolution after changing the wavefront correction to maximize image sharpness and intensity metrics. Finally, the NCPA correction was incorporated into the closed-loop adaptive optics control by biasing the wavefront sensor signals without reducing its bandwidth.
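    The sharpness-maximization step can be illustrated with a toy sensorless-correction loop; the model tying residual aberration to Gaussian blur width is a hypothetical stand-in for the real optics, and grid search stands in for the optimizer driving the wavefront corrector.

```python
import numpy as np

def sharpness(img):
    """Image sharpness metric: sum of squared intensities."""
    return float(np.sum(img ** 2))

def blur(img, width):
    """Toy imaging model: larger residual aberration -> wider Gaussian blur."""
    xk = np.arange(-8.0, 9.0)
    k = np.exp(-0.5 * (xk / max(width, 1e-3)) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

scene = np.zeros((32, 32))
scene[16, 16] = 1.0                    # point-like structure being imaged
true_ncpa = 0.7                        # unknown aberration coefficient

# Sweep the corrector coefficient and keep the sharpest image; the closer
# the correction is to the true aberration, the narrower the blur.
candidates = np.linspace(-2.0, 2.0, 81)
scores = [sharpness(blur(scene, abs(true_ncpa - c) + 0.2)) for c in candidates]
c_hat = float(candidates[int(np.argmax(scores))])
```

    The sharpness metric peaks where the corrector cancels the aberration, which is why maximizing it estimates the NCPA without a wavefront sensor in the loop.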

  3. Chamaebatiaria millefolium (Torr.) Maxim.: fernbush

    Science.gov (United States)

    Nancy L. Shaw; Emerenciana G. Hurd

    2008-01-01

    Fernbush - Chamaebatiaria millefolium (Torr.) Maxim. - the only species in its genus, is endemic to the Great Basin, Colorado Plateau, and adjacent areas of the western United States. It is an upright, generally multistemmed, sweetly aromatic shrub 0.3 to 2 m tall. Bark of young branches is brown and becomes smooth and gray with age. Leaves are leathery, alternate,...

  4. Metrological Array of Cyber-Physical Systems. Part 7. Additive Error Correction for Measuring Instrument

    Directory of Open Access Journals (Sweden)

    Yuriy YATSUK

    2015-06-01

At the design stage the uncertainty approach cannot be applied, because measurement results are not yet available; instead, the error approach can be used successfully, taking the nominal value of the instrument transformation function as true. The limiting possibilities of additive error correction of measuring instruments for cyber-physical systems are studied on the basis of general and special methods of measurement. Principles of maximal symmetry of the measuring circuit and of its minimal reconfiguration are proposed for measurement and/or calibration. For a variety of correction methods it is theoretically justified that a minimum additive error of measuring instruments exists when the real equivalent parameters of the input electronic switches are taken into account. Conditions for in-place self-calibration and verification of the measuring instruments are studied.

  5. A Simulated Annealing method to solve a generalized maximal covering location problem

    Directory of Open Access Journals (Sweden)

    M. Saeed Jabalameli

    2011-04-01

The maximal covering location problem (MCLP) seeks to locate a predefined number of facilities in order to maximize the number of covered demand points. In its classical form, MCLP makes three implicit assumptions: all-or-nothing coverage, individual coverage, and a fixed coverage radius. By relaxing these assumptions, three classes of model formulations have been developed: gradual cover models, cooperative cover models, and variable radius models. In this paper, we develop a special form of MCLP which combines the characteristics of gradual, cooperative, and variable radius cover models. The proposed problem has many applications, such as locating cell-phone towers. The model is formulated as a mixed integer non-linear program (MINLP). A simulated annealing algorithm is used to solve the resulting problem, and the performance of the proposed method is evaluated on a set of randomly generated problems.
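    A minimal sketch of simulated annealing applied to the *classical* MCLP (all-or-nothing coverage with a fixed radius), not the gradual/cooperative/variable-radius variant developed in the paper; the instance, move rule, and cooling schedule are illustrative.

```python
import math
import random

random.seed(1)
demands = [(random.random(), random.random(), random.randint(1, 10))
           for _ in range(60)]                      # (x, y, weight)
sites = [(random.random(), random.random()) for _ in range(15)]
P, RADIUS = 3, 0.25        # facilities to open, fixed coverage radius

def coverage(opened):
    """Total demand covered by at least one open site (all-or-nothing)."""
    return sum(w for x, y, w in demands
               if any(math.hypot(x - sites[s][0], y - sites[s][1]) <= RADIUS
                      for s in opened))

def anneal(iters=2000, t0=5.0, alpha=0.998):
    cur = random.sample(range(len(sites)), P)
    cur_val = coverage(cur)
    best, best_val, init_val, t = list(cur), cur_val, cur_val, t0
    for _ in range(iters):
        nxt = list(cur)                             # swap one open site
        nxt[random.randrange(P)] = random.choice(
            [s for s in range(len(sites)) if s not in cur])
        nxt_val = coverage(nxt)
        # accept improvements always, worsenings with Boltzmann probability
        if nxt_val >= cur_val or random.random() < math.exp((nxt_val - cur_val) / t):
            cur, cur_val = nxt, nxt_val
            if cur_val > best_val:
                best, best_val = list(cur), cur_val
        t *= alpha
    return best, best_val, init_val

sol, val, start_val = anneal()
```

    Tracking the best-so-far solution guarantees the final coverage is at least that of the random starting configuration.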

  6. 77 FR 42334 - Meeting of the Attorney General's National Task Force on Children Exposed to Violence (Correction)

    Science.gov (United States)

    2012-07-18

    ... Attorney General's National Task Force on Children Exposed to Violence (Correction) AGENCY: Office of...) published a notice in the Federal Register on July 2, 2012, announcing a meeting of the Attorney General's..., but rather, will be conducting preparatory work related to developing a draft report to the Attorney...

  7. Finding Maximal Pairs with Bounded Gap

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Lyngsø, Rune B.; Pedersen, Christian N. S.

    1999-01-01

    In this paper we present methods for finding all maximal pairs under various constraints on the gap. In a string of length n we can find all maximal pairs with gap in an upper- and lower-bounded interval in time O(n log n + z), where z is the number of reported pairs. If the upper bound is removed, the time reduces to O(n + z). Since a tandem repeat is a pair where the gap is zero, our methods can be seen as a generalization of finding tandem repeats. The running time of our methods equals the running time of well-known methods for finding tandem repeats.
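A brute-force sketch may clarify the definitions (quadratic time and purely illustrative; the paper's suffix-tree methods achieve the bounds quoted above). A maximal pair (i, j, length) is a repeated substring occurrence that can be extended neither left nor right, with gap j − (i + length).

```python
def maximal_pairs(s, min_gap=0, max_gap=None):
    """Naively report maximal pairs (i, j, length): s[i:i+length] == s[j:j+length],
    the match extends neither left nor right, and the gap j - (i + length)
    lies in [min_gap, max_gap] (max_gap=None means unbounded above)."""
    n = len(s)
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            # left-maximal: the characters just before the two occurrences differ
            if s[i] != s[j] or (i > 0 and s[i - 1] == s[j - 1]):
                continue
            length = 0
            while j + length < n and s[i + length] == s[j + length]:
                length += 1          # extend as far right as possible (right-maximal)
            gap = j - (i + length)
            if gap >= min_gap and (max_gap is None or gap <= max_gap):
                out.append((i, j, length))
    return out
```

With gap bounded by zero the routine reports exactly the tandem repeats, e.g. the pair (0, 2, 2) in "abab".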

  8. Maximal Bell's inequality violation for non-maximal entanglement

    International Nuclear Information System (INIS)

    Kobayashi, M.; Khanna, F.; Mann, A.; Revzen, M.; Santana, A.

    2004-01-01

    Bell's inequality violation (BIQV) for correlations of polarization is studied for a product state of two two-mode squeezed vacuum (TMSV) states. The violation allowed is shown to attain its maximal limit for all values of the squeezing parameter, ζ. We show via an explicit example that a state whose entanglement is not maximal allows maximal BIQV. The Wigner function of the state is non-negative, and the average value of either polarization is nil

  9. On Maximally Dissipative Shock Waves in Nonlinear Elasticity

    OpenAIRE

    Knowles, James K.

    2010-01-01

    Shock waves in nonlinearly elastic solids are, in general, dissipative. We study the following question: among all plane shock waves that can propagate with a given speed in a given one-dimensional nonlinearly elastic bar, which one—if any—maximizes the rate of dissipation? We find that the answer to this question depends strongly on the qualitative nature of the stress-strain relation characteristic of the given material. When maximally dissipative shocks do occur, they propagate according t...

  10. Gradient Dynamics and Entropy Production Maximization

    Science.gov (United States)

    Janečka, Adam; Pavelka, Michal

    2018-01-01

    We compare two methods for modeling dissipative processes, namely gradient dynamics and entropy production maximization. Both methods require similar physical inputs: how energy (or entropy) is stored and how it is dissipated. Gradient dynamics describes irreversible evolution by means of a dissipation potential and entropy; it automatically satisfies Onsager reciprocal relations as well as their nonlinear generalization (Maxwell-Onsager relations), and it has a statistical interpretation. Entropy production maximization is based on knowledge of free energy (or another thermodynamic potential) and entropy production. It also leads to the linear Onsager reciprocal relations and it has proven successful in the thermodynamics of complex materials. Both methods are thermodynamically sound as they ensure approach to equilibrium; we compare them and discuss their advantages and shortcomings. In particular, conditions under which the two approaches coincide and are capable of providing the same constitutive relations are identified. In addition, a commonly used but seldom mentioned step in entropy production maximization is pinpointed, and the condition of incompressibility is incorporated into gradient dynamics.

  11. Robust Active Label Correction

    DEFF Research Database (Denmark)

    Kremer, Jan; Sha, Fei; Igel, Christian

    2018-01-01

    Active label correction addresses the problem of learning from input data for which noisy labels are available (e.g., from imprecise measurements or crowd-sourcing) and each true label can be obtained at a significant cost (e.g., through additional measurements or human experts). To minimize … To select labels for correction, we adopt the active learning strategy of maximizing the expected model change. We consider the change in regularized empirical risk functionals that use different pointwise loss functions for patterns with noisy and true labels, respectively. Different loss functions for the noisy data lead to different active label correction algorithms. If loss functions consider the label noise rates, these rates are estimated during learning, where importance weighting compensates for the sampling bias. We show empirically that viewing the true label as a latent variable and computing …
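The expected-model-change criterion can be sketched for a logistic-regression model. This is an illustrative stand-in, not the paper's method: the paper works with general regularized empirical risk functionals and noise-rate-aware losses, while the sketch simply averages the gradient norm of the logistic loss over the model's own predictive distribution of the unknown true label.

```python
import numpy as np

def expected_model_change_scores(w, X):
    """Expected gradient-norm score for each noisily labeled point, treating
    the unknown true label as a latent variable distributed according to the
    current model's prediction (logistic-regression stand-in)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))        # current estimate of P(y = 1 | x)
    norms = np.linalg.norm(X, axis=1)
    # logistic-loss gradient at (x, y) is (p - y) x, so ||grad|| = |p - y| ||x||;
    # averaging over y ~ Bernoulli(p) gives 2 p (1 - p) ||x||
    return 2.0 * p * (1.0 - p) * norms

def select_for_correction(w, X, k=1):
    """Indices of the k points whose relabeling is expected to change the model most."""
    return np.argsort(expected_model_change_scores(w, X))[::-1][:k]
```

The score favors points near the decision boundary with large feature norm, which is where a corrected label would move the model most.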

  12. Maximizing and customer loyalty: Are maximizers less loyal?

    Directory of Open Access Journals (Sweden)

    Linda Lai

    2011-06-01

    Full Text Available Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.

  13. Multi-objective optimal reactive power dispatch to maximize power system social welfare in the presence of generalized unified power flow controller

    Directory of Open Access Journals (Sweden)

    Suresh Chintalapudi Venkata

    2015-09-01

    Full Text Available In this paper a novel non-linear optimization problem is formulated to maximize the social welfare in a restructured environment with a generalized unified power flow controller (GUPFC). This paper presents a methodology to optimally allocate the reactive power by minimizing voltage deviation at load buses and total transmission power losses so as to maximize the social welfare. The conventional active power generation cost function is modified by adding the costs of reactive power generated by the generators and shunt capacitors, and of total power losses. The formulated objectives are optimized individually and simultaneously as a multi-objective optimization problem, while satisfying equality, inequality, practical, and device operational constraints. A new optimization method based on two-stage initialization and random distribution processes is proposed; its effectiveness is demonstrated on the IEEE 30-bus system, and a detailed analysis is carried out.

  14. Operator quantum error-correcting subsystems for self-correcting quantum memories

    International Nuclear Information System (INIS)

    Bacon, Dave

    2006-01-01

    The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this idea has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses on the error-correcting subspace any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred; instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean-field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures

  15. Generalized model screening potentials for Fermi-Dirac plasmas

    Science.gov (United States)

    Akbari-Moghanjoughi, M.

    2016-04-01

    In this paper, some properties of relativistically degenerate quantum plasmas, such as static ion screening, structure factor, and Thomson scattering cross-section, are studied in the framework of linearized quantum hydrodynamic theory with the newly proposed kinetic γ-correction to Bohm term in low frequency limit. It is found that the correction has a significant effect on the properties of quantum plasmas in all density regimes, ranging from solid-density up to that of white dwarf stars. It is also found that Shukla-Eliasson attractive force exists up to a few times the density of metals, and the ionic correlations are seemingly apparent in the radial distribution function signature. Simplified statically screened attractive and repulsive potentials are presented for zero-temperature Fermi-Dirac plasmas, valid for a wide range of quantum plasma number-density and atomic number values. Moreover, it is observed that crystallization of white dwarfs beyond a critical core number-density persists with this new kinetic correction, but it is shifted to a much higher number-density value of n0 ≃ 1.94 × 10^37 cm^−3 (1.77 × 10^10 g cm^−3), which is nearly four orders of magnitude less than the nuclear density. It is found that the maximal Thomson scattering with the γ-corrected structure factor is a remarkable property of white dwarf stars. However, with the new γ-correction, the maximal scattering shifts to the spectrum region between hard X-ray and low-energy gamma-rays. White dwarfs composed of higher atomic-number ions are observed to maximally Thomson-scatter at slightly higher wavelengths, i.e., they maximally scatter slightly low-energy photons in the presence of correction.

  16. [Interlamellar circular keratoplasty for correction of high myopia].

    Science.gov (United States)

    Beliaev, V S; Dushin, N V; Gonchar, P A; Frolov, M A; Barashkov, V I; Kravchinina, V V; Balikoev, T M

    1995-01-01

    A new method is proposed for the correction of high myopia: interlamellar circular keratoplasty. This method was used in 15 patients (17 eyes) aged 18 to 54 with myopia of 9 to 17 diopters. Visual acuity of at least 0.5 was attained in 10 (60.3%) patients; in 7 (35.2%) patients visual acuity was 0.3 without correction, that is, equal to the maximal visual acuity achievable with optimal correction. The highest refraction effect was 15.0 diopters. The patients were followed up for 3 months to 16 years. The proposed method of correcting high myopia is highly effective and simple, and is recommended for clinical practice.

  17. The strategy of spectral shifts and the sets of correct methods for calculating eigenvalues of general tridiagonal matrices

    International Nuclear Information System (INIS)

    Emel'yanenko, G.A.; Sek, I.E.

    1988-01-01

    Many previously unknown correct methods for calculating eigenvalues of general tridiagonal matrices with real elements are obtained, together with criteria for singular tridiagonal matrices, necessary and sufficient conditions for tridiagonal matrix degeneracy, and recurrence processes with boundary conditions based on the calculation of minors of general upper and lower tridiagonal matrices. 6 refs
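The minor-based recurrence the abstract alludes to can be sketched concretely (an illustrative reconstruction, not the authors' algorithm): the characteristic polynomial of a general tridiagonal matrix satisfies a three-term recurrence over its leading principal minors, giving an O(n) determinant evaluation that eigenvalue searches can be built on.

```python
def tridiag_charpoly(diag, lower, upper, lam):
    """Evaluate det(T - lam*I) for a general (possibly non-symmetric)
    tridiagonal matrix with main diagonal `diag`, sub-diagonal `lower`,
    and super-diagonal `upper`, via the three-term recurrence on
    leading principal minors:
        p_k = (a_k - lam) * p_{k-1} - b_{k-1} * c_{k-1} * p_{k-2}
    """
    p_prev, p = 1.0, diag[0] - lam
    for k in range(1, len(diag)):
        p_prev, p = p, (diag[k] - lam) * p - lower[k - 1] * upper[k - 1] * p_prev
    return p
```

Root-finding on this scalar function (e.g. Newton's method, with shifts of lam to keep the recurrence well scaled, presumably the spirit of the "spectral shifts" in the title) locates eigenvalues without ever forming the full matrix.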

  18. The Boundary Crossing Theorem and the Maximal Stability Interval

    Directory of Open Access Journals (Sweden)

    Jorge-Antonio López-Renteria

    2011-01-01

    useful tools in the study of the stability of families of polynomials. Although both of these theorems seem intuitively obvious, they can be used for proving important results. In this paper, we give generalizations of these two theorems and apply the generalizations to finding the maximal stability interval.

  19. Bounds on absolutely maximally entangled states from shadow inequalities, and the quantum MacWilliams identity

    Science.gov (United States)

    Huber, Felix; Eltschka, Christopher; Siewert, Jens; Gühne, Otfried

    2018-04-01

    A pure multipartite quantum state is called absolutely maximally entangled (AME), if all reductions obtained by tracing out at least half of its parties are maximally mixed. Maximal entanglement is then present across every bipartition. The existence of such states is in many cases unclear. With the help of the weight enumerator machinery known from quantum error correction and the shadow inequalities, we obtain new bounds on the existence of AME states in dimensions larger than two. To complete the treatment on the weight enumerator machinery, the quantum MacWilliams identity is derived in the Bloch representation. Finally, we consider AME states whose subsystems have different local dimensions, and present an example for a 2×3×3×3 system that shows maximal entanglement across every bipartition.

  20. Principles of maximally classical and maximally realistic quantum ...

    Indian Academy of Sciences (India)

    Principles of maximally classical and maximally realistic quantum mechanics. S M ROY. Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India. Abstract. Recently Auberson, Mahoux, Roy and Singh have proved a long standing conjecture of Roy and Singh: In 2N-dimensional phase space, ...

  1. Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!

    International Nuclear Information System (INIS)

    Nutku, Yavuz

    2003-01-01

    Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems

  2. Implications of maximal Jarlskog invariant and maximal CP violation

    International Nuclear Information System (INIS)

    Rodriguez-Jauregui, E.; Universidad Nacional Autonoma de Mexico

    2001-04-01

    We argue here why the CP violating phase Φ in the quark mixing matrix is maximal, that is, Φ = 90°. In the Standard Model CP violation is related to the Jarlskog invariant J, which can be obtained from non-commuting Hermitian mass matrices. In this article we derive the conditions to have Hermitian mass matrices which give maximal Jarlskog invariant J and maximal CP violating phase Φ. We find that all squared moduli of the quark mixing elements have a singular point when the CP violation phase Φ takes the value Φ = 90°. This special feature of the Jarlskog invariant J and the quark mixing matrix is a clear and precise indication that the CP violating phase Φ is maximal in order to let nature treat all of the quark mixing matrix moduli democratically. (orig.)

  3. Generalized model screening potentials for Fermi-Dirac plasmas

    International Nuclear Information System (INIS)

    Akbari-Moghanjoughi, M.

    2016-01-01

    In this paper, some properties of relativistically degenerate quantum plasmas, such as static ion screening, structure factor, and Thomson scattering cross-section, are studied in the framework of linearized quantum hydrodynamic theory with the newly proposed kinetic γ-correction to Bohm term in low frequency limit. It is found that the correction has a significant effect on the properties of quantum plasmas in all density regimes, ranging from solid-density up to that of white dwarf stars. It is also found that Shukla-Eliasson attractive force exists up to a few times the density of metals, and the ionic correlations are seemingly apparent in the radial distribution function signature. Simplified statically screened attractive and repulsive potentials are presented for zero-temperature Fermi-Dirac plasmas, valid for a wide range of quantum plasma number-density and atomic number values. Moreover, it is observed that crystallization of white dwarfs beyond a critical core number-density persists with this new kinetic correction, but it is shifted to a much higher number-density value of n_0 ≃ 1.94 × 10^37 cm^−3 (1.77 × 10^10 g cm^−3), which is nearly four orders of magnitude less than the nuclear density. It is found that the maximal Thomson scattering with the γ-corrected structure factor is a remarkable property of white dwarf stars. However, with the new γ-correction, the maximal scattering shifts to the spectrum region between hard X-ray and low-energy gamma-rays. White dwarfs composed of higher atomic-number ions are observed to maximally Thomson-scatter at slightly higher wavelengths, i.e., they maximally scatter slightly low-energy photons in the presence of correction.

  4. Generalized model screening potentials for Fermi-Dirac plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Akbari-Moghanjoughi, M. [Faculty of Sciences, Department of Physics, Azarbaijan Shahid Madani University, 51745-406 Tabriz, Iran and International Centre for Advanced Studies in Physical Sciences and Institute for Theoretical Physics, Ruhr University Bochum, D-44780 Bochum (Germany)

    2016-04-15

    In this paper, some properties of relativistically degenerate quantum plasmas, such as static ion screening, structure factor, and Thomson scattering cross-section, are studied in the framework of linearized quantum hydrodynamic theory with the newly proposed kinetic γ-correction to Bohm term in low frequency limit. It is found that the correction has a significant effect on the properties of quantum plasmas in all density regimes, ranging from solid-density up to that of white dwarf stars. It is also found that Shukla-Eliasson attractive force exists up to a few times the density of metals, and the ionic correlations are seemingly apparent in the radial distribution function signature. Simplified statically screened attractive and repulsive potentials are presented for zero-temperature Fermi-Dirac plasmas, valid for a wide range of quantum plasma number-density and atomic number values. Moreover, it is observed that crystallization of white dwarfs beyond a critical core number-density persists with this new kinetic correction, but it is shifted to a much higher number-density value of n{sub 0} ≃ 1.94 × 10{sup 37} cm{sup −3} (1.77 × 10{sup 10} gr cm{sup −3}), which is nearly four orders of magnitude less than the nuclear density. It is found that the maximal Thomson scattering with the γ-corrected structure factor is a remarkable property of white dwarf stars. However, with the new γ-correction, the maximal scattering shifts to the spectrum region between hard X-ray and low-energy gamma-rays. White dwarfs composed of higher atomic-number ions are observed to maximally Thomson-scatter at slightly higher wavelengths, i.e., they maximally scatter slightly low-energy photons in the presence of correction.

  5. Tripartite entanglement in qudit stabilizer states and application in quantum error correction

    Energy Technology Data Exchange (ETDEWEB)

    Looi, Shiang Yong; Griffiths, Robert B. [Department of Physics, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 (United States)

    2011-11-15

    Consider a stabilizer state on n qudits, each of dimension D with D being a prime or squarefree integer, divided into three mutually disjoint sets or parts. Generalizing a result of Bravyi et al. [J. Math. Phys. 47, 062106 (2006)] for qubits (D=2), we show that up to local unitaries, the three parts of the state can be written as a tensor product of unentangled single-qudit states, maximally entangled Einstein-Podolsky-Rosen (EPR) pairs, and tripartite Greenberger-Horne-Zeilinger (GHZ) states. We employ this result to obtain a complete characterization of the properties of a class of channels associated with stabilizer error-correcting codes, along with their complementary channels.

  6. Scalable Nonlinear AUC Maximization Methods

    OpenAIRE

    Khalid, Majdi; Ray, Indrakshi; Chitsaz, Hamidreza

    2017-01-01

    The area under the ROC curve (AUC) is a measure of interest in various machine learning and data mining applications. It has been widely used to evaluate classification performance on heavily imbalanced data. The kernelized AUC maximization machines have established a superior generalization ability compared to linear AUC machines because of their capability in modeling the complex nonlinear structure underlying most real world-data. However, the high training complexity renders the kernelize...

  7. Efficient maximal Poisson-disk sampling and remeshing on surfaces

    KAUST Repository

    Guo, Jianwei; Yan, Dongming; Jia, Xiaohong; Zhang, Xiaopeng

    2015-01-01

    Poisson-disk sampling is one of the fundamental research problems in computer graphics that has many applications. In this paper, we study the problem of maximal Poisson-disk sampling on mesh surfaces. We present a simple approach that generalizes the 2D maximal sampling framework to surfaces. The key observation is to use a subdivided mesh as the sampling domain for conflict checking and void detection. Our approach improves the state-of-the-art approach in efficiency, quality and the memory consumption.

  8. Efficient maximal Poisson-disk sampling and remeshing on surfaces

    KAUST Repository

    Guo, Jianwei

    2015-02-01

    Poisson-disk sampling is one of the fundamental research problems in computer graphics that has many applications. In this paper, we study the problem of maximal Poisson-disk sampling on mesh surfaces. We present a simple approach that generalizes the 2D maximal sampling framework to surfaces. The key observation is to use a subdivided mesh as the sampling domain for conflict checking and void detection. Our approach improves the state-of-the-art approach in efficiency, quality and the memory consumption.
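The conflict-checking step at the heart of the approach can be illustrated by a simple 2D dart-throwing sketch (illustrative only: the paper works on mesh surfaces and uses mesh subdivision for conflict checking and void detection, whereas this sketch works in a flat rectangle and only approaches maximality as the number of darts grows).

```python
import math
import random

def poisson_disk_2d(width, height, r, max_darts=20000, seed=0):
    """Dart throwing for (near-)maximal Poisson-disk sampling in a rectangle:
    accept a uniform random point iff it lies at distance >= r from every
    previously accepted point. A background grid with cell size r/sqrt(2)
    limits each conflict check to a constant number of cells."""
    rng = random.Random(seed)
    cell = r / math.sqrt(2)
    grid = {}                     # cell indices -> accepted points in that cell
    samples = []
    for _ in range(max_darts):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        gx, gy = int(x / cell), int(y / cell)
        if any((sx - x) ** 2 + (sy - y) ** 2 < r * r
               for i in range(gx - 2, gx + 3)
               for j in range(gy - 2, gy + 3)
               for (sx, sy) in grid.get((i, j), ())):
            continue              # conflict: dart lands too close to an accepted sample
        grid.setdefault((gx, gy), []).append((x, y))
        samples.append((x, y))
    return samples
```

True maximality requires detecting and filling the remaining voids explicitly, which is exactly the gap the subdivided-mesh domain in the paper addresses.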

  9. Maximizers versus satisficers

    Directory of Open Access Journals (Sweden)

    Andrew M. Parker

    2007-12-01

    Full Text Available Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.

  10. A unified MGF-based capacity analysis of diversity combiners over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2012-03-01

    Unified exact ergodic capacity results for L-branch coherent diversity combiners including equal-gain combining (EGC) and maximal-ratio combining (MRC) are not known. This paper develops a novel generic framework for the capacity analysis of L-branch EGC/MRC over generalized fading channels. The framework is used to derive new results for the gamma-shadowed generalized Nakagami-m fading model which can be a suitable model for the fading environments encountered by high frequency (60 GHz and above) communications. The mathematical formalism is illustrated with some selected numerical and simulation results confirming the correctness of our newly proposed framework. © 2012 IEEE.
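The quantity the framework targets can be estimated numerically for a sanity check. The sketch below is a plain Monte Carlo estimate, not the authors' MGF-based formulas: it simulates L-branch maximal-ratio combining with i.i.d. Nakagami-m branch SNRs (gamma-distributed with shape m), under the stated i.i.d. assumption.

```python
import numpy as np

def mrc_capacity_nakagami(L, m, avg_snr_db, n=200_000, seed=1):
    """Monte Carlo estimate of the ergodic capacity E[log2(1 + sum_l gamma_l)]
    of L-branch maximal-ratio combining with i.i.d. Nakagami-m branch SNRs."""
    rng = np.random.default_rng(seed)
    avg_snr = 10.0 ** (avg_snr_db / 10.0)
    # branch SNR under Nakagami-m fading ~ Gamma(shape=m, scale=avg_snr/m)
    gammas = rng.gamma(shape=m, scale=avg_snr / m, size=(n, L)).sum(axis=1)
    return np.log2(1.0 + gammas).mean()
```

For m = 1 (Rayleigh fading) at 10 dB average branch SNR this reproduces the classical single-branch ergodic capacity of about 2.9 bit/s/Hz, and adding branches shows the expected diversity gain.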

  11. Some Direct and Generalized Effects of Replacing an Autistic Man's Echolalia with Correct Responses to Questions.

    Science.gov (United States)

    McMorrow, Martin J.; Foxx, R. M.

    1986-01-01

    The use of operant procedures was extended to decrease immediate echolalia and increase appropriate responding to questions of a 21-year-old autistic man. Multiple baseline designs demonstrated that echolalia was rapidly replaced with correct stimulus-specific responses. A variety of generalized improvements were observed in verbal responses to…

  12. A Maximal Element Theorem in FWC-Spaces and Its Applications

    Science.gov (United States)

    Hu, Qingwen; Miao, Yulin

    2014-01-01

    A maximal element theorem is proved in finite weakly convex spaces (FWC-spaces, in short), which need not carry any linear, convex, or topological structure. Using the maximal element theorem, we develop new existence theorems for solutions to the variational relation problem, the generalized equilibrium problem, the equilibrium problem with lower and upper bounds, and the minimax problem in FWC-spaces. The results presented in this paper unify and extend some known results in the literature. PMID:24782672

  13. Developing maximal neuromuscular power: Part 1--biological basis of maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-01-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances, the ability to generate maximal muscular power. Part 1 focuses on the factors that affect maximal power production, while part 2, which will follow in a forthcoming edition of Sports Medicine, explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability of the neuromuscular system to generate maximal power is affected by a range of interrelated factors. Maximal muscular power is defined and limited by the force-velocity relationship and affected by the length-tension relationship. The ability to generate maximal power is influenced by the type of muscle action involved and, in particular, the time available to develop force, storage and utilization of elastic energy, interactions of contractile and elastic elements, potentiation of contractile and elastic filaments as well as stretch reflexes. Furthermore, maximal power production is influenced by morphological factors including fibre type contribution to whole muscle area, muscle architectural features and tendon properties as well as neural factors including motor unit recruitment, firing frequency, synchronization and inter-muscular coordination. In addition, acute changes in the muscle environment (i.e. alterations resulting from fatigue, changes in hormone milieu and muscle temperature) impact the ability to generate maximal power. Resistance training has been shown to impact each of these neuromuscular factors in quite specific ways. Therefore, an understanding of the biological basis of maximal power production is essential for developing training programmes that effectively enhance maximal power production in the human.

  14. A network of helping: Generalized reciprocity and cooperative behavior in response to peer and staff affirmations and corrections among therapeutic community residents.

    Science.gov (United States)

    Doogan, Nathan J; Warren, Keith

    2017-01-01

    Clinical theory in therapeutic communities (TCs) for substance abuse treatment emphasizes the importance of peer interactions in bringing about change. This implies that residents will respond in a more prosocial manner to peer versus staff intervention and that residents will interact in such a way as to maintain cooperation. The data consist of electronic records of peer and staff affirmations and corrections at four corrections-based therapeutic community units. We treat the data as a directed social network of affirmations. We sampled 100 resident days from each unit (n = 400) and used a generalized linear mixed effects network time series model to analyze the predictors of sending and receiving affirmations and corrections. The model allowed us to control for characteristics of individuals as well as network-related dependencies. Residents show generalized reciprocity following peer affirmations, but not following staff affirmations. Residents did not respond to peer corrections by increasing affirmations, but responded to staff corrections by decreasing affirmations. Residents directly reciprocated peer affirmations. Residents were more likely to affirm a peer whom they had recently corrected. Residents were homophilous with respect to race, age and program entry time. This analysis demonstrates that TC residents react more prosocially to behavioral intervention by peers than by staff. Further, the community exhibits generalized and direct reciprocity, mechanisms known to foster cooperation in groups. Multiple forms of homophily influence resident interactions. These findings validate TC clinical theory while suggesting paths to improved outcomes.

  15. Principle of Entropy Maximization for Nonequilibrium Steady States

    DEFF Research Database (Denmark)

    Shapiro, Alexander; Stenby, Erling Halfdan

    2002-01-01

    The goal of this contribution is to find out to what extent the principle of entropy maximization, which serves as a basis for the equilibrium thermodynamics, may be generalized onto non-equilibrium steady states. We prove a theorem that, in the system of thermodynamic coordinates, where entropy...

  16. Maximally-localized position, Euclidean path-integral, and thermodynamics in GUP quantum mechanics

    Science.gov (United States)

    Bernardo, Reginald Christian S.; Esguerra, Jose Perico H.

    2018-04-01

    In dealing with quantum mechanics at very high energies, it is essential to adapt to a quasiposition representation using the maximally-localized states because of the generalized uncertainty principle. In this paper, we look at maximally-localized states as eigenstates of the operator ξ = X + iβP that we refer to as the maximally-localized position. We calculate the overlap between maximally-localized states and show that the identity operator can be expressed in terms of the maximally-localized states. Furthermore, we show that the maximally-localized position is diagonal in momentum-space and that the maximally-localized position and its adjoint satisfy commutation and anti-commutation relations reminiscent of the harmonic oscillator commutation and anti-commutation relations. As an application, we use the maximally-localized position in developing the Euclidean path-integral and introduce the compact form of the propagator for maximal localization. The free particle momentum-space propagator and the propagator for maximal localization are analytically evaluated up to quadratic order in β. Finally, we obtain a path-integral expression for the partition function of a thermodynamic system using the maximally-localized states. The partition function of a gas of noninteracting particles is evaluated. At temperatures exceeding the Planck energy, we obtain the gas's maximum internal energy N/(2β) and recover the zero heat capacity of an ideal gas.
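The oscillator-like relations mentioned in the abstract follow directly; a sketch, assuming the standard quadratic GUP algebra [X, P] = iħ(1 + βP²) rather than whatever specific deformation the paper adopts:

```latex
% Assuming the quadratic GUP algebra [X, P] = i\hbar (1 + \beta P^2)
\xi = X + i\beta P, \qquad \xi^{\dagger} = X - i\beta P,
\\[4pt]
[\xi, \xi^{\dagger}] = -2i\beta\,[X, P] = 2\beta\hbar\left(1 + \beta P^{2}\right),
\\[4pt]
\{\xi, \xi^{\dagger}\} = 2\left(X^{2} + \beta^{2} P^{2}\right).
```

These mirror the harmonic-oscillator relations for a ∝ x + ip, up to the β-dependent operator on the right-hand side of the commutator.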

  17. The unbiasedness of a generalized mirage boundary correction method for Monte Carlo integration estimators of volume

    Science.gov (United States)

    Thomas B. Lynch; Jeffrey H. Gove

    2014-01-01

    The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...

  18. Maximizers versus satisficers

    OpenAIRE

    Andrew M. Parker; Wandi Bruine de Bruin; Baruch Fischhoff

    2007-01-01

    Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...

  19. Half-maximal supersymmetry from exceptional field theory

    Energy Technology Data Exchange (ETDEWEB)

    Malek, Emanuel [Arnold Sommerfeld Center for Theoretical Physics, Department fuer Physik, Ludwig-Maximilians-Universitaet Muenchen (Germany)]

    2017-10-15

    We study D ≥ 4-dimensional half-maximal flux backgrounds using exceptional field theory. We define the relevant generalised structures and also find the integrability conditions which give warped half-maximal Minkowski{sub D} and AdS{sub D} vacua. We then show how to obtain consistent truncations of type II / 11-dimensional SUGRA which break half the supersymmetry. Such truncations can be defined on backgrounds admitting exceptional generalised SO(d - 1 - N) structures, where d = 11 - D, and N is the number of vector multiplets obtained in the lower-dimensional theory. Our procedure yields the most general embedding tensors satisfying the linear constraint of half-maximal gauged SUGRA. We use this to prove that all D ≥ 4 half-maximal warped AdS{sub D} and Minkowski{sub D} vacua of type II / 11-dimensional SUGRA admit a consistent truncation keeping only the gravitational supermultiplet. We also show how to obtain heterotic double field theory from exceptional field theory and comment on the M-theory / heterotic duality. In five dimensions, we find a new SO(5, N) double field theory with a (6 + N)-dimensional extended space. Its section condition has one solution corresponding to 10-dimensional N = 1 supergravity and another yielding six-dimensional N = (2, 0) SUGRA. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  20. Off-shell representations of maximally-extended supersymmetry

    International Nuclear Information System (INIS)

    Cox, P.H.

    1985-01-01

    A general theorem on the necessity of off-shell central charges in representations of maximally-extended supersymmetry (number of spinor charges = 4 × largest spin) is presented. A procedure for building larger and higher-N representations is also explored; a (noninteracting) N=8, maximum spin 2, off-shell representation is achieved. Difficulties in adding interactions for this representation are discussed.

  1. Maximal Regularity of the Discrete Harmonic Oscillator Equation

    Directory of Open Access Journals (Sweden)

    Airton Castro

    2009-01-01

    Full Text Available We give a representation of the solution for the best approximation of the harmonic oscillator equation formulated in a general Banach space setting, and a characterization of lp-maximal regularity—or well posedness—solely in terms of R-boundedness properties of the resolvent operator involved in the equation.

  2. Simplified correction of g-value measurements

    DEFF Research Database (Denmark)

    Duer, Karsten

    1998-01-01

    been carried out using a detailed physical model based on ISO9050 and prEN410 but using polarized data for non-normal incidence. This model is only valid for plane, clear glazings and therefore not suited for corrections of measurements performed on complex glazings. To investigate a more general...... correction procedure the results from the measurements on the Interpane DGU have been corrected using the principle outlined in (Rosenfeld, 1996). This correction procedure is more general as corrections can be carried out without a correct physical model of the investigated glazing. On the other hand...... the way this “general” correction procedure is used is not always in accordance with the physical conditions....

  3. Entropy maximization

    Indian Academy of Sciences (India)

    Abstract. It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f h_i dμ = λ_i for i = 1, 2, ..., k, the maximizer of entropy is an f_0 that is proportional to exp(∑ c_i h_i) for some choice of c_i. An extension of this to a continuum of.
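    The stated closed form can be checked numerically. The sketch below is a minimal illustration of my own (function names and the discrete state space are illustrative, not from the record): it takes the single constraint h(x) = x, so the maximizer is proportional to exp(c·x), and solves for the multiplier c by bisection, since the constrained mean is monotone in c.

```python
import math

def max_entropy_dist(values, target_mean, lo=-50.0, hi=50.0):
    """Maximum-entropy pmf on `values` subject to a mean constraint.

    The maximizer has the exponential-family form p_i ∝ exp(c * v_i);
    we find c by bisection because the mean is increasing in c.
    """
    def dist(c):
        w = [math.exp(c * v) for v in values]
        z = sum(w)
        return [wi / z for wi in w]

    def mean(c):
        return sum(pi * v for pi, v in zip(dist(c), values))

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    return dist(0.5 * (lo + hi))

# Classic illustration: a die constrained to have mean 4.5.
p = max_entropy_dist([1, 2, 3, 4, 5, 6], 4.5)
```

    The resulting probabilities grow geometrically in the face value, as the exp(∑ c_i h_i) form dictates.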

  4. The Bianchi classification of maximal D = 8 gauged supergravities

    NARCIS (Netherlands)

    Bergshoeff, Eric; Gran, Ulf; Linares, Román; Nielsen, Mikkel; Ortín, Tomás; Roest, Diederik

    2003-01-01

    We perform the generalized dimensional reduction of D = 11 supergravity over three-dimensional group manifolds as classified by Bianchi. Thus, we construct 11 different maximal D = 8 gauged supergravities, two of which have an additional parameter. One class of group manifolds (class B) leads to

  5. The Bianchi classification of maximal D=8 gauged supergravities

    NARCIS (Netherlands)

    Bergshoeff, E; Gran, U; Linares, R; Nielsen, M; Ortin, T; Roest, D

    2003-01-01

    We perform the generalized dimensional reduction of D = 11 supergravity over three-dimensional group manifolds as classified by Bianchi. Thus, we construct 11 different maximal D = 8 gauged supergravities, two of which have an additional parameter. One class of group manifolds (class B) leads to

  6. Edge corrections to electromagnetic Casimir energies from general-purpose Mathieu-function routines

    Science.gov (United States)

    Blose, Elizabeth Noelle; Ghimire, Biswash; Graham, Noah; Stratton-Smith, Jeremy

    2015-01-01

    Scattering theory methods make it possible to calculate the Casimir energy of a perfectly conducting elliptic cylinder opposite a perfectly conducting plane in terms of Mathieu functions. In the limit of zero radius, the elliptic cylinder becomes a finite-width strip, which allows for the study of edge effects. However, existing packages for computing Mathieu functions are insufficient for this calculation because none can compute Mathieu functions of both the first and second kind for complex arguments. To address this shortcoming, we have written a general-purpose Mathieu-function package, based on algorithms developed by Alhargan. We use these routines to find edge corrections to the proximity force approximation for the Casimir energy of a perfectly conducting strip opposite a perfectly conducting plane.

  7. Maximal multiplier operators in Lp(·)(Rn) spaces

    Czech Academy of Sciences Publication Activity Database

    Gogatishvili, Amiran; Kopaliani, T.

    2016-01-01

    Roč. 140, č. 4 (2016), s. 86-97 ISSN 0007-4497 R&D Projects: GA ČR GA13-14743S Institutional support: RVO:67985840 Keywords: spherical maximal function * variable Lebesgue spaces * boundedness result Subject RIV: BA - General Mathematics Impact factor: 0.750, year: 2016 http://www.sciencedirect.com/science/article/pii/S0007449715000329

  8. Entropy Maximization

    Indian Academy of Sciences (India)

    It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f h_i dμ = λ_i for i = 1, 2, …, k the maximizer of entropy is an f_0 that is proportional to exp(∑ c_i h_i) for some choice of c_i. An extension of this to a continuum of ...

  9. Maximally incompatible quantum observables

    Energy Technology Data Exchange (ETDEWEB)

    Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)

    2014-05-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  10. Maximally incompatible quantum observables

    International Nuclear Information System (INIS)

    Heinosaari, Teiko; Schultz, Jussi; Toigo, Alessandro; Ziman, Mario

    2014-01-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  11. Nonzero θ13 and neutrino masses from the modified tri-bi-maximal neutrino mixing matrix

    International Nuclear Information System (INIS)

    Damanik, A.

    2014-01-01

    There are 3 types of neutrino mixing matrices: tri-bi-maximal, bi-maximal, and democratic. These 3 types of neutrino mixing matrices predict that the mixing angle θ13 should be null. Motivated by the recent experimental evidence of a nonzero and relatively large θ13, we modified the tribimaximal mixing matrix by introducing a simple perturbation matrix into the tribimaximal neutrino mixing matrix. In this scenario, we obtained the nonzero mixing angle θ13 = 7.9 degrees, which is in agreement with the present experimental results. By imposing a two-zeros texture on the neutrino mass matrix obtained from the modified tribimaximal mixing matrix, we then have a neutrino mass spectrum with normal hierarchy. Some phenomenological implications are also discussed. It appears that if we use the solar neutrino squared-mass difference to determine the values of the neutrino masses, then we cannot have the correct value for the atmospheric squared-mass difference. Conversely, if we use the experimental value of the atmospheric squared-mass difference to determine the neutrino masses, then we cannot have the correct value for the solar neutrino squared-mass difference.

  12. A general dead-time correction method based on live-time stamping. Application to the measurement of short-lived radionuclides.

    Science.gov (United States)

    Chauvenet, B; Bobin, C; Bouchard, J

    2017-12-01

    Dead-time correction formulae are established in the general case of superimposed non-homogeneous Poisson processes. Based on the same principles as conventional live-timed counting, this method exploits the additional information made available using digital signal processing systems, and especially the possibility to store the time stamps of live-time intervals. No approximation needs to be made to obtain those formulae. Estimates of the variances of corrected rates are also presented. This method is applied to the activity measurement of short-lived radionuclides. Copyright © 2017 Elsevier Ltd. All rights reserved.
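    For orientation, the conventional live-timed counting that this method generalizes reduces to dividing the registered counts by the accumulated live time. A minimal sketch under my own assumptions (non-extending dead time, illustrative function and parameter names; the paper's time-stamping formulae for non-homogeneous Poisson processes are not reproduced here):

```python
def live_time_corrected_rate(n_counts, total_time, dead_time_per_event):
    """Conventional live-timed dead-time correction (non-extending model).

    Each registered event blinds the system for `dead_time_per_event`
    seconds, so the accumulated live time is the total measurement time
    minus the summed dead intervals, and the corrected rate is the
    number of counts divided by that live time.
    """
    live_time = total_time - n_counts * dead_time_per_event
    if live_time <= 0:
        raise ValueError("dead time exceeds the measurement time")
    return n_counts / live_time

# 9000 counts in 10 s with a 10 microsecond dead time per event.
rate = live_time_corrected_rate(9000, 10.0, 10e-6)
```

    The time-stamping approach in the record replaces the `n_counts * dead_time_per_event` estimate with the exactly recorded live-time intervals.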

  13. Maximizing the Spread of Influence via Generalized Degree Discount.

    Science.gov (United States)

    Wang, Xiaojie; Zhang, Xue; Zhao, Chengli; Yi, Dongyun

    2016-01-01

    It is a crucial and fundamental issue to identify a small subset of influential spreaders that can control the spreading process in networks. In previous studies, a degree-based heuristic called DegreeDiscount has been shown to effectively identify multiple influential spreaders and has served as a benchmark method. However, the basic assumption of DegreeDiscount is not adequate, because it treats all nodes equally, without any differences. To address the general situation in real-world networks, a novel heuristic method named GeneralizedDegreeDiscount is proposed in this paper as an effective extension of the original method. In our method, the status of a node is defined as the probability of not being influenced by any of its neighbors, and an index, the generalized discounted degree of a node, is presented to measure the expected number of nodes it can influence. The spreaders are then selected sequentially by their generalized discounted degree in the current network. Empirical experiments are conducted on four real networks, and the results show that the spreaders identified by our approach are more influential than those identified by several benchmark methods. Finally, we analyze the relationship between our method and three common degree-based methods.
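    The DegreeDiscount benchmark that this record extends can be sketched as follows. This is a rough implementation of the classic heuristic for the independent cascade model (the function name, graph representation, and default probability are my own choices; it is the baseline, not GeneralizedDegreeDiscount itself):

```python
def degree_discount(adj, k, p=0.01):
    """DegreeDiscount seed selection for influence maximization.

    adj: dict mapping node -> set of neighbours (undirected graph).
    k:   number of seed spreaders to select.
    p:   propagation probability of the independent cascade model.

    Each time a node u is seeded, every unseeded neighbour w has its
    score discounted to d_w - 2*t_w - (d_w - t_w)*t_w*p, where t_w is
    the number of already-seeded neighbours of w.
    """
    d = {v: len(adj[v]) for v in adj}   # plain degrees
    t = {v: 0 for v in adj}             # seeded-neighbour counts
    dd = dict(d)                        # discounted degrees
    seeds = []
    for _ in range(k):
        u = max((v for v in adj if v not in seeds), key=dd.get)
        seeds.append(u)
        for w in adj[u]:
            if w not in seeds:
                t[w] += 1
                dd[w] = d[w] - 2 * t[w] - (d[w] - t[w]) * t[w] * p

    return seeds

# On a star graph the hub should be picked first.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
seeds = degree_discount(star, 2)
```

    GeneralizedDegreeDiscount replaces the discounted degree with the expected number of nodes a candidate can influence, given each node's probability of not being influenced by its neighbors.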

  14. Maximal combustion temperature estimation

    International Nuclear Information System (INIS)

    Golodova, E; Shchepakina, E

    2006-01-01

    This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models

  15. Developing maximal neuromuscular power: part 2 - training considerations for improving maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-02-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances: the ability to generate maximal muscular power. Part 1, published in an earlier issue of Sports Medicine, focused on the factors that affect maximal power production while part 2 explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability to generate maximal power during complex motor skills is of paramount importance to successful athletic performance across many sports. A crucial issue faced by scientists and coaches is the development of effective and efficient training programmes that improve maximal power production in dynamic, multi-joint movements. Such training is referred to as 'power training' for the purposes of this review. Although further research is required in order to gain a deeper understanding of the optimal training techniques for maximizing power in complex, sports-specific movements and the precise mechanisms underlying adaptation, several key conclusions can be drawn from this review. First, a fundamental relationship exists between strength and power, which dictates that an individual cannot possess a high level of power without first being relatively strong. Thus, enhancing and maintaining maximal strength is essential when considering the long-term development of power. Second, consideration of movement pattern, load and velocity specificity is essential when designing power training programmes. Ballistic, plyometric and weightlifting exercises can be used effectively as primary exercises within a power training programme that enhances maximal power. The loads applied to these exercises will depend on the specific requirements of each particular sport and the type of movement being trained. 
The use of ballistic exercises with loads ranging from 0% to 50% of one-repetition maximum (1RM) and

  16. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
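    The basic mechanism can be illustrated with the Hamming(7,4) code (a minimal sketch with names of my own choosing; the paper's universal generalization is not reproduced here): a 7-bit source block is treated as an error pattern and compressed to its 3-bit syndrome, and decompression returns the minimum-weight coset leader, which reproduces the block exactly whenever it contains at most one 1, as expected for a sufficiently sparse binary memoryless source.

```python
# Hamming(7,4) parity-check matrix: column j (1-based) is j in binary.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(bits):
    """Compress a length-7 source block to its 3-bit syndrome s = H·x mod 2."""
    return tuple(sum(h * b for h, b in zip(row, bits)) % 2 for row in H)

def decompress(s):
    """Return the minimum-weight (coset-leader) pattern with syndrome s.

    Because column j of H is the binary representation of j, a nonzero
    syndrome directly names the position of the single 1 in the block.
    """
    x = [0] * 7
    pos = s[0] * 4 + s[1] * 2 + s[2]
    if pos:
        x[pos - 1] = 1
    return x

block = [0, 0, 0, 0, 1, 0, 0]   # sparse 7-bit source block
s = syndrome(block)             # 3 compressed digits for 7 source digits
recovered = decompress(s)
```

    Blocks of weight two or more are mapped to the nearest coset leader, which is the source of the small, controllable distortion the abstract refers to.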

  17. Generalized Bekenstein-Hawking system: logarithmic correction

    International Nuclear Information System (INIS)

    Chakraborty, Subenoy

    2014-01-01

    The present work is a generalization of the recent work [arXiv:1206.1420] on the modified Hawking temperature on the event horizon. Here the Hawking temperature is generalized by multiplying the modified Hawking temperature by a variable parameter α representing the ratio of the growth rate of the apparent horizon to that of the event horizon. It is found that both the first and the generalized second law of thermodynamics are valid on the event horizon for any fluid distribution. Subsequently, the Bekenstein entropy is modified on the event horizon and the thermodynamical laws are examined. Finally, an interpretation of the parameters involved is presented. (orig.)

  18. Efficient Conservation in a Utility-Maximization Framework

    Directory of Open Access Journals (Sweden)

    Frank W. Davis

    2006-06-01

    Full Text Available Systematic planning for biodiversity conservation is being conducted at scales ranging from global to national to regional. The prevailing planning paradigm is to identify the minimum land allocations needed to reach specified conservation targets or maximize the amount of conservation accomplished under an area or budget constraint. We propose a more general formulation for setting conservation priorities that involves goal setting, assessing the current conservation system, developing a scenario of future biodiversity given the current conservation system, and allocating available conservation funds to alter that scenario so as to maximize future biodiversity. Under this new formulation for setting conservation priorities, the value of a site depends on resource quality, threats to resource quality, and costs. This planning approach is designed to support collaborative processes and negotiation among competing interest groups. We demonstrate these ideas with a case study of the Sierra Nevada bioregion of California.

  19. Application of the two-dose-rate method for general recombination correction for liquid ionization chambers in continuous beams

    International Nuclear Information System (INIS)

    Andersson, Jonas; Toelli, Heikki

    2011-01-01

    A method to correct for the general recombination losses for liquid ionization chambers in continuous beams has been developed. The proposed method has been derived from Greening's theory for continuous beams and is based on measuring the signal from a liquid ionization chamber and an air-filled monitor ionization chamber at two different dose rates. The method has been tested with two plane-parallel liquid ionization chambers in a continuous x-ray beam with a tube voltage of 120 kV and with dose rates between 2 and 13 Gy min^-1. The liquids used as sensitive media in the chambers were isooctane (C8H18) and tetramethylsilane (Si(CH3)4). The general recombination effect was studied using chamber polarizing voltages of 100, 300, 500, 700 and 900 V for both liquids. The relative standard deviation of the results for the collection efficiency with respect to general recombination was found to be a maximum of 0.7% for isooctane and 2.4% for tetramethylsilane. The results are in excellent agreement with Greening's theory for collection efficiencies over 90%. The measured and corrected signals from the liquid ionization chambers used in this work are in very good agreement with the air-filled monitor chamber with respect to signal-to-dose linearity.

  20. Classical evolution and quantum generation in generalized gravity theories including string corrections and tachyons: Unified analyses

    International Nuclear Information System (INIS)

    Hwang, Jai-chan; Noh, Hyerim

    2005-01-01

    We present cosmological perturbation theory based on generalized gravity theories including string theory correction terms and a tachyonic complication. The classical evolution as well as the quantum generation processes in these varieties of gravity theories are presented in unified forms. These apply both to the scalar- and tensor-type perturbations. Analyses are made based on the curvature variable in two different gauge conditions often used in the literature in Einstein's gravity; these are the curvature variables in the comoving (or uniform-field) gauge and the zero-shear gauge. Applications to generalized slow-roll inflation and its consequent power spectra are derived in unified forms which include a wide range of inflationary scenarios based on Einstein's gravity and others

  1. [Interlamellar sectoral keratoplasty in the surgical correction of astigmatism].

    Science.gov (United States)

    Frolov, M A; Beliaev, V S; Dushin, N V; Kravchinina, V V; Barashkov, V I; Gonchar, P A

    1996-01-01

    A new original method of interlamellar sectorial keratoplasty is proposed for surgical correction of astigmatism. Eleven operations were carried out in 8 patients (11 eyes) with astigmatism of 4 to 7.0 diopters. Visual acuity without correction was 0.6 to 1.0 in 5 patients (7 eyes, 63.6%). In 2 patients (2 eyes, 18.2%) visual acuity without correction was 0.3 to 0.5, and in 2 more patients (2 eyes, 18.2%) it was from 0.1 to 0.3, that is, equal to the maximal visual acuity with the optimal correction. The refraction effect stabilized in 3-4 months. The highest refraction effect attained was 7.0 diopters. The patients were followed up for 3 months to 4 years. Clinical analysis of the operations confirmed the efficacy and reliability of the method and the stability of refraction. Interlamellar sectorial keratoplasty is recommended for surgical correction of astigmatism.

  2. Correction of gene expression data

    DEFF Research Database (Denmark)

    Darbani Shirvanehdeh, Behrooz; Stewart, C. Neal, Jr.; Noeparvar, Shahin

    2014-01-01

    This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies....... For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell-number. Based on inter-treatment variations of reference genes, we introduce...

  3. Derivative pricing based on local utility maximization

    OpenAIRE

    Jan Kallsen

    2002-01-01

    This paper discusses a new approach to contingent claim valuation in general incomplete market models. We determine the neutral derivative price which occurs if investors maximize their local utility and if derivative demand and supply are balanced. We also introduce the sensitivity process of a contingent claim. This process quantifies the reliability of the neutral derivative price and it can be used to construct price bounds. Moreover, it allows one to calibrate market models in order to be co...

  4. Violations of Grice's Maxims in The Prince and the Pauper Movie

    OpenAIRE

    Antonius Waget

    2016-01-01

    Proper responses must be provided by interlocutors to make conversation productive and meaningful. However, interlocutors do not always provide proper responses because they may not even know the rules of conversation. Grice coined 4 maxims as general rules to govern daily conversation. The maxims are Quantity, Quality, Relevance, and Manner. Conversation occurs in real daily interaction and also in the arts, including movies. The Prince and the Pauper movie is one of the media for human daily conversation. So...

  5. Is CP violation maximal

    International Nuclear Information System (INIS)

    Gronau, M.

    1984-01-01

    Two ambiguities are noted in the definition of the concept of maximal CP violation. The phase convention ambiguity is overcome by introducing a CP violating phase in the quark mixing matrix U which is invariant under rephasing transformations. The second ambiguity, related to the parametrization of U, is resolved by finding a single empirically viable definition of maximal CP violation when assuming that U does not single out one generation. Considerable improvement in the calculation of nonleptonic weak amplitudes is required to test the conjecture of maximal CP violation. 21 references

  6. Shareholder, stakeholder-owner or broad stakeholder maximization

    OpenAIRE

    Mygind, Niels

    2004-01-01

    With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating stakeholder-owner. Maximization of shareholder value is a special case of owner-maximization, and only under quite restrictive assumptions is shareholder maximization larger than or equal to stakeholder-owner...

  7. Maximal dissipation and well-posedness for the compressible Euler system

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard

    2014-01-01

    Roč. 16, č. 3 (2014), s. 447-461 ISSN 1422-6928 EU Projects: European Commission(XE) 320078 - MATHEF Keywords : maximal dissipation * compressible Euler system * weak solution Subject RIV: BA - General Mathematics Impact factor: 1.186, year: 2014 http://link.springer.com/article/10.1007/s00021-014-0163-8

  8. Nonrandom Intrafraction Target Motions and General Strategy for Correction of Spine Stereotactic Body Radiotherapy

    International Nuclear Information System (INIS)

    Ma Lijun; Sahgal, Arjun; Hossain, Sabbir; Chuang, Cynthia; Descovich, Martina; Huang, Kim; Gottschalk, Alex; Larson, David A.

    2009-01-01

    Purpose: To characterize nonrandom intrafraction target motions for spine stereotactic body radiotherapy and to develop a method of correction via image guidance. The dependence of target motions, as well as the effectiveness of the correction strategy for lesions of different locations within the spine, was analyzed. Methods and Materials: Intrafraction target motions for 64 targets in 64 patients treated with a total of 233 fractions were analyzed. Based on the target location, the cases were divided into three groups, i.e., cervical (n = 20 patients), thoracic (n = 20 patients), or lumbar-sacrum (n = 24 patients) lesions. For each case, time-lag autocorrelation analysis was performed for each degree of freedom of motion that included both translations (x, y, and z shifts) and rotations (roll, yaw, and pitch). A general correction strategy based on periodic interventions was derived to determine the time interval required between two adjacent interventions, to overcome the patient-specific target motions. Results: Nonrandom target motions were detected for 100% of cases regardless of target locations. Cervical spine targets were found to possess the highest incidence of nonrandom target motion compared with thoracic and lumbar-sacral lesions (p < 0.001). The average time needed to maintain the target motion to within 1 mm of translation or 1 deg. of rotational deviation was 5.5 min, 5.9 min, and 7.1 min for cervical, thoracic, and lumbar-sacrum locations, respectively (at 95% confidence level). Conclusions: A high incidence of nonrandom intrafraction target motions was found for spine stereotactic body radiotherapy treatments. Periodic interventions at approximately every 5 minutes or less were needed to overcome such motions.

  9. Maximal violation of Bell's inequalities for algebras of observables in tangent spacetime regions

    International Nuclear Information System (INIS)

    Summers, S.J.; Werner, R.

    1988-01-01

    We continue our study of Bell's inequalities and quantum field theory. It is shown in considerably broader generality than in our previous work that algebras of local observables corresponding to complementary wedge regions maximally violate Bell's inequality in all normal states. Pairs of commuting von Neumann algebras that maximally violate Bell's inequalities in all normal states are characterized. Algebras of local observables corresponding to tangent double cones are shown to maximally violate Bell's inequalities in all normal states in dilatation-invariant theories, in free quantum field models, and in a class of interacting models. Further, it is proven that such algebras are not split in any theory with an ultraviolet scaling limit

  10. Isochronicity correction in the CR storage ring

    International Nuclear Information System (INIS)

    Litvinov, S.; Toprek, D.; Weick, H.; Dolinskii, A.

    2013-01-01

    A challenge for nuclear physics is to measure masses of exotic nuclei up to the limits of nuclear existence which are characterized by low production cross-sections and short half-lives. The large acceptance Collector Ring (CR) [1] at FAIR [2] tuned in the isochronous ion-optical mode offers unique possibilities for measuring short-lived and very exotic nuclides. However, in a ring designed for maximal acceptance, many factors limit the resolution. One point is a limit in time resolution inversely proportional to the transverse emittance. But most of the time aberrations can be corrected and others become small for large number of turns. We show the relations of the time correction to the corresponding transverse focusing and that the main correction for large emittance corresponds directly to the chromaticity correction for transverse focusing of the beam. With the help of Monte-Carlo simulations for the full acceptance we demonstrate how to correct the revolution times so that in principle resolutions of Δm/m = 10^-6 can be achieved. In these calculations the influence of magnet inhomogeneities and extended fringe fields are considered and a calibration scheme also for ions with different mass-to-charge ratio is presented

  11. FLOUTING MAXIMS IN INDONESIA LAWAK KLUB CONVERSATION

    Directory of Open Access Journals (Sweden)

    Rahmawati Sukmaningrum

    2017-04-01

    Full Text Available This study aims to identify the types of maxims flouted in the conversation in the famous comedy show Indonesia Lawak Club. Likewise, it also tries to reveal the speakers' intention in flouting the maxims in the conversation during the show. The writers use a descriptive qualitative method in conducting this research. The data is taken from the dialogue of Indonesia Lawak Club and then analyzed based on Grice's cooperative principles. The researchers read the dialogue's transcripts, identify the maxims, and interpret the data to find the speakers' intention for flouting the maxims in the communication. The results show that there are four types of maxims flouted in the dialogue. Those are maxim of quality (23%), maxim of quantity (11%), maxim of manner (31%), and maxim of relevance (35%). Flouting the maxims in the conversations is intended to make the speakers feel uncomfortable with the conversation, show arrogance, show disagreement or agreement, and ridicule other speakers.

  12. The behavioral economics of consumer brand choice: patterns of reinforcement and utility maximization.

    Science.gov (United States)

    Foxall, Gordon R; Oliveira-Castro, Jorge M; Schrezenmaier, Teresa C

    2004-06-30

    Purchasers of fast-moving consumer goods generally exhibit multi-brand choice, selecting apparently randomly among a small subset or "repertoire" of tried and trusted brands. Their behavior shows both matching and maximization, though it is not clear just what the majority of buyers are maximizing. Each brand attracts, however, a small percentage of consumers who are 100%-loyal to it during the period of observation. Some of these are exclusively buyers of premium-priced brands who are presumably maximizing informational reinforcement because their demand for the brand is relatively price-insensitive or inelastic. Others buy exclusively the cheapest brands available and can be assumed to maximize utilitarian reinforcement since their behavior is particularly price-sensitive or elastic. Between them are the majority of consumers whose multi-brand buying takes the form of selecting a mixture of economy- and premium-priced brands. Based on the analysis of buying patterns of 80 consumers for 9 product categories, the paper examines the continuum of consumers so defined and seeks to relate their buying behavior to the question of how and what consumers maximize.

  13. Estimation of maximal oxygen uptake without exercise testing in Korean healthy adult workers.

    Science.gov (United States)

    Jang, Tae-Won; Park, Shin-Goo; Kim, Hyoung-Ryoul; Kim, Jung-Man; Hong, Young-Seoub; Kim, Byoung-Gwon

    2012-08-01

    Maximal oxygen uptake is generally accepted as the most valid and reliable index of cardiorespiratory fitness and functional aerobic capacity. The exercise test for measuring maximal oxygen uptake is unsuitable for screening tests in public health examinations, because of the potential risks of exercise exertion and time demands. We designed this study to determine whether work-related physical activity is a potential predictor of maximal oxygen uptake, and to develop a maximal oxygen uptake equation using a non-exercise regression model for the cardiorespiratory fitness test in Korean adult workers. Study subjects were adult workers of small-sized companies in Korea. Subjects with a history of disease such as hypertension, diabetes, asthma and angina were excluded. In total, 217 adult subjects (113 men of 21-55 years old and 104 women of 20-64 years old) were included. A self-report questionnaire survey was conducted on study subjects, and maximal oxygen uptake of each subject was measured with the exercise test. Statistical analysis was carried out to develop an equation for estimating maximal oxygen uptake. The predictors for estimating maximal oxygen uptake included age, gender, body mass index, smoking, leisure-time physical activity and the factors representing work-related physical activity. Work-related physical activity was identified to be a predictor of maximal oxygen uptake. Moreover, the equation showed high validity according to the statistical analysis. The equation for estimating maximal oxygen uptake developed in the present study could be used as a screening test for assessing cardiorespiratory fitness in Korean adult workers.
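
    The abstract does not reproduce the study's fitted coefficients. As a hedged sketch of the non-exercise regression idea only, a one-predictor least-squares fit on synthetic data (all numbers below are invented, not from the paper; requires Python 3.10+ for `statistics.linear_regression`) could look like:

    ```python
    import statistics

    # Synthetic illustration: (work-related physical activity score, measured VO2max in ml/kg/min).
    # The actual study regressed on age, sex, BMI, smoking and activity factors;
    # none of these numbers or coefficients come from the paper.
    activity = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    vo2max = [30.1, 32.0, 33.8, 36.2, 37.9, 40.1]

    fit = statistics.linear_regression(activity, vo2max)

    def estimate_vo2max(activity_score: float) -> float:
        """Non-exercise estimate of maximal oxygen uptake from the fitted line."""
        return fit.slope * activity_score + fit.intercept

    print(round(estimate_vo2max(3.5), 1))  # → 35.0 (prediction at the mean predictor value)
    ```

    A screening application would evaluate such an equation from questionnaire answers alone, with no treadmill or cycle-ergometer test.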

  14. VIOLATION OF CONVERSATION MAXIM ON TV ADVERTISEMENTS

    Directory of Open Access Journals (Sweden)

    Desak Putu Eka Pratiwi

    2015-07-01

    Full Text Available Maxim is a principle that must be obeyed by all participants textually and interpersonally in order to have a smooth communication process. Conversation maxims are divided into four, namely maxim of quality, maxim of quantity, maxim of relevance, and maxim of manner of speaking. Violation of a maxim may occur in a conversation in which the information the speaker has is not delivered well to his speaking partner. Violation of a maxim in a conversation will result in an awkward impression. Examples of violation are given information that is redundant, untrue, irrelevant, or convoluted. Advertisers often deliberately violate the maxims to create unique and controversial advertisements. This study aims to examine the violation of maxims in conversations of TV ads. The source of data in this research is food advertisements aired on TV media. Documentation and observation methods are applied to obtain qualitative data. The theory used in this study is the maxim theory proposed by Grice (1975). The results of the data analysis are presented with the informal method. The results of this study show an interesting fact that the violation of maxims in the conversations found in the advertisements actually makes the advertisements very attractive and gives them a high value.

  15. Finding Maximal Quasiperiodicities in Strings

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Pedersen, Christian N. S.

    2000-01-01

    Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log^2 n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes in the suffix tree that have a superprimitive path-label.
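
    A substring q is a quasiperiod (cover) of s when its possibly overlapping occurrences tile all of s. The O(n log n) suffix-tree algorithm is beyond a short sketch, but a direct quadratic-time check of the covering property (helper names are illustrative, not from the paper) conveys the definition:

    ```python
    def occurrences(q: str, s: str) -> list[int]:
        """Start positions of all (possibly overlapping) occurrences of q in s."""
        return [i for i in range(len(s) - len(q) + 1) if s.startswith(q, i)]

    def is_cover(q: str, s: str) -> bool:
        """True if occurrences of q cover every position of s, i.e. q is a quasiperiod of s."""
        occ = occurrences(q, s)
        if not occ or occ[0] != 0 or occ[-1] != len(s) - len(q):
            return False
        # Consecutive occurrences may overlap but must not leave a gap.
        return all(b - a <= len(q) for a, b in zip(occ, occ[1:]))

    print(is_cover("aba", "abababa"))  # → True ("aba" at 0, 2, 4 covers the whole string)
    print(is_cover("aba", "abacaba"))  # → False (position 3, "c", is uncovered)
    ```

    The cited algorithm reports all q that are maximal with this property without enumerating candidates naively.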

  16. Iteration of ultrasound aberration correction methods

    Science.gov (United States)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult. It has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterating aberration correction with a TDA filter have been investigated to study its convergence properties. Weak and strong human-body-wall models generated the aberration; both emulated the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.
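
    The element-to-reference correlation step can be illustrated by a toy discrete cross-correlation delay estimate (a sketch of the general technique only, not the authors' implementation):

    ```python
    def delay_estimate(reference: list[float], signal: list[float], max_lag: int) -> int:
        """Lag (in samples) that maximizes the cross-correlation of signal against reference."""
        def xcorr(lag: int) -> float:
            return sum(reference[i] * signal[i + lag]
                       for i in range(len(reference))
                       if 0 <= i + lag < len(signal))
        return max(range(-max_lag, max_lag + 1), key=xcorr)

    # A pulse delayed by 2 samples relative to the reference.
    ref = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0]
    sig = [0.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
    print(delay_estimate(ref, sig, max_lag=3))  # → 2
    ```

    In the adaptive-imaging loop, such per-element delay estimates would parametrize the TDA filter, the corrected signal would be retransmitted, and the estimate refined until convergence.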

  17. Violations of Grice's Maxims in The Prince and the Pauper Movie

    Directory of Open Access Journals (Sweden)

    Antonius Waget

    2016-12-01

    Full Text Available Proper responses must be provided by interlocutors to make conversation productive and meaningful. However, interlocutors do not always provide proper responses because they may not know the rules of conversation. Grice coins four maxims as general rules to govern daily conversation: Quantity, Quality, Relevance, and Manner. Conversation occurs in real daily interaction and also in the arts, including movies. The Prince and the Pauper movie is one of the media for human daily conversation. Some parts of the movie contain violations of Grice's maxims by the characters. Based on this background, the writer intends to explore violations of Grice's maxims in the movie and analyze the purposes of the violations. To achieve these objectives, the writer formulates two research problems: (1) Which of Grice's maxims are violated by the addressees in The Prince and the Pauper movie? (2) For what purposes do the addressees violate the maxims? The base of this research is a movie script as document; thus, the writer uses document analysis as the method of this research. Grounded on the analysis, the writer finds that the characters in the movie dialogues, especially the Prince, Tom Canty, the King, and the Earl of Hertford, violate all four of Grice's maxims. When failing to provide sufficient information, telling lies to their addressers, providing irrelevant glosses, and failing to be true, brief, univocal, and orderly, they respectively violate the maxims of Quantity, Quality, Relevance, and Manner. Moreover, the writer finds that the characters violate the maxims in order to mislead their counterparts, be polite, save face, avoid discussion, and communicate self-interest. DOI: https://doi.org/10.24071/llt.2015.180101

  18. General ion-optical correction element

    International Nuclear Information System (INIS)

    Ferguson, H.D.; Spencer, J.E.; Halbach, K.

    1975-07-01

    A general purpose type of multipole magnet is described which provides some unique advantages. It produces a very uniform dipole field which can be rotated about the longitudinal axis of the magnet. Higher order multipoles can also be rotated and can be excited simultaneously without the use of independent coils. A magnet having octupole geometry was built and shown to verify the basic ideas

  19. Shareholder, stakeholder-owner or broad stakeholder maximization

    DEFF Research Database (Denmark)

    Mygind, Niels

    2004-01-01

    With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating … including the shareholders of a company. Although it may be the ultimate goal for Corporate Social Responsibility to achieve this kind of maximization, broad stakeholder maximization is quite difficult to give a precise definition. There is no one-dimensional measure to add different stakeholder benefits … not traded on the market, and therefore there is no possibility for practical application. Broad stakeholder maximization in practical applications instead becomes satisfying certain stakeholder demands, so that the practical application will be stakeholder-owner maximization under constraints defined …

  20. On the maximal superalgebras of supersymmetric backgrounds

    International Nuclear Information System (INIS)

    Figueroa-O'Farrill, Jose; Hackett-Jones, Emily; Moutsopoulos, George; Simon, Joan

    2009-01-01

    In this paper we give a precise definition of the notion of a maximal superalgebra of certain types of supersymmetric supergravity backgrounds, including the Freund-Rubin backgrounds, and propose a geometric construction extending the well-known construction of its Killing superalgebra. We determine the structure of maximal Lie superalgebras and show that there is a finite number of isomorphism classes, all related via contractions from an orthosymplectic Lie superalgebra. We use the structure theory to show that maximally supersymmetric waves do not possess such a maximal superalgebra, but that the maximally supersymmetric Freund-Rubin backgrounds do. We perform the explicit geometric construction of the maximal superalgebra of AdS_4 × S^7 and find that it is isomorphic to osp(1|32). We propose an algebraic construction of the maximal superalgebra of any background asymptotic to AdS_4 × S^7 and we test this proposal by computing the maximal superalgebra of the M2-brane in its two maximally supersymmetric limits, finding agreement.

  1. Linear maps preserving maximal deviation and the Jordan structure of quantum systems

    International Nuclear Information System (INIS)

    Hamhalter, Jan

    2012-01-01

    In the algebraic approach to quantum theory, a quantum observable is given by an element of a Jordan algebra and a state of the system is modelled by a normalized positive functional on the underlying algebra. Maximal deviation of a quantum observable is the largest statistical deviation one can obtain in a particular state of the system. The main result of the paper shows that each linear bijective transformation between JBW algebras preserving maximal deviations is formed by a Jordan isomorphism or a minus Jordan isomorphism perturbed by a linear functional multiple of an identity. It shows that only one numerical statistical characteristic has the power to determine the Jordan algebraic structure completely. As a consequence, we obtain that only very special maps can preserve the diameter of the spectra of elements. Nonlinear maps preserving the pseudometric given by maximal deviation are also described. The results generalize hitherto known theorems on preservers of maximal deviation in the case of self-adjoint parts of von Neumann algebras proved by Molnár.

  2. A Linear Time Algorithm for the k Maximal Sums Problem

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jørgensen, Allan Grønlund

    2007-01-01

    Finding the sub-vector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k sub-vectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m^2·n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d−1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d−1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.
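
    For the special case k = 1 in one dimension, the classic linear-time scan (Kadane's algorithm; the paper's O(n + k) algorithm for general k is more involved) finds the maximum-sum sub-vector:

    ```python
    def max_sum_subvector(a: list[float]) -> float:
        """Largest sum over all non-empty contiguous sub-vectors, in O(n) time."""
        best = current = a[0]
        for x in a[1:]:
            current = max(x, current + x)  # extend the running sub-vector or restart at x
            best = max(best, current)
        return best

    print(max_sum_subvector([2, -4, 3, -1, 4, -6, 5]))  # → 6 (sub-vector [3, -1, 4])
    ```

    The k-version additionally has to enumerate the k best sums without materializing all O(n^2) candidate sub-vectors, which is where the optimal O(n + k) bound becomes nontrivial.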

  3. Growth-Maximizing Public Debt under Changing Demographics

    DEFF Research Database (Denmark)

    Bokan, Nikola; Hougaard Jensen, Svend E.; Hallett, Andrew Hughes

    2016-01-01

    This paper develops an overlapping-generations model to study the growth-maximizing level of public debt under conditions of demographic change. It is shown that the optimal debt level depends on a positive marginal productivity of public capital. In general, it also depends on the demographic parameters … will have to adjust its fiscal plans to accommodate those changes, most likely downward, if growth is to be preserved. An advantage of this model is that it allows us to determine in advance the way in which fiscal policies need to adjust as demographic parameters change.

  4. A GENERALIZED NON-LINEAR METHOD FOR DISTORTION CORRECTION AND TOP-DOWN VIEW CONVERSION OF FISH EYE IMAGES

    Directory of Open Access Journals (Sweden)

    Vivek Singh Bawa

    2017-06-01

    Full Text Available Advanced driver assistance systems (ADAS) have been developed to automate and modify vehicles for safety and a better driving experience. Among all computer vision modules in ADAS, 360-degree surround view generation of the immediate surroundings of the vehicle is very important, due to applications in on-road traffic assistance, parking assistance, etc. This paper presents a novel algorithm for fast and computationally efficient transformation of input fisheye images into the required top-down view. It also presents a generalized framework for generating the top-down view of images captured by cameras with fish-eye lenses mounted on vehicles, irrespective of pitch or tilt angle. The proposed approach comprises two major steps: correcting the fish-eye lens images to rectilinear images, and generating the top-view perspective of the corrected images. The images captured by the fish-eye lens possess barrel distortion, for which a nonlinear and non-iterative method is used. Thereafter, homography is used to obtain the top-down view of the corrected images. The paper also aims to reconstruct the vehicle's surroundings with a wider, distortion-free field of view and a camera-perspective-independent top-down view, at minimal computational cost, which is essential given the limited computing power available on vehicles.
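
    The paper's exact distortion model is not given in the abstract. As a hedged illustration of a non-iterative barrel-distortion correction, the widely used one-parameter division model rescales each pixel's distance from the distortion centre in closed form (the parameter values below are invented):

    ```python
    def undistort_point(x: float, y: float, cx: float, cy: float, lam: float) -> tuple[float, float]:
        """One-parameter division model: map a distorted pixel to its rectilinear position.

        (cx, cy) is the distortion centre; lam < 0 models barrel distortion.
        Non-iterative: the corrected radius follows in closed form from the distorted one,
        r_u = r_d / (1 + lam * r_d^2).
        """
        dx, dy = x - cx, y - cy
        r2 = dx * dx + dy * dy
        scale = 1.0 / (1.0 + lam * r2)
        return cx + dx * scale, cy + dy * scale

    # Points near the centre barely move; points far out are pushed outward (barrel removed).
    print(tuple(round(v, 6) for v in undistort_point(110.0, 100.0, 100.0, 100.0, -5e-5)))  # → (110.050251, 100.0)
    print(tuple(round(v, 6) for v in undistort_point(200.0, 100.0, 100.0, 100.0, -5e-5)))  # → (300.0, 100.0)
    ```

    The second step of the pipeline would then warp the rectified image with a planar homography to obtain the top-down view.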

  5. Maximally Symmetric Composite Higgs Models.

    Science.gov (United States)

    Csáki, Csaba; Ma, Teng; Shu, Jing

    2017-09-29

    Maximal symmetry is a novel tool for composite pseudo Goldstone boson Higgs models: it is a remnant of an enhanced global symmetry of the composite fermion sector involving a twisting with the Higgs field. Maximal symmetry has far-reaching consequences: it ensures that the Higgs potential is finite and fully calculable, and also minimizes the tuning. We present a detailed analysis of the maximally symmetric SO(5)/SO(4) model and comment on its observational consequences.

  6. COD correction for laser cooling at S-LSR

    International Nuclear Information System (INIS)

    Souda, Hikaru; Fujimoto, Shinji; Tongu, Hiromu; Shirai, Toshiyuki; Tanabe, Mikio; Ishikawa, Takehiro; Nakao, Masao; Ikegami, Masahiro; Wakita, Akihisa; Iwata, Soma; Fujimoto, Tetsuya; Takeuchi, Takeshi; Noda, Koji; Noda, Akira

    2008-01-01

    A closed orbit is corrected for single-turn injection to perform laser cooling experiments of a 40 keV 24Mg+ beam at the small laser-equipped storage ring (S-LSR). Closed orbit distortion (COD) corrections have been carried out using a downhill simplex method, and CODs of less than ±0.5 mm have been achieved throughout the whole circumference. The injection orbit and the CODs are optimized to pass through the two aperture holes in the alignment targets located in the laser cooling section with an algorithm to maximize beam lifetime. The CODs at the aperture holes are reduced to less than ±0.2 mm, assuring an overlap between the laser and the 24Mg+ ion beam.

  7. Maximal quantum Fisher information matrix

    International Nuclear Information System (INIS)

    Chen, Yu; Yuan, Haidong

    2017-01-01

    We study the existence of the maximal quantum Fisher information matrix in the multi-parameter quantum estimation, which bounds the ultimate precision limit. We show that when the maximal quantum Fisher information matrix exists, it can be directly obtained from the underlying dynamics. Examples are then provided to demonstrate the usefulness of the maximal quantum Fisher information matrix by deriving various trade-off relations in multi-parameter quantum estimation and obtaining the bounds for the scalings of the precision limit. (paper)

  8. Understanding Violations of Gricean Maxims in Preschoolers and Adults

    Directory of Open Access Journals (Sweden)

    Mako Okanda

    2015-07-01

    Full Text Available This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.

  9. Research of generalized wavelet transformations of Haar correctness in remote sensing of the Earth

    Science.gov (United States)

    Kazaryan, Maretta; Shakhramanyan, Mihail; Nedkov, Roumen; Richter, Andrey; Borisova, Denitsa; Stankova, Nataliya; Ivanova, Iva; Zaharinova, Mariana

    2017-10-01

    In this paper, Haar's generalized wavelet functions are applied to the problem of ecological monitoring by the method of remote sensing of the Earth. We study generalized Haar wavelet series and suggest the use of Tikhonov's regularization method for investigating their correctness. In the solution of this problem, an important role is played by classes of functions that were introduced and described in detail by I.M. Sobol for studying multidimensional quadrature formulas; these classes contain functions with rapidly convergent Haar wavelet series. A theorem on the stability and uniform convergence of the regularized summation function of the generalized Haar wavelet series of a function from this class with approximate coefficients is proved. The article also examines the problem of using orthogonal transformations in Earth remote sensing technologies for environmental monitoring. Remote sensing of the Earth makes it possible to receive information of medium and high spatial resolution from spacecraft and to conduct hyperspectral measurements; spacecraft carry tens or hundreds of spectral channels. To process the images, discrete orthogonal transforms, namely wavelet transforms, were used. The aim of the work is to apply the regularization method to one of the problems associated with remote sensing of the Earth and subsequently to process the satellite images through discrete orthogonal transformations, in particular, generalized Haar wavelet transforms. General methods of research: Tikhonov's regularization method, elements of mathematical analysis, the theory of discrete orthogonal transformations, and methods for decoding satellite images are used. Scientific novelty: the task of processing archival satellite images, in particular signal filtering, was investigated from the standpoint of ill-posed problems, and the regularization parameters for discrete orthogonal transformations were determined.
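
    A single level of the classical orthonormal discrete Haar transform (a generic sketch, not the paper's generalized Haar system) splits a signal into averages and details and reconstructs it exactly, illustrating the orthogonality that such image-processing pipelines rely on:

    ```python
    import math

    def haar_step(signal: list[float]) -> tuple[list[float], list[float]]:
        """One level of the orthonormal discrete Haar transform (length must be even)."""
        s = 1.0 / math.sqrt(2.0)
        averages = [s * (a + b) for a, b in zip(signal[0::2], signal[1::2])]
        details = [s * (a - b) for a, b in zip(signal[0::2], signal[1::2])]
        return averages, details

    def haar_inverse(averages: list[float], details: list[float]) -> list[float]:
        """Exact reconstruction from one Haar level (the transform is orthogonal)."""
        s = 1.0 / math.sqrt(2.0)
        out: list[float] = []
        for a, d in zip(averages, details):
            out.extend((s * (a + d), s * (a - d)))
        return out

    x = [4.0, 2.0, 5.0, 5.0]
    avg, det = haar_step(x)
    print([round(v, 6) for v in haar_inverse(avg, det)])  # → [4.0, 2.0, 5.0, 5.0]
    ```

    Filtering would shrink or threshold the detail coefficients before the inverse step; the regularization question studied in the paper concerns how such summation behaves when the coefficients are only known approximately.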

  10. On generally covariant quantum field theory and generalized causal and dynamical structures

    International Nuclear Information System (INIS)

    Bannier, U.

    1988-01-01

    We give an example of a generally covariant quasilocal algebra associated with the massive free field. Maximal, two-sided ideals of this algebra are algebraic representatives of external metric fields. In some sense, this algebra may be regarded as a concrete realization of Ekstein's ideas of presymmetry in quantum field theory. Using ideas from our example and from usual algebraic quantum field theory, we discuss a generalized scheme, in which maximal ideals are viewed as algebraic representatives of dynamical equations or Lagrangians. The considered frame is no quantum gravity, but may lead to further insight into the relation between quantum theory and space-time geometry. (orig.)

  11. Heterotic α′-corrections in Double Field Theory

    OpenAIRE

    Bedoya, Oscar (Instituto de Astronomía y Física del Espacio (CONICET-UBA), Ciudad Universitaria, Buenos Aires, Argentina); Marqués, Diego (Instituto de Astronomía y Física del Espacio (CONICET-UBA), Ciudad Universitaria, Buenos Aires, Argentina); Núñez, Carmen (Instituto de Astronomía y Física del Espacio (CONICET-UBA), Ciudad Universitaria, Buenos Aires, Argentina)

    2014-01-01

    We extend the generalized flux formulation of Double Field Theory to include all the first-order bosonic contributions to the α′ expansion of the heterotic string low energy effective theory. The generalized tangent space and duality group are enhanced by α′ corrections, and the gauge symmetries are generated by the usual (gauged) generalized Lie derivative in the extended space. The generalized frame receives derivative corrections through the spin connection with torsion, which is incorporated …

  12. Maximal entanglement of two spinor Bose-Einstein condensates

    OpenAIRE

    Jack, Michael W.; Yamashita, Makoto

    2005-01-01

    Starting with two weakly-coupled anti-ferromagnetic spinor condensates, we show that by changing the sign of the coefficient of the spin interaction, $U_{2}$, via an optically-induced Feshbach resonance one can create an entangled state consisting of two anti-correlated ferromagnetic condensates. This state is maximally entangled and a generalization of the Bell state from two anti-correlated spin-1/2 particles to two anti-correlated spin-$N/2$ atomic samples, where $N$ is the total number of atoms.

  13. Quantum corrections to potential energy surfaces and their influence on barriers

    International Nuclear Information System (INIS)

    Reinhard, P.G.; Goeke, K.W.; Bonn Univ.

    1980-01-01

    A microscopic theory suitable for the description of fission processes and other large-amplitude collective phenomena is presented. The approach makes use of an optimal collective path, which is constructed by means of adiabatic time-dependent Hartree-Fock (TDHF) techniques so as to show maximal de-coupling of collective and non-collective degrees of freedom. Although this involves a classical concept, the theory fully incorporates quantum effects associated with extracting a collective Schroedinger equation from adiabatic time-dependent Hartree-Fock theories (ATDHF). The quantum corrections are discussed extensively, and calculations in the two-centre shell model show, e.g., that they reduce the second barrier by 2 MeV and the life-time by a factor of 10^-7. The relationships of the presented quantized ATDHF approach to the random-phase approximation (RPA) and a generalized dynamic generator co-ordinate method are investigated. For the construction of the optimal fission path, simple step-by-step methods are suggested. (author)

  14. Correction: General optimization procedure towards the design of a new family of minimal parameter spin-component-scaled double-hybrid density functional theory.

    Science.gov (United States)

    Roch, Loïc M; Baldridge, Kim K

    2018-02-07

    Correction for 'General optimization procedure towards the design of a new family of minimal parameter spin-component-scaled double-hybrid density functional theory' by Loïc M. Roch and Kim K. Baldridge, Phys. Chem. Chem. Phys., 2017, 19, 26191-26200.

  15. Segmentation-free empirical beam hardening correction for CT

    Energy Technology Data Exchange (ETDEWEB)

    Schüller, Sören; Sawall, Stefan [German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg 69120 (Germany); Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich [Sirona Dental Systems GmbH, Fabrikstraße 31, 64625 Bensheim (Germany); Kachelrieß, Marc, E-mail: marc.kachelriess@dkfz.de [German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany)

    2015-02-15

    Purpose: The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. If using only the information of one single energy scan, there are two types of corrections. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the used spectrum, the detector response, the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physical-based technique are both relying on a segmentation of the present tissues inside the patient. The difficulty thereby is that beam hardening by itself, scatter, and other effects, which diminish the image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The herein proposed method works similar to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data, which are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. Methods: To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. 
    This step is essential for the …

  16. Segmentation-free empirical beam hardening correction for CT.

    Science.gov (United States)

    Schüller, Sören; Sawall, Stefan; Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich; Kachelrieß, Marc

    2015-02-01

    The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. If using only the information of one single energy scan, there are two types of corrections. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the used spectrum, the detector response, the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physical-based technique are both relying on a segmentation of the present tissues inside the patient. The difficulty thereby is that beam hardening by itself, scatter, and other effects, which diminish the image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The herein proposed method works similar to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data, which are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed
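
    Water precorrection, the baseline that the empirical method extends, can be sketched as a calibrated polynomial remapping of each measured line integral (the coefficients below are invented for illustration; the paper's EBHC works without such a per-scanner calibration):

    ```python
    def water_precorrect(q: float, coeffs: list[float]) -> float:
        """Map a measured polychromatic line integral q to a monochromatic equivalent.

        coeffs are the calibration polynomial's coefficients c1, c2, ...; for an ideal
        monochromatic beam the mapping is the identity (coeffs = [1.0]).
        """
        return sum(c * q ** (i + 1) for i, c in enumerate(coeffs))

    # Invented calibration: a small quadratic term restores the linear growth that
    # beam hardening suppressed in the raw measurement.
    raw = [0.0, 0.5, 1.0, 1.5, 2.0]
    corrected = [water_precorrect(q, [1.0, 0.08]) for q in raw]
    print([round(v, 4) for v in corrected])  # → [0.0, 0.52, 1.08, 1.68, 2.32]
    ```

    This corrects the cupping from the dominant (water-like) tissue class only; the streaks between dense objects that the abstract mentions require the higher-order, multi-material corrections.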

  17. Use of a General Magnetotherapy for Correction of the Lipoperoxidation Values in Patients with a Chronic Cervicitis Combined with a Chronic Adnexitis

    OpenAIRE

    Fatalieva G.G.; Chandra D'Mello R.

    2010-01-01

    The aim of the investigation is to assess the state of lipid peroxidation (LPO) in patients with chronic cervicitis combined with chronic adnexitis, and the possibility of using general magnetotherapy for its correction. Materials and Methods. 80 patients with chronic nonspecific cervicitis combined with chronic adnexitis were examined. General magnetotherapy was used in one of the groups together with antibacterial therapy. Results. It is established that a normalization of the disease c…

  18. Data-Driven Engineering of Social Dynamics: Pattern Matching and Profit Maximization.

    Science.gov (United States)

    Peng, Huan-Kai; Lee, Hao-Chih; Pan, Jia-Yu; Marculescu, Radu

    2016-01-01

    In this paper, we define a new problem related to social media, namely, the data-driven engineering of social dynamics. More precisely, given a set of observations from the past, we aim at finding the best short-term intervention that can lead to predefined long-term outcomes. Toward this end, we propose a general formulation that covers two useful engineering tasks as special cases, namely, pattern matching and profit maximization. By incorporating a deep learning model, we derive a solution using convex relaxation and quadratic-programming transformation. Moreover, we propose a data-driven evaluation method in place of the expensive field experiments. Using a Twitter dataset, we demonstrate the effectiveness of our dynamics engineering approach for both pattern matching and profit maximization, and study the multifaceted interplay among several important factors of dynamics engineering, such as solution validity, pattern-matching accuracy, and intervention cost. Finally, the method we propose is general enough to work with multi-dimensional time series, so it can potentially be used in many other applications.

  19. Data-Driven Engineering of Social Dynamics: Pattern Matching and Profit Maximization.

    Directory of Open Access Journals (Sweden)

    Huan-Kai Peng

    Full Text Available In this paper, we define a new problem related to social media, namely, the data-driven engineering of social dynamics. More precisely, given a set of observations from the past, we aim at finding the best short-term intervention that can lead to predefined long-term outcomes. Toward this end, we propose a general formulation that covers two useful engineering tasks as special cases, namely, pattern matching and profit maximization. By incorporating a deep learning model, we derive a solution using convex relaxation and quadratic-programming transformation. Moreover, we propose a data-driven evaluation method in place of the expensive field experiments. Using a Twitter dataset, we demonstrate the effectiveness of our dynamics engineering approach for both pattern matching and profit maximization, and study the multifaceted interplay among several important factors of dynamics engineering, such as solution validity, pattern-matching accuracy, and intervention cost. Finally, the method we propose is general enough to work with multi-dimensional time series, so it can potentially be used in many other applications.

  20. Data-Driven Engineering of Social Dynamics: Pattern Matching and Profit Maximization

    Science.gov (United States)

    Peng, Huan-Kai; Lee, Hao-Chih; Pan, Jia-Yu; Marculescu, Radu

    2016-01-01

    In this paper, we define a new problem related to social media, namely, the data-driven engineering of social dynamics. More precisely, given a set of observations from the past, we aim at finding the best short-term intervention that can lead to predefined long-term outcomes. Toward this end, we propose a general formulation that covers two useful engineering tasks as special cases, namely, pattern matching and profit maximization. By incorporating a deep learning model, we derive a solution using convex relaxation and quadratic-programming transformation. Moreover, we propose a data-driven evaluation method in place of the expensive field experiments. Using a Twitter dataset, we demonstrate the effectiveness of our dynamics engineering approach for both pattern matching and profit maximization, and study the multifaceted interplay among several important factors of dynamics engineering, such as solution validity, pattern-matching accuracy, and intervention cost. Finally, the method we propose is general enough to work with multi-dimensional time series, so it can potentially be used in many other applications. PMID:26771830

  1. Inclusive fitness maximization: An axiomatic approach.

    Science.gov (United States)

    Okasha, Samir; Weymark, John A; Bossert, Walter

    2014-06-07

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Publisher Correction: Evolutionary adaptations to new environments generally reverse plastic phenotypic changes.

    Science.gov (United States)

    Ho, Wei-Chin; Zhang, Jianzhi

    2018-02-21

    The originally published HTML version of this Article contained errors in the three equations in the Methods sub-section 'Metabolic network analysis', whereby the Greek letter eta (η) was inadvertently used in place of beta (β) during the production process. These errors have now been corrected in the HTML version of the Article; the PDF was correct at the time of publication.

  3. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  4. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    Science.gov (United States)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used. A versatile multi-function laser interferometer serves as the observer, measuring the machine's error functions. A systematic error map of the machine's workspace is produced from these error function measurements, and the error map leads to an error correction strategy. The article proposes a new method of forming the error correction strategy, based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within a maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
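
    The postprocessor-style compensation described above can be pictured with a minimal one-axis sketch (the grid values and the linear-interpolation scheme are assumptions for illustration; the article's method operates on a full volumetric error map):

```python
from bisect import bisect_right

def interp_error(x, grid, err):
    """Linearly interpolate the measured positioning error at coordinate x
    from calibration points (grid[i], err[i]) produced by an interferometer."""
    if x <= grid[0]:
        return err[0]
    if x >= grid[-1]:
        return err[-1]
    i = bisect_right(grid, x) - 1
    t = (x - grid[i]) / (grid[i + 1] - grid[i])
    return err[i] + t * (err[i + 1] - err[i])

def compensate(x, grid, err):
    """Postprocessor-style correction: command x minus its predicted error
    so the axis lands on the intended target."""
    return x - interp_error(x, grid, err)
```

    A full implementation would interpolate a 3-D error vector over the workspace; the one-axis version only shows the shape of the correction.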

  5. Third-order nonlinear differential operators preserving invariant subspaces of maximal dimension

    International Nuclear Information System (INIS)

    Qu Gai-Zhu; Zhang Shun-Li; Li Yao-Long

    2014-01-01

    In this paper, third-order nonlinear differential operators are studied. It is shown that they are quadratic forms when they preserve invariant subspaces of maximal dimension. A complete description of third-order quadratic operators with constant coefficients is obtained. One example is given to derive special solutions for evolution equations with third-order quadratic operators. (general)

  6. Generation and Identification of Ordinary Differential Equations of Maximal Symmetry Algebra

    Directory of Open Access Journals (Sweden)

    J. C. Ndogmo

    2016-01-01

    Full Text Available An effective method for generating linear ordinary differential equations of maximal symmetry in their most general form is found, and an explicit expression for the point transformation reducing the equation to its canonical form is obtained. New expressions for the general solution are also found, as well as several identification and other results and a direct proof of the fact that a linear ordinary differential equation is iterative if and only if it is reducible to the canonical form by a point transformation. New classes of solvable equations parameterized by an arbitrary function are also found, together with simple algebraic expressions for the corresponding general solution.

  7. Energy localization in maximally entangled two- and three-qubit phase space

    International Nuclear Information System (INIS)

    Pashaev, Oktay K; Gurkan, Zeynep N

    2012-01-01

    Motivated by the Möbius transformation for symmetric points under the generalized circle in the complex plane, the system of symmetric spin coherent states corresponding to antipodal qubit states is introduced. In terms of these states, we construct the maximally entangled complete set of two-qubit coherent states, which in the limiting cases reduces to the Bell basis. A specific property of our symmetric coherent states is that they never become unentangled for any value of ψ in the complex plane. Entanglement quantifications of our states are given by the reduced density matrix and the concurrence determinant, and it is shown that our basis is maximally entangled. Universal one- and two-qubit gates in this new coherent state basis are calculated. As an application, we find the Q symbol of the XYZ model Hamiltonian operator H as an average energy function in maximally entangled two- and three-qubit phase space. It shows a regular finite-energy localized structure with specific local extremum points. The concurrence and fidelity of quantum evolution with dimerization of double periodic patterns are given. (paper)

  8. A Theory of the Perturbed Consumer with General Budgets

    DEFF Research Database (Denmark)

    McFadden, Daniel L; Fosgerau, Mogens

    We consider demand systems for utility-maximizing consumers facing general budget constraints whose utilities are perturbed by additive linear shifts in marginal utilities. Budgets are required to be compact but are not required to be convex. We define demand generating functions (DGF) whose subgradients with respect to these perturbations are convex hulls of the utility-maximizing demands. We give necessary as well as sufficient conditions for DGF to be consistent with utility maximization, and establish under quite general conditions that utility-maximizing demands are almost everywhere single-valued and smooth in their arguments. We also give sufficient conditions for integrability of perturbed demand. Our analysis provides a foundation for applications of consumer theory to problems with nonlinear budget constraints.

  9. Maximal Entanglement in High Energy Physics

    Directory of Open Access Journals (Sweden)

    Alba Cervera-Lierta, José I. Latorre, Juan Rojo, Luca Rottoli

    2017-11-01

    Full Text Available We analyze how maximal entanglement is generated at the fundamental level in QED by studying correlations between helicity states in tree-level scattering processes at high energy. We demonstrate that two mechanisms for the generation of maximal entanglement are at work: (i) $s$-channel processes where the virtual photon carries equal overlaps of the helicities of the final state particles, and (ii) the indistinguishable superposition between $t$- and $u$-channels. We then study whether requiring maximal entanglement constrains the coupling structure of QED and the weak interactions. In the case of photon-electron interactions unconstrained by gauge symmetry, we show how this requirement allows us to reproduce QED. For $Z$-mediated weak scattering, the maximal entanglement principle leads to non-trivial predictions for the value of the weak mixing angle $\theta_W$. Our results are a first step towards understanding the connections between maximal entanglement and the fundamental symmetries of high-energy physics.

  10. Superstring threshold corrections to Yukawa couplings

    International Nuclear Information System (INIS)

    Antoniadis, I.; Taylor, T.R.

    1992-12-01

    A general method of computing string corrections to the Kaehler metric and Yukawa couplings is developed at the one-loop level for a general compactification of the heterotic superstring theory. It also provides a direct determination of the so-called Green-Schwarz term. The matter metric has an infrared divergent part which reproduces the field-theoretical anomalous dimensions, and a moduli-dependent part which gives rise to threshold corrections in the physical Yukawa couplings. Explicit expressions are derived for symmetric orbifold compactifications. (author). 20 refs

  11. Maximal Inequalities for Dependent Random Variables

    DEFF Research Database (Denmark)

    Hoffmann-Jorgensen, Jorgen

    2016-01-01

    Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k. Then a maximal inequality gives conditions ensuring that the maximal partial sum M_n = max(1 ...

  12. An ethical justification of profit maximization

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2010-01-01

    In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain forms of profit (and utility) maximizing actions are ruled out, e.g., by behavioural norms or formal institutions.

  13. Scalable improvement of SPME multipolar electrostatics in anisotropic polarizable molecular mechanics using a general short-range penetration correction up to quadrupoles.

    Science.gov (United States)

    Narth, Christophe; Lagardère, Louis; Polack, Étienne; Gresh, Nohad; Wang, Qiantao; Bell, David R; Rackers, Joshua A; Ponder, Jay W; Ren, Pengyu Y; Piquemal, Jean-Philip

    2016-02-15

    We propose a general coupling of the Smooth Particle Mesh Ewald (SPME) approach for distributed multipoles to a short-range charge penetration correction modifying the charge-charge, charge-dipole and charge-quadrupole energies. Such an approach significantly improves electrostatics when compared to ab initio values and has been calibrated on Symmetry-Adapted Perturbation Theory reference data. Various neutral molecular dimers have been tested, and results on the complexes of mono- and divalent cations with a water ligand are also provided. Transferability of the correction is addressed in the context of the implementation of the AMOEBA and SIBFA polarizable force fields in the TINKER-HP software. As the choices of the multipolar distribution are discussed, conclusions are drawn for future penetration-corrected polarizable force fields, highlighting the mandatory need for non-spurious procedures to obtain well-balanced and physically meaningful distributed moments. Finally, scalability and parallelism of the short-range corrected SPME approach are addressed, demonstrating that the damping function is computationally affordable and accurate for molecular dynamics simulations of complex bio- or bioinorganic systems in periodic boundary conditions. Copyright © 2016 Wiley Periodicals, Inc.

  14. 4 CFR 28.131 - Corrective action proceedings.

    Science.gov (United States)

    2010-01-01

    ... Accounts GOVERNMENT ACCOUNTABILITY OFFICE GENERAL PROCEDURES GOVERNMENT ACCOUNTABILITY OFFICE PERSONNEL APPEALS BOARD; PROCEDURES APPLICABLE TO CLAIMS CONCERNING EMPLOYMENT PRACTICES AT THE GOVERNMENT ACCOUNTABILITY OFFICE Corrective Action, Disciplinary and Stay Proceedings § 28.131 Corrective action proceedings...

  15. Maximization techniques for oilfield development profits

    International Nuclear Information System (INIS)

    Lerche, I.

    1999-01-01

    In 1981 Nind provided a quantitative procedure for estimating the optimum number of development wells to emplace on an oilfield to maximize profit. Nind's treatment assumed a steady selling price, that all wells were placed in production simultaneously, and that each well's production profile was identical, following a simple exponential decline with time. This paper lifts these restrictions to allow for price fluctuations, time-varying emplacement of wells, and production rates that are more in line with actual production records than a simple exponential decline curve. As a consequence, it is possible to design production rate strategies, correlated with price fluctuations, so as to maximize the present-day worth of a field. For price fluctuations that occur on a time-scale rapid compared to inflation rates, it is appropriate to have production rates correlate directly with such price fluctuations. The same strategy does not apply for price fluctuations occurring on a time-scale long compared to inflation rates: for small amplitudes in the price fluctuations, it is best to sell as much product as early as possible to overcome inflation, while for large-amplitude fluctuations the best strategy is to sell product as early as possible but to do so mainly on price upswings. Examples are provided to show how these generalizations of Nind's (1981) formula change the complexion of oilfield development optimization. (author)

  16. Inclusive Fitness Maximization:An Axiomatic Approach

    OpenAIRE

    Okasha, Samir; Weymark, John; Bossert, Walter

    2014-01-01

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of qu...

  17. Does mental exertion alter maximal muscle activation?

    Directory of Open Access Journals (Sweden)

    Vianney eRozand

    2014-09-01

    Full Text Available Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed, in randomized order, three separate mental exertion conditions lasting 27 minutes each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), and (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with ten intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 minutes). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors.

  18. Coding for Parallel Links to Maximize the Expected Value of Decodable Messages

    Science.gov (United States)

    Klimesh, Matthew A.; Chang, Christopher S.

    2011-01-01

    When multiple parallel communication links are available, it is useful to consider link-utilization strategies that provide tradeoffs between reliability and throughput. Interesting cases arise when there are three or more available links. Under the model considered, the links have known probabilities of being in working order, and each link has a known capacity. The sender has a number of messages to send to the receiver. Each message has a size and a value (i.e., a worth or priority). Messages may be divided into pieces arbitrarily, and the value of each piece is proportional to its size. The goal is to choose combinations of messages to send on the links so that the expected value of the messages decodable by the receiver is maximized. There are three parts to the innovation: (1) Applying coding to parallel links under the model; (2) Linear programming formulation for finding the optimal combinations of messages to send on the links; and (3) Algorithms for assisting in finding feasible combinations of messages, as support for the linear programming formulation. There are similarities between this innovation and methods developed in the field of network coding. However, network coding has generally been concerned with either maximizing throughput in a fixed network, or robust communication of a fixed volume of data. In contrast, under this model, the throughput is expected to vary depending on the state of the network. Examples of error-correcting codes that are useful under this model but which are not needed under previous models have been found. This model can represent either a one-shot communication attempt, or a stream of communications. Under the one-shot model, message sizes and link capacities are quantities of information (e.g., measured in bits), while under the communications stream model, message sizes and link capacities are information rates (e.g., measured in bits/second). This work has the potential to increase the value of data returned from
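
    A toy version of the allocation problem (without the cross-link coding or the linear programming formulation of the innovation, and under the simplifying assumption that every delivered piece contributes its value independently) can be sketched as a greedy fractional assignment:

```python
# Toy sketch of the parallel-link model (assumptions: messages are divisible,
# piece value is proportional to size, link i delivers its contents with
# probability p[i], and no coding across links). Greedy heuristic: place the
# highest value-density messages on the most reliable links.

def assign(messages, links):
    """messages: list of (size, value); links: list of (capacity, p_working).
    Returns (plan, expected_value); plan entries are (link, size, exp_value)."""
    msgs = sorted(messages, key=lambda m: m[1] / m[0], reverse=True)
    lnks = sorted(enumerate(links), key=lambda l: l[1][1], reverse=True)
    plan, expected, mi = [], 0.0, 0
    if not msgs:
        return plan, expected
    remaining_size, remaining_value = msgs[0]
    for li, (cap, p) in lnks:
        room = cap
        while room > 0 and mi < len(msgs):
            send = min(room, remaining_size)
            # value of the piece is proportional to its size
            frac_value = remaining_value * send / remaining_size
            plan.append((li, send, p * frac_value))
            expected += p * frac_value
            room -= send
            if send == remaining_size:
                mi += 1
                if mi < len(msgs):
                    remaining_size, remaining_value = msgs[mi]
            else:
                remaining_size -= send
                remaining_value -= frac_value
    return plan, expected
```

    This heuristic is not optimal in the general setting of the innovation, where error-correcting codes spread pieces across links and a linear program selects the combinations; it only illustrates the tradeoff the abstract describes.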

  19. On maximal surfaces in asymptotically flat space-times

    International Nuclear Information System (INIS)

    Bartnik, R.; Chrusciel, P.T.; O Murchadha, N.

    1990-01-01

    Existence of maximal and 'almost maximal' hypersurfaces in asymptotically flat space-times is established under boundary conditions weaker than those considered previously. We show in particular that every vacuum evolution of asymptotically flat data for Einstein equations can be foliated by slices maximal outside a spatially compact set and that every (strictly) stationary asymptotically flat space-time can be foliated by maximal hypersurfaces. Amongst other uniqueness results, we show that maximal hypersurface can be used to 'partially fix' an asymptotic Poincare group. (orig.)

  20. Insulin resistance and maximal oxygen uptake

    DEFF Research Database (Denmark)

    Seibaek, Marie; Vestergaard, Henrik; Burchardt, Hans

    2003-01-01

    BACKGROUND: Type 2 diabetes, coronary atherosclerosis, and physical fitness all correlate with insulin resistance, but the relative importance of each component is unknown. HYPOTHESIS: This study was undertaken to determine the relationship between insulin resistance, maximal oxygen uptake......, and the presence of either diabetes or ischemic heart disease. METHODS: The study population comprised 33 patients with and without diabetes and ischemic heart disease. Insulin resistance was measured by a hyperinsulinemic euglycemic clamp; maximal oxygen uptake was measured during a bicycle exercise test. RESULTS......: There was a strong correlation between maximal oxygen uptake and insulin-stimulated glucose uptake (r = 0.7, p = 0.001), and maximal oxygen uptake was the only factor of importance for determining insulin sensitivity in a model, which also included the presence of diabetes and ischemic heart disease. CONCLUSION...

  1. ICT: isotope correction toolbox.

    Science.gov (United States)

    Jungreuthmayer, Christian; Neubauer, Stefan; Mairinger, Teresa; Zanghellini, Jürgen; Hann, Stephan

    2016-01-01

    Isotope tracer experiments are an invaluable technique to analyze and study the metabolism of biological systems. However, isotope labeling experiments are often affected by naturally abundant isotopes, especially in cases where mass spectrometric methods make use of derivatization. The correction of these additive interferences, in particular for complex isotopic systems, is numerically challenging and still an emerging field of research. When positional information is generated via collision-induced dissociation, even more complex calculations for isotopic interference correction are necessary. So far, no freely available tools can handle tandem mass spectrometry data. We present the isotope correction toolbox, a program that corrects tandem mass isotopomer data from tandem mass spectrometry experiments. The isotope correction toolbox is written in the multi-platform programming language Perl and can therefore be used on all commonly available computer platforms. Source code and documentation can be freely obtained under the Artistic License or the GNU General Public License from: https://github.com/jungreuc/isotope_correction_toolbox/ {christian.jungreuthmayer@boku.ac.at,juergen.zanghellini@boku.ac.at} Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
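
    For intuition about what such a correction does, here is a generic natural-abundance correction for 13C (an illustration of the principle, not the ICT implementation, which additionally handles tandem MS data): the measured mass-isotopomer distribution is the true labeling convolved with natural abundance, so correction solves a lower-triangular linear system.

```python
from math import comb

# Generic 13C natural-abundance correction sketch. M[i][j] is the probability
# that a molecule with j tracer-labeled carbons appears at mass shift i,
# because natural 13C in the remaining unlabeled positions is binomially
# distributed. Correction inverts this lower-triangular system.

def correction_matrix(n_carbons, p13c=0.0107):
    n = n_carbons + 1
    M = [[0.0] * n for _ in range(n)]
    for j in range(n):
        unlabeled = n_carbons - j
        for k in range(unlabeled + 1):
            M[j + k][j] = comb(unlabeled, k) * p13c**k * (1 - p13c)**(unlabeled - k)
    return M

def correct(measured, p13c=0.0107):
    """Recover the tracer-derived distribution by forward substitution."""
    n = len(measured) - 1
    M = correction_matrix(n, p13c)
    x = [0.0] * (n + 1)
    for i in range(n + 1):
        x[i] = (measured[i] - sum(M[i][j] * x[j] for j in range(i))) / M[i][i]
    return x
```

    Derivatization adds further atoms (C, H, Si, ...) whose abundances enter the same convolution, which is where tools like ICT go beyond this sketch.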

  2. POLITENESS MAXIM OF MAIN CHARACTER IN SECRET FORGIVEN

    Directory of Open Access Journals (Sweden)

    Sang Ayu Isnu Maharani

    2017-06-01

    Full Text Available The maxims of politeness are an interesting subject to discuss, since politeness has been instilled in us since childhood. We are obliged to be polite to everyone, in both speech and action. Somehow we manage to show politeness in our spoken expression even though our intention might not be so polite; for example, we must appreciate others' opinions even when we object to them. In this article the analysis of politeness is based on the maxims proposed by Leech, who distinguished six types of politeness maxims. The discussion shows that the main characters (Kristen and Kami) use all types of maxims in their conversation; the most commonly used are the approbation maxim and the agreement maxim.

  3. Generalized EMV-Effect Algebras

    Science.gov (United States)

    Borzooei, R. A.; Dvurečenskij, A.; Sharafi, A. H.

    2018-04-01

    Recently in Dvurečenskij and Zahiri (2017), new algebraic structures, called EMV-algebras which generalize both MV-algebras and generalized Boolean algebras, were introduced. We present equivalent conditions for EMV-algebras. In addition, we define a partial algebraic structure, called a generalized EMV-effect algebra, which is close to generalized MV-effect algebras. Finally, we show that every generalized EMV-effect algebra is either an MV-effect algebra or can be embedded into an MV-effect algebra as a maximal ideal.

  4. Crystallographic cut that maximizes of the birefringence in photorefractive crystals

    OpenAIRE

    Rueda-Parada, Jorge Enrique

    2017-01-01

    The electro-optical birefringence effect depends on the crystal type, the crystal cut, the applied electric field, and the incidence direction of light on the principal crystal faces. A study of maximizing the birefringence in photorefractive crystals of cubic crystallographic symmetry, in terms of these parameters, is presented. General analytical expressions for the birefringence were obtained, from which the birefringence can be established for any type of cut. A new crystallographic cut was en...

  5. General linear-optical quantum state generation scheme: Applications to maximally path-entangled states

    International Nuclear Information System (INIS)

    VanMeter, N. M.; Lougovski, P.; Dowling, Jonathan P.; Uskov, D. B.; Kieling, K.; Eisert, J.

    2007-01-01

    We introduce schemes for linear-optical quantum state generation. A quantum state generator is a device that prepares a desired quantum state using product inputs from photon sources, linear-optical networks, and postselection using photon counters. We show that this device can be concisely described in terms of polynomial equations and unitary constraints. We illustrate the power of this language by applying the Groebner-basis technique along with the notion of vacuum extensions to solve the problem of how to construct a quantum state generator analytically for any desired state, and use methods of convex optimization to identify bounds to success probabilities. In particular, we disprove a conjecture concerning the preparation of the maximally path-entangled |n,0>+|0,n> (NOON) state by providing a counterexample using these methods, and we derive a new upper bound on the resources required for NOON-state generation

  6. The properties and interrelationships of various force-time parameters during maximal repeated rhythmic grip.

    Science.gov (United States)

    Nakada, Masakatsu; Demura, Shinichi; Yamaji, Shunsuke

    2007-01-01

    The purpose of this study was to examine the properties and interrelationships of various force-time parameters, including the inflection point of the rate of decline in force, during a maximal repeated rhythmic grip. Fifteen healthy males (age M=21.5, SD=2.1 yr; height M=172.4, SD=5.7 cm; body mass M=68.2, SD=9.2 kg) participated in this study. Subjects performed a repeated rhythmic grip with maximal effort at a target frequency of 30 grips.min(-1) for 6 min. The force value decreased linearly and markedly to about 70% of maximal strength over the first 55 s of the maximal repeated rhythmic grip, and then decreased moderately. Because all parameters showed fair or good correlations between 3 min and 6 min, they are considered able to evaluate muscle endurance adequately over 3 min instead of 6 min. However, there were significant differences between 3 min and 6 min in the integrated area, the final force, the decrement rate constant (k) obtained by fitting the decreasing force data to y = ae^(-kx) + b, and the force at the point of maximal difference between the force curve and a straight line drawn from the peak force to the final force. These parameters may generally vary with the length of the steady state, namely, the measurement time. The final force value before finishing and the decrement rate constant (k) reflect the latter phase of the maximal repeated rhythmic grip. Although many parameters show relatively high mutual correlations, the rate constant (k) shows relatively low correlations with the other parameters. We inferred that the time taken to decrease to 80% of maximal strength and the amount of force decrement over the first 1 min reflect the linear decrease in the initial phase.
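
    The fit mentioned in the abstract, y = ae^(-kx) + b, can be reproduced with a small sketch (the grid search over k is an illustrative choice, not necessarily the authors' fitting procedure): for each candidate k the model is linear in a and b, so ordinary least squares yields the best a and b in closed form.

```python
import math

# Illustrative fit of y = a*exp(-k*x) + b to force-decline data: scan a grid
# of candidate rate constants k; for each k the model is linear in (a, b),
# so solve the 2x2 least-squares normal equations and keep the best residual.

def fit_exp_decay(xs, ys, k_grid):
    """Return (a, b, k) minimizing sum((a*exp(-k*x) + b - y)^2) over k_grid."""
    best = None
    n = len(xs)
    for k in k_grid:
        e = [math.exp(-k * x) for x in xs]
        se, sy = sum(e), sum(ys)
        see = sum(v * v for v in e)
        sey = sum(v * y for v, y in zip(e, ys))
        denom = n * see - se * se
        if abs(denom) < 1e-12:
            continue
        a = (n * sey - se * sy) / denom
        b = (sy - a * se) / n
        rss = sum((a * ev + b - y) ** 2 for ev, y in zip(e, ys))
        if best is None or rss < best[0]:
            best = (rss, a, b, k)
    return best[1], best[2], best[3]
```

    In this parameterization, b is the plateau force the curve approaches in the latter phase and k is the decrement rate constant, matching the abstract's interpretation of the two parameters.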

  7. The one-sided Ap conditions and local maximal operator

    Czech Academy of Sciences Publication Activity Database

    Bernardis, A.L.; Gogatishvili, Amiran; Martin-Reyes, F. J.; Ortega Salvador, P.; Pick, L.

    Roč. 55, č. 1 ( 2012 ), s. 79-104 ISSN 0013-0915 R&D Projects: GA ČR GA201/08/0383; GA ČR GA201/05/2033 Institutional research plan: CEZ:AV0Z10190503 Keywords : one-sided Ap conditions * one-sided local maximal operator * quasi-Banach function spaces * variable-exponent Lebesgue spaces Subject RIV: BA - General Mathematics Impact factor: 0.561, year: 2012 http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=8477114&fulltextType=RA&fileId=S0013091510000635

  8. Natural maximal νμ-ντ mixing

    International Nuclear Information System (INIS)

    Wetterich, C.

    1999-01-01

    The naturalness of maximal mixing between muon- and tau-neutrinos is investigated. A spontaneously broken nonabelian generation symmetry can explain a small parameter which governs the deviation from maximal mixing. In many cases all three neutrino masses are almost degenerate. Maximal ν_μ-ν_τ mixing suggests that the leading contribution to the light neutrino masses arises from the expectation value of a heavy weak triplet rather than from the seesaw mechanism. In this scenario the deviation from maximal mixing is predicted to be less than about 1%. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  9. Gaussian maximally multipartite-entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio

    2009-12-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.

  10. Gaussian maximally multipartite-entangled states

    International Nuclear Information System (INIS)

    Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio; Lupo, Cosmo; Mancini, Stefano

    2009-01-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.

  11. Utility maximization and mode of payment

    NARCIS (Netherlands)

    Koning, R.H.; Ridder, G.; Heijmans, R.D.H.; Pollock, D.S.G.; Satorra, A.

    2000-01-01

    The implications of stochastic utility maximization in a model of choice of payment are examined. Three types of compatibility with utility maximization are distinguished: global compatibility, local compatibility on an interval, and local compatibility on a finite set of points.

  12. Quantum corrections to Schwarzschild black hole

    Energy Technology Data Exchange (ETDEWEB)

    Calmet, Xavier; El-Menoufi, Basem Kamal [University of Sussex, Department of Physics and Astronomy, Brighton (United Kingdom)

    2017-04-15

    Using effective field theory techniques, we compute quantum corrections to spherically symmetric solutions of Einstein's gravity and focus in particular on the Schwarzschild black hole. Quantum modifications are covariantly encoded in a non-local effective action. We work to quadratic order in curvatures simultaneously taking local and non-local corrections into account. Looking for solutions perturbatively close to that of classical general relativity, we find that an eternal Schwarzschild black hole remains a solution and receives no quantum corrections up to this order in the curvature expansion. In contrast, the field of a massive star receives corrections which are fully determined by the effective field theory. (orig.)

  13. Corrections for criterion reliability in validity generalization: The consistency of Hermes, the utility of Midas

    Directory of Open Access Journals (Sweden)

    Jesús F. Salgado

    2016-04-01

    Full Text Available There is criticism in the literature about the use of interrater coefficients to correct for criterion reliability in validity generalization (VG) studies, disputing whether .52 is an accurate and non-dubious estimate of the interrater reliability of overall job performance (OJP) ratings. We present a second-order meta-analysis of three independent meta-analytic studies of the interrater reliability of job performance ratings and make a number of comments and reflections on LeBreton et al.'s paper. The results of our meta-analysis indicate that the interrater reliability for a single rater is .52 (k = 66, N = 18,582, SD = .105). Our main conclusions are: (a) the value of .52 is an accurate estimate of the interrater reliability of overall job performance for a single rater; (b) it is not reasonable to conclude that past VG studies that used .52 as the criterion reliability value have a less than secure statistical foundation; (c) based on interrater reliability, test-retest reliability, and coefficient alpha, supervisor ratings are a useful and appropriate measure of job performance and can be confidently used as a criterion; (d) validity correction for criterion unreliability has been unanimously recommended by "classical" psychometricians and I/O psychologists as the proper way to estimate predictor validity, and is still recommended at present; (e) the substantive contribution of VG procedures to inform HRM practices in organizations should not be lost in these technical points of debate.

  14. Development and Application of Tools for MRI Analysis - A Study on the Effects of Exercise in Patients with Alzheimer's Disease and Generative Models for Bias Field Correction in MR Brain Imaging

    DEFF Research Database (Denmark)

    Larsen, Christian Thode

    in several cognitive performance measures, including mental speed, attention and verbal fluency. MRI suffers from an image artifact often referred to as the "bias field". This effect complicates automated analysis of the images. For this reason, bias field correction is typically an early preprocessing step...... as a "histogram sharpening" method, actually employs an underlying generative model, and that the bias field is estimated using an algorithm that is identical to generalized expectation maximization, but relies on heuristic parameter updates. The thesis progresses to present a new generative model...

  15. Generalized second law of thermodynamics for non-canonical scalar field model with corrected-entropy

    International Nuclear Information System (INIS)

    Das, Sudipta; Mamon, Abdulla Al; Debnath, Ujjal

    2015-01-01

    In this work, we have considered a non-canonical scalar field dark energy model in the framework of a flat FRW background. It has also been assumed that the dark matter sector interacts with the non-canonical dark energy sector through some interaction term. Using the solutions for this interacting non-canonical scalar field dark energy model, we have investigated the validity of the generalized second law (GSL) of thermodynamics in various scenarios using the first law and the area law of thermodynamics. For this purpose, we have assumed two types of horizons, viz. the apparent horizon and the event horizon, for the universe, and using the first law of thermodynamics, we have examined the validity of GSL on both the apparent and event horizons. Next, we have considered two types of entropy-corrections on the apparent and event horizons. Using the modified area law, we have examined the validity of GSL of thermodynamics on the apparent and event horizons under some restrictions on the model parameters. (orig.)

  16. Maximization of energy in the output of a linear system

    International Nuclear Information System (INIS)

    Dudley, D.G.

    1976-01-01

    A time-limited signal which, when passed through a linear system, maximizes the total output energy is considered. Previous work has shown that the solution is given by the eigenfunction associated with the maximum eigenvalue in a Hilbert-Schmidt integral equation. Analytical results are available for the case where the transfer function is a low-pass filter. This work is extended by obtaining a numerical solution to the integral equation which allows results for reasonably general transfer functions
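
The numerical route described, discretizing the Hilbert-Schmidt kernel and extracting its dominant eigenpair, can be sketched as follows for the low-pass (sinc) kernel case. The bandwidth W, grid size, and use of plain power iteration are illustrative assumptions, not the paper's method:

```python
import math

def top_eigenpair(A, iters=500):
    """Largest-eigenvalue pair of a symmetric PSD matrix by power iteration."""
    n = len(A)
    v = [1.0 / math.sqrt(n)] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
        lam = norm
    return lam, v

# Discretize the low-pass kernel K(t,s) = sin(W(t-s)) / (pi*(t-s)) on [-1, 1];
# the top eigenvector approximates the optimal time-limited input signal.
n, W = 80, 2.0
h = 2.0 / n
ts = [-1.0 + (i + 0.5) * h for i in range(n)]

def K(t, s):
    # Diagonal is the sinc limit W/pi
    return W / math.pi if t == s else math.sin(W * (t - s)) / (math.pi * (t - s))

A = [[K(ti, tj) * h for tj in ts] for ti in ts]
lam, v = top_eigenpair(A)
```

The maximum eigenvalue lam is the fraction of input energy the filter passes; for this kernel it lies strictly between 0 and 1.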

  17. Activity versus outcome maximization in time management.

    Science.gov (United States)

    Malkoc, Selin A; Tonietto, Gabriela N

    2018-04-30

    Feeling time-pressed has become ubiquitous. Time management strategies have emerged to help individuals fit in more of their desired and necessary activities. We provide a review of these strategies. In doing so, we distinguish between two, often competing, motives people have in managing their time: activity maximization and outcome maximization. The emerging literature points to an important dilemma: a given strategy that maximizes the number of activities might be detrimental to outcome maximization. We discuss such factors that might hinder performance in work tasks and enjoyment in leisure tasks. Finally, we provide theoretically grounded recommendations that can help balance these two important goals in time management. Published by Elsevier Ltd.

  18. Software development with C++ maximizing reuse with object technology

    CERN Document Server

    Nielsen, Kjell

    2014-01-01

    Software Development with C++: Maximizing Reuse with Object Technology is about software development and object-oriented technology (OT), with applications implemented in C++. The basis for any software development project of complex systems is the process, rather than an individual method, which simply supports the overall process. This book is not intended as a general, all-encompassing treatise on OT. The intent is to provide practical information that is directly applicable to a development project. Explicit guidelines are offered for the infusion of OT into the various development phases.

  19. Maximizing Entropy over Markov Processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2013-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code....
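
As a minimal illustration of the entropy objective only (not the paper's Interval Markov Chain synthesis), the entropy rate of a fixed Markov chain, H = Σ_i π_i · H(P_i), can be computed directly; the example chains are assumptions chosen for a known answer:

```python
import math

def stationary(P, iters=1000):
    """Stationary distribution of a row-stochastic matrix by fixed-point iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_rate(P):
    """Entropy rate H = sum_i pi_i * H(row_i) in bits per step."""
    pi = stationary(P)
    h = 0.0
    for i, row in enumerate(P):
        h += pi[i] * sum(-p * math.log2(p) for p in row if p > 0)
    return h

# A fair-coin chain attains the maximal rate of 1 bit per step;
# a sticky (biased) chain attains strictly less.
fair = [[0.5, 0.5], [0.5, 0.5]]
biased = [[0.9, 0.1], [0.1, 0.9]]
```

Maximizing this quantity over the transition probabilities allowed by an interval specification is the optimization problem the paper addresses.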

  20. Maximizing entropy over Markov processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2014-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...

  1. HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL

    CERN Document Server

    HR Division

    2000-01-01

    Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maximal, has changed significantly. An adjustment of the amounts of the reimbursement maximal and the fixed contributions is therefore necessary, as from 1 January 2000. Reimbursement maximal: The revised reimbursement maximal will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN. Fixed contributions: The fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions): voluntarily insured member of the personnel, with complete coverage: 815,- (was 803,- in 1999); voluntarily insured member of the personnel, with reduced coverage: 407,- (was 402,- in 1999); voluntarily insured no longer dependent child: 326,- (was 321...

  2. GUP parameter from quantum corrections to the Newtonian potential

    Energy Technology Data Exchange (ETDEWEB)

    Scardigli, Fabio, E-mail: fabio@phys.ntu.edu.tw [Dipartimento di Matematica, Politecnico di Milano, Piazza L. da Vinci 32, 20133 Milano (Italy); Department of Applied Mathematics, University of Waterloo, Ontario N2L 3G1 (Canada); Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan); Lambiase, Gaetano, E-mail: lambiase@sa.infn.it [Dipartimento di Fisica “E.R. Caianiello”, Universita' di Salerno, I-84084 Fisciano (Italy); INFN – Gruppo Collegato di Salerno (Italy); Vagenas, Elias C., E-mail: elias.vagenas@ku.edu.kw [Theoretical Physics Group, Department of Physics, Kuwait University, P.O. Box 5969, Safat 13060 (Kuwait)

    2017-04-10

    We propose a technique to compute the deformation parameter of the generalized uncertainty principle by using the leading quantum corrections to the Newtonian potential. We assume only General Relativity as the theory of gravitation, and the thermal nature of the GUP corrections to the Hawking spectrum. With these minimal assumptions our calculation gives, to first order, a specific numerical result. The physical meaning of this value is discussed and compared with the previously obtained bounds on the generalized uncertainty principle deformation parameter.

  3. On the maximal diphoton width

    CERN Document Server

    Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo

    2016-01-01

    Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into $\gamma\gamma$ that a neutral scalar can acquire through a loop of charged fermions or scalars as a function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.

  4. General rigid motion correction for computed tomography imaging based on locally linear embedding

    Science.gov (United States)

    Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge

    2018-02-01

    Patient motion can degrade the quality of computed tomography images, which are typically acquired in cone-beam geometry. Rigid patient motion is characterized by six geometric parameters and is more challenging to correct than in fan-beam geometry. We extend our previous rigid patient motion correction method based on the principle of locally linear embedding (LLE) from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based all scale tomographic reconstruction Antwerp toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.

  5. Quantum effects in non-maximally symmetric spaces

    International Nuclear Information System (INIS)

    Shen, T.C.

    1985-01-01

    Non-maximally symmetric spaces provide a more general background for exploring the relation between the geometry of the manifold and the quantum fields defined on it than maximally symmetric spaces. A static Taub universe is used to study the effect of curvature anisotropy on the spontaneous symmetry breaking of a self-interacting scalar field. The one-loop effective potential of a λφ^4 field with arbitrary coupling ξ is computed by zeta-function regularization. For massless, minimally coupled scalar fields, first-order phase transitions can occur. Keeping the shape invariant but decreasing the curvature radius of the universe induces symmetry breaking. If the curvature radius is held constant, increasing deformation can restore the symmetry. Studies of the higher-dimensional Kaluza-Klein theories also focus on the deformation effect. Using dimensional regularization, the effective potentials of the free scalar fields in M^4 × T^N and M^4 × (Taub)^3 spaces are obtained. The stability criteria for the static solutions of the self-consistent Einstein equations are derived. Stable solutions of the M^4 × S^N topology do not exist. With the Taub space as the internal space, the gauge coupling constants of SU(2) and U(1) can be determined geometrically. The weak angle is therefore predicted by geometry in this model

  6. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
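
The E-step/M-step alternation the authors rely on can be illustrated on a much simpler latent-variable problem. The sketch below runs EM on a two-component Gaussian mixture with unit variances (an assumption chosen for brevity), not on the Copas-like selection model itself; it only shows the mechanics of alternating posterior responsibilities with parameter updates:

```python
import math
import random

def em_gmm_1d(xs, iters=200):
    """EM for a two-component 1-D Gaussian mixture with known unit variances.

    E-step: posterior responsibility of component 2 for each observation.
    M-step: re-estimate the mixing weight and the two component means.
    """
    mu1, mu2, w = min(xs), max(xs), 0.5
    for _ in range(iters):
        # E-step: responsibilities r_i = P(component 2 | x_i)
        r = []
        for x in xs:
            p1 = (1.0 - w) * math.exp(-0.5 * (x - mu1) ** 2)
            p2 = w * math.exp(-0.5 * (x - mu2) ** 2)
            r.append(p2 / (p1 + p2))
        # M-step: weighted means and mixing weight
        s = sum(r)
        w = s / len(xs)
        mu2 = sum(ri * x for ri, x in zip(r, xs)) / s
        mu1 = sum((1.0 - ri) * x for ri, x in zip(r, xs)) / (len(xs) - s)
    return mu1, mu2, w

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(300)] + \
     [random.gauss(5.0, 1.0) for _ in range(300)]
mu1, mu2, w = em_gmm_1d(xs)
```

In the Copas-like setting the latent variable is the unobserved selection propensity rather than a component label, but the same alternation applies.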

  7. GUP parameter from quantum corrections to the Newtonian potential

    Directory of Open Access Journals (Sweden)

    Fabio Scardigli

    2017-04-01

    Full Text Available We propose a technique to compute the deformation parameter of the generalized uncertainty principle by using the leading quantum corrections to the Newtonian potential. We assume only General Relativity as the theory of gravitation, and the thermal nature of the GUP corrections to the Hawking spectrum. With these minimal assumptions our calculation gives, to first order, a specific numerical result. The physical meaning of this value is discussed and compared with the previously obtained bounds on the generalized uncertainty principle deformation parameter.

  8. Discussion of a Possible Corrected Black Hole Entropy

    Directory of Open Access Journals (Sweden)

    Miao He

    2018-01-01

    Full Text Available Einstein's equation can be interpreted as the first law of thermodynamics near a spherically symmetric horizon. By revisiting Einstein gravity with a more general static spherically symmetric metric, we find that the entropy acquires a correction in Einstein gravity. Using this method, we investigate Eddington-inspired Born-Infeld (EiBI) gravity. Without a matter field, we can also derive the first law in EiBI gravity. With an electromagnetic field, as the field equations have a more general spherically symmetric solution in EiBI gravity, we find that the correction of the entropy can be generalized to EiBI gravity. Furthermore, we point out that Einstein gravity and EiBI gravity might be equivalent on the event horizon. Finally, under EiBI gravity with the electromagnetic field, a specific corrected entropy of the black hole is given.

  9. Linking the school, the family, and the comprehensive general stomatologist in correcting deleterious habits (thumb sucking, nail biting, and bruxism) in primary school through music therapy

    Directory of Open Access Journals (Sweden)

    Susana Barrios Piñera

    2010-09-01

    Full Text Available This investigation concerns a strategy for the correction of deforming habits that affect school learning, namely thumb sucking, nail biting, and teeth grinding (bruxism), supported by music therapy and by the joint action of the school, the family, and the community, with direct attention from the comprehensive general stomatologist.

  10. Correction of gene expression data: Performance-dependency on inter-replicate and inter-treatment biases.

    Science.gov (United States)

    Darbani, Behrooz; Stewart, C Neal; Noeparvar, Shahin; Borg, Søren

    2014-10-20

    This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell-number. Based on inter-treatment variations of reference genes, we introduce an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies of correction methods are influenced by the inter-treatment bias as well as the inter-replicate variance. Therefore, we recommend inspecting both of the bias sources in order to apply the most efficient correction method. As an alternative correction strategy, sequential application of different correction approaches is also advised. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Correction

    CERN Multimedia

    2002-01-01

    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  12. 37 CFR 1.85 - Corrections to drawings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Corrections to drawings. 1.85... COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES National Processing Provisions The Drawings § 1.85 Corrections to drawings. (a) A utility or plant application will not be placed on the files for examination...

  13. Error Correction for Non-Abelian Topological Quantum Computation

    Directory of Open Access Journals (Sweden)

    James R. Wootton

    2014-03-01

    Full Text Available The possibility of quantum computation using non-Abelian anyons has been considered for over a decade. However, the question of how to obtain and process information about what errors have occurred in order to negate their effects has not yet been considered. This is in stark contrast with quantum computation proposals for Abelian anyons, for which decoding algorithms have been tailor-made for many topological error-correcting codes and error models. Here, we address this issue by considering the properties of non-Abelian error correction, in general. We also choose a specific anyon model and error model to probe the problem in more detail. The anyon model is the charge submodel of D(S_{3}). This shares many properties with important models such as the Fibonacci anyons, making our method more generally applicable. The error model is a straightforward generalization of those used in the case of Abelian anyons for initial benchmarking of error correction methods. It is found that error correction is possible under a threshold value of 7% for the total probability of an error on each physical spin. This is remarkably comparable with the thresholds for Abelian models.

  14. Genus one super-Green function revisited and superstring amplitudes with non-maximal supersymmetry

    International Nuclear Information System (INIS)

    Itoyama, H.; Yano, Kohei

    2016-01-01

    We reexamine genus one super-Green functions with general boundary conditions twisted by (α,β) in the (σ,τ) directions in the eigenmode expansion and derive expressions as infinite series of hypergeometric functions. Using these, we compute one-loop superstring amplitudes with non-maximal supersymmetry, taking as an example massless vector emissions of the open-string type I Z_2 orbifold

  15. Dark energy homogeneity in general relativity: Are we applying it correctly?

    Science.gov (United States)

    Duniya, Didam G. A.

    2016-04-01

    Thus far, there does not appear to be an agreed (or adequate) definition of homogeneous dark energy (DE). This paper seeks to define a valid, adequate homogeneity condition for DE. Firstly, it is shown that as long as w_x ≠ -1, DE must have perturbations. It is then argued, independently of w_x, that a correct definition of homogeneous DE is one whose density perturbation vanishes in the comoving gauge, and hence in the DE rest frame. Using phenomenological DE, the consequence of this approach is then investigated in the observed galaxy power spectrum, with the power spectrum normalized on small scales at the present epoch z=0. It is found that for high magnification bias, relativistic corrections in the galaxy power spectrum are able to distinguish the concordance model from both a homogeneous DE and a clustering DE on super-horizon scales.

  16. M-Theory and Maximally Supersymmetric Gauge Theories

    CERN Document Server

    Lambert, Neil

    2012-01-01

    In this informal review for non-specialists we discuss the construction of maximally supersymmetric gauge theories that arise on the worldvolumes of branes in String Theory and M-Theory. Particular focus is placed on the relatively recent construction of M2-brane worldvolume theories. In a formal sense, the existence of these quantum field theories can be viewed as predictions of M-Theory. Their construction is therefore a reinforcement of the ideas underlying String Theory and M-Theory. We also briefly discuss the six-dimensional conformal field theory that is expected to arise on M5-branes. The construction of this theory is not only an important open problem for M-Theory but also a significant challenge to our current understanding of quantum field theory more generally.

  17. Quantum corrections to the thermodynamics of Schwarzschild-Tangherlini black hole and the generalized uncertainty principle

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Z.W.; Zu, X.T. [University of Electronic Science and Technology of China, School of Physical Electronics, Chengdu (China); Li, H.L. [University of Electronic Science and Technology of China, School of Physical Electronics, Chengdu (China); Shenyang Normal University, College of Physics Science and Technology, Shenyang (China); Yang, S.Z. [China West Normal University, Physics and Space Science College, Nanchong (China)

    2016-04-15

    We investigate the thermodynamics of the Schwarzschild-Tangherlini black hole in the context of the generalized uncertainty principle (GUP). The corrections to the Hawking temperature, entropy and the heat capacity are obtained via the modified Hamilton-Jacobi equation. These modifications show that the GUP changes the evolution of the Schwarzschild-Tangherlini black hole. In particular, the GUP effect becomes significant when the radius or mass of the black hole approaches the order of the Planck scale: the black hole stops radiating, leaving a black hole remnant. Meanwhile, the Planck-scale remnant can be confirmed through the analysis of the heat capacity. These phenomena imply that the GUP may offer a way to resolve the information paradox. Besides, we also investigate the possibility of observing the black hole at the Large Hadron Collider (LHC), and the results demonstrate that such a black hole cannot be produced at the current LHC. (orig.)

  18. A novel unified expression for the capacity and bit error probability of wireless communication systems over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2012-07-01

    Analysis of the average binary error probabilities (ABEP) and average capacity (AC) of wireless communications systems over generalized fading channels has been considered separately in past years. This paper introduces a novel moment generating function (MGF)-based unified expression for the ABEP and AC of single and multiple link communications with maximal ratio combining. In addition, this paper proposes the hyper-Fox's H fading model as a unified fading distribution of a majority of the well-known generalized fading environments. As such, the authors offer a generic unified performance expression that can be easily calculated, and that is applicable to a wide variety of fading scenarios. The mathematical formalism is illustrated with some selected numerical examples that validate the correctness of the authors' newly derived results. © 1972-2012 IEEE.
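
For the special case of BPSK with maximal ratio combining over independent Rayleigh branches, the MGF approach reduces to a single finite integral, ABEP = (1/π) ∫₀^{π/2} ∏_l M_l(-1/sin²θ) dθ with the Rayleigh-branch MGF M(s) = 1/(1 - s·γ̄). The sketch below is a standard textbook special case, not the paper's hyper-Fox's H expression, and the average SNR values are illustrative; it checks the integral against the known single-branch closed form:

```python
import math

def abep_bpsk_mrc_rayleigh(gammas, steps=20000):
    """MGF-based ABEP of BPSK with L-branch MRC over Rayleigh fading.

    ABEP = (1/pi) * integral_0^{pi/2} prod_l M_l(-1/sin^2 t) dt,
    where M(-1/sin^2 t) = sin^2 t / (sin^2 t + gbar) for a Rayleigh branch.
    The finite integral is evaluated with the midpoint rule.
    """
    h = (math.pi / 2.0) / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        s2 = math.sin(t) ** 2
        prod = 1.0
        for g in gammas:
            prod *= s2 / (s2 + g)
        total += prod
    return total * h / math.pi

# Single-branch sanity check: closed form is 0.5 * (1 - sqrt(g / (1 + g)))
g = 10.0
closed = 0.5 * (1.0 - math.sqrt(g / (1.0 + g)))
```

Adding a second branch with the same average SNR strictly lowers the error probability, as expected from diversity combining.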

  19. Large and almost maximal neutrino mixing within the type II see-saw mechanism

    International Nuclear Information System (INIS)

    Lindner, Manfred; Rodejohann, Werner

    2007-01-01

    Within the type II see-saw mechanism the light neutrino mass matrix is given by a sum of a direct (or triplet) mass term and the conventional (type I) see-saw term. Both versions of the see-saw mechanism naturally explain small neutrino masses, but the type II scenario offers interesting additional possibilities to explain large, almost maximal, or vanishing mixings, which are discussed in this paper. We first introduce 'type II enhancement' of neutrino mixing, where moderate cancellations between the two terms can lead to large neutrino mixing even if all individual mass matrices and terms generate small mixing. However, nearly maximal or vanishing mixings are not naturally explained in this way, unless there is a certain initial structure (symmetry) which enforces certain elements of the matrices to be identical or related in a special way. We therefore assume that the leading structure of the neutrino mass matrix is the triplet term and corresponds to zero U_e3 and maximal θ_23. Small but necessary corrections are generated by the conventional see-saw term. Then we assume that one of the two terms corresponds to an extreme mixing scenario, such as bimaximal or tri-bimaximal mixing. Deviations from this scheme are introduced by the second term. One can mimic Quark-Lepton Complementarity in this way. Finally, we note that the neutrino mass matrix for tri-bimaximal mixing can be written, depending on the mass hierarchy, as a sum of two terms with simple structure. Their origin could be the two terms of the type II see-saw

  20. Automatic computation of radiative corrections

    International Nuclear Information System (INIS)

    Fujimoto, J.; Ishikawa, T.; Shimizu, Y.; Kato, K.; Nakazawa, N.; Kaneko, T.

    1997-01-01

    Automated systems are reviewed focusing on their general structure and requirement specific to the calculation of radiative corrections. Detailed description of the system and its performance is presented taking GRACE as a concrete example. (author)

  1. Generalized frameworks for first-order evolution inclusions based on Yosida approximations

    Directory of Open Access Journals (Sweden)

    Ram U. Verma

    2011-04-01

    Full Text Available First, general frameworks for first-order evolution inclusions are developed based on A-maximal relaxed monotonicity, and then, using the Yosida approximation, the solvability of a general class of first-order nonlinear evolution inclusions is investigated. The role of A-maximal relaxed monotonicity is significant in the sense that it not only empowers the first-order nonlinear evolution inclusions but also generalizes the existing Yosida approximations and their characterizations in the current literature.

  2. Rank error-correcting pairs

    DEFF Research Database (Denmark)

    Martinez Peñas, Umberto; Pellikaan, Ruud

    2017-01-01

    Error-correcting pairs were introduced as a general method of decoding linear codes with respect to the Hamming metric using coordinatewise products of vectors, and are used for many well-known families of codes. In this paper, we define new types of vector products, extending the coordinatewise ...

  3. Optimal transformation for correcting partial volume averaging effects in magnetic resonance imaging

    International Nuclear Information System (INIS)

    Soltanian-Zadeh, H.; Windham, J.P.; Yagle, A.E.

    1993-01-01

    Segmentation of a feature of interest while correcting for partial volume averaging effects is a major tool for identification of hidden abnormalities, fast and accurate volume calculation, and three-dimensional visualization in the field of magnetic resonance imaging (MRI). The authors present the optimal transformation for simultaneous segmentation of a desired feature and correction of partial volume averaging effects, while maximizing the signal-to-noise ratio (SNR) of the desired feature. It is proved that correction of partial volume averaging effects requires the removal of the interfering features from the scene. It is also proved that correction of partial volume averaging effects can be achieved merely by a linear transformation. It is finally shown that the optimal transformation matrix is easily obtained using the Gram-Schmidt orthogonalization procedure, which is numerically stable. Applications of the technique to MRI simulation, phantom, and brain images are shown. They show that in all cases the desired feature is segmented from the interfering features and partial volume information is visualized in the resulting transformed images
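    The Gram-Schmidt step mentioned above can be sketched in a few lines. The tissue "signature" vectors below are hypothetical stand-ins for per-sequence MR intensities of each feature, not data from the paper; this is a minimal sketch of classical Gram-Schmidt orthonormalization, not the paper's full transformation.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize the rows of `vectors` (classical Gram-Schmidt).

    Each row is treated as the signature of one feature; the returned
    rows span the same subspace but are mutually orthonormal."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for b in basis:                 # subtract projections onto earlier vectors
            w -= np.dot(w, b) * b
        norm = np.linalg.norm(w)
        if norm > 1e-12:                # skip linearly dependent signatures
            basis.append(w / norm)
    return np.array(basis)

# Hypothetical tissue signatures (one row per feature, one column per sequence).
signatures = np.array([[1.0, 2.0, 2.0],
                       [2.0, 1.0, 0.0]])
Q = gram_schmidt(signatures)
# Rows of Q are orthonormal: Q @ Q.T is the identity (up to rounding).
```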

  4. SU-E-QI-03: Compartment Modeling of Dynamic Brain PET - The Effect of Scatter and Random Corrections On Parameter Errors

    International Nuclear Information System (INIS)

    Häggström, I; Karlsson, M; Larsson, A; Schmidtlein, C

    2014-01-01

    Purpose: To investigate the effects of corrections for random and scattered coincidences on kinetic parameters in brain tumors, by using ten Monte Carlo (MC) simulated dynamic FLT-PET brain scans. Methods: The GATE MC software was used to simulate ten repetitions of a 1 hour dynamic FLT-PET scan of a voxelized head phantom. The phantom comprised six normal head tissues, plus inserted regions for blood and tumor tissue. Different time-activity curves (TACs) for all eight tissue types were used in the simulation and were generated in Matlab using a 2-tissue model with preset parameter values (K1, k2, k3, k4, Va, Ki). The PET data were reconstructed into 28 frames by both ordered-subset expectation maximization (OSEM) and 3D filtered back-projection (3DFBP). Five image sets were reconstructed, all with normalization and different additional corrections C (A=attenuation, R=random, S=scatter): trues (AC), trues+randoms (ARC), trues+scatters (ASC), total counts (ARSC) and total counts (AC). Corrections for randoms and scatters were based on real random and scatter sinograms that were back-projected, blurred and then forward projected and scaled to match the real counts. Weighted non-linear least-squares fitting of TACs from the blood and tumor regions was used to obtain parameter estimates. Results: The bias was not significantly different for trues (AC), trues+randoms (ARC), trues+scatters (ASC) and total counts (ARSC) for either 3DFBP or OSEM (p<0.05). Total counts with only AC stood out, however, with an up to 160% larger bias. In general, there was no difference in bias found between 3DFBP and OSEM, except in the parameters Va and Ki. Conclusion: According to our results, the methodology of correcting the PET data for randoms and scatters performed well for the dynamic images, where frames have much lower counts compared to static images. Generally, no bias was introduced by the corrections, and their importance was emphasized since omitting them increased bias extensively.

  5. New developments in EPMA correction procedures

    International Nuclear Information System (INIS)

    Love, G.; Scott, V.D.

    1980-01-01

    Computer programs currently employed in converting electron-probe microanalysis (EPMA) measurements into chemical compositions are usually based upon the ZAF method in which atomic number (Z), absorption (A) and fluorescence (F) effects are corrected for separately. The established ZAF approach incorporates the atomic number correction of Duncumb and Reed or Philibert and Tixier, the simplified absorption correction of Philibert including the sigma and h values proposed by Heinrich, and the characteristic fluorescence correction of Reed. Although such programs generally operate satisfactorily, they possess certain deficiencies and are prone to error when, for example, analysing for light elements, at high accelerating voltages (> 25 kV) or at low overvoltages. Results for elements (Z ≥ 11) are determined using the equations of Springer and Nolan, and values for oxygen are those of Love et al. (Auth.)

  6. QED radiative corrections to impact factors

    International Nuclear Information System (INIS)

    Kuraev, E.A.; Lipatov, L.N.; Shishkina, T.V.

    2001-01-01

    We consider radiative corrections to the electron and photon impact factors. The generalized eikonal representation for the e + e - scattering amplitude at high energies and fixed momentum transfers is violated by nonplanar diagrams. An additional contribution to the two-loop approximation appears from the Bethe-Heitler mechanism of fermion pair production with the identity of the fermions in the final state taken into account. The violation of the generalized eikonal representation is also related to the charge parity conservation in QED. A one-loop correction to the photon impact factor for small virtualities of the exchanged photon is obtained using the known results for the cross section of the e + e - production during photon-nuclei interactions

  7. A covariance correction that accounts for correlation estimation to improve finite-sample inference with generalized estimating equations: A study on its applicability with structured correlation matrices.

    Science.gov (United States)

    Westgate, Philip M

    2016-01-01

    When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator.

  8. Dopaminergic balance between reward maximization and policy complexity

    Directory of Open Access Journals (Sweden)

    Naama eParush

    2011-05-01

    Full Text Available Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main axis (actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine as both a reinforcement-learning signal and a pseudo-temperature signal controlling the general level of basal ganglia excitability and the motor vigilance of the acting agent. We argue that the basal ganglia endow the thalamo-cortical networks with the optimal dynamic tradeoff between two constraints: minimizing the policy complexity (cost) and maximizing the expected future reward (gain). We show that this multi-dimensional optimization process results in an experience-modulated version of the softmax behavioral policy. Thus, as in classical softmax behavioral policies, the probabilities of actions are set according to their estimated values and the pseudo-temperature, but in addition they also vary according to the frequency of previous choices of these actions. We conclude that the computational goal of the basal ganglia is not to maximize cumulative (positive and negative) reward. Rather, the basal ganglia aim at optimization of independent gain and cost functions. Unlike previously suggested single-variable maximization processes, this multi-dimensional optimization process leads naturally to a softmax-like behavioral policy. We suggest that beyond its role in the modulation of the efficacy of the cortico-striatal synapses, dopamine directly affects striatal excitability and thus provides a pseudo-temperature signal that modulates the trade-off between gain and cost. The resulting experience- and dopamine-modulated softmax policy can then serve as a theoretical framework to account for the broad range of behaviors and clinical states governed by the basal ganglia and dopamine systems.
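    The experience-independent core of such a policy is ordinary temperature-controlled softmax action selection. A minimal sketch follows; the dopamine-as-temperature reading and the example values are illustrative assumptions, and the experience-modulated frequency term of the full model is omitted.

```python
import numpy as np

def softmax_policy(values, temperature):
    """Action probabilities from estimated values and a pseudo-temperature.
    High temperature flattens the policy (exploration); low temperature
    concentrates it on the highest-valued action (exploitation)."""
    z = np.asarray(values, dtype=float) / temperature
    z -= z.max()                       # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

values = [1.0, 2.0, 3.0]               # illustrative action-value estimates
hot = softmax_policy(values, temperature=10.0)   # near-uniform policy
cold = softmax_policy(values, temperature=0.1)   # near-greedy policy
```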

  9. A bias correction for covariance estimators to improve inference with generalized estimating equations that use an unstructured correlation matrix.

    Science.gov (United States)

    Westgate, Philip M

    2013-07-20

    Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.

  10. Continuous quantum error correction for non-Markovian decoherence

    International Nuclear Information System (INIS)

    Oreshkov, Ognyan; Brun, Todd A.

    2007-01-01

    We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics

  11. Vocational Education in Corrections. Information Series No. 237.

    Science.gov (United States)

    Day, Sherman R.; McCane, Mel R.

    Vocational education programs in America's correctional institutions have been financially handicapped, since security demands the greatest portion of resource allocations. Four eras in the development of the correctional system are generally identified: era of punishment and retribution, era of restraint or reform, era of rehabilitation and…

  12. Vacua of maximal gauged D=3 supergravities

    International Nuclear Information System (INIS)

    Fischbacher, T; Nicolai, H; Samtleben, H

    2002-01-01

    We analyse the scalar potentials of maximal gauged three-dimensional supergravities, which reveal a surprisingly rich structure. In contrast to maximal supergravities in dimensions D≥4, all these theories possess a maximally supersymmetric (N=16) ground state with negative cosmological constant Λ < 0, except for the SO(4,4)^2 gauged theory, whose maximally supersymmetric ground state has Λ = 0. We compute the mass spectra of bosonic and fermionic fluctuations around these vacua and identify the unitary irreducible representations of the relevant background (super)isometry groups to which they belong. In addition, we find several stationary points which are not maximally supersymmetric, and determine their complete mass spectra as well. In particular, we show that there are analogues of all stationary points found in higher dimensions, among them de Sitter (dS) vacua in the theories with noncompact gauge groups SO(5,3)^2 and SO(4,4)^2, as well as anti-de Sitter (AdS) vacua in the compact gauged theory preserving 1/4 and 1/8 of the supersymmetries. All the dS vacua have tachyonic instabilities, whereas there do exist nonsupersymmetric AdS vacua which are stable, again in contrast to the D≥4 theories.

  13. Request from the Phthalate Esters Panel of the American Chemistry Council for correction of EPA's Action Plan for Phthalate Esters

    Science.gov (United States)

    The Phthalate Esters Panel (Panel) of the American Chemistry Council submits this Request for Correction to EPA under the Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by the Environmental Protection Agency.

  14. The effect of acute maximal exercise on postexercise hemodynamics and central arterial stiffness in obese and normal-weight individuals.

    Science.gov (United States)

    Bunsawat, Kanokwan; Ranadive, Sushant M; Lane-Cordova, Abbi D; Yan, Huimin; Kappus, Rebecca M; Fernhall, Bo; Baynard, Tracy

    2017-04-01

    Central arterial stiffness is associated with incident hypertension and negative cardiovascular outcomes. Obese individuals have higher central blood pressure (BP) and central arterial stiffness than their normal-weight counterparts, but it is unclear whether obesity also affects hemodynamics and central arterial stiffness after maximal exercise. We evaluated central hemodynamics and arterial stiffness during recovery from acute maximal aerobic exercise in obese and normal-weight individuals. Forty-six normal-weight and twenty-one obese individuals underwent measurements of central BP and central arterial stiffness at rest and 15 and 30 min following acute maximal exercise. Central BP and normalized augmentation index (AIx@75) were derived from radial artery applanation tonometry, and central arterial stiffness was obtained via carotid-femoral pulse wave velocity (cPWV) and corrected for central mean arterial pressure (cPWV/cMAP). Central arterial stiffness increased in obese individuals but decreased in normal-weight individuals following acute maximal exercise, after adjusting for fitness. Obese individuals also exhibited an overall higher central BP (P < 0.05), with no exercise effect. The increase in heart rate was greater in obese versus normal-weight individuals following exercise (P < 0.05), but there were no group differences or exercise effects for AIx@75. In conclusion, obese (but not normal-weight) individuals increased central arterial stiffness following acute maximal exercise. An assessment of the arterial stiffness response to acute exercise may serve as a useful detection tool for subclinical vascular dysfunction. © 2017 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.

  15. Sex differences in autonomic function following maximal exercise.

    Science.gov (United States)

    Kappus, Rebecca M; Ranadive, Sushant M; Yan, Huimin; Lane-Cordova, Abbi D; Cook, Marc D; Sun, Peng; Harvey, I Shevon; Wilund, Kenneth R; Woods, Jeffrey A; Fernhall, Bo

    2015-01-01

    Heart rate variability (HRV), blood pressure variability, (BPV) and heart rate recovery (HRR) are measures that provide insight regarding autonomic function. Maximal exercise can affect autonomic function, and it is unknown if there are sex differences in autonomic recovery following exercise. Therefore, the purpose of this study was to determine sex differences in several measures of autonomic function and the response following maximal exercise. Seventy-one (31 males and 40 females) healthy, nonsmoking, sedentary normotensive subjects between the ages of 18 and 35 underwent measurements of HRV and BPV at rest and following a maximal exercise bout. HRR was measured at minute one and two following maximal exercise. Males have significantly greater HRR following maximal exercise at both minute one and two; however, the significance between sexes was eliminated when controlling for VO2 peak. Males had significantly higher resting BPV-low-frequency (LF) values compared to females and did not significantly change following exercise, whereas females had significantly increased BPV-LF values following acute maximal exercise. Although males and females exhibited a significant decrease in both HRV-LF and HRV-high frequency (HF) with exercise, females had significantly higher HRV-HF values following exercise. Males had a significantly higher HRV-LF/HF ratio at rest; however, both males and females significantly increased their HRV-LF/HF ratio following exercise. Pre-menopausal females exhibit a cardioprotective autonomic profile compared to age-matched males due to lower resting sympathetic activity and faster vagal reactivation following maximal exercise. Acute maximal exercise is a sufficient autonomic stressor to demonstrate sex differences in the critical post-exercise recovery period.

  16. Maximizing the return on taxpayers' investments in fundamental biomedical research.

    Science.gov (United States)

    Lorsch, Jon R

    2015-05-01

    The National Institute of General Medical Sciences (NIGMS) at the U.S. National Institutes of Health has an annual budget of more than $2.3 billion. The institute uses these funds to support fundamental biomedical research and training at universities, medical schools, and other institutions across the country. My job as director of NIGMS is to work to maximize the scientific returns on the taxpayers' investments. I describe how we are optimizing our investment strategies and funding mechanisms, and how, in the process, we hope to create a more efficient and sustainable biomedical research enterprise.

  17. Eccentric exercise decreases maximal insulin action in humans

    DEFF Research Database (Denmark)

    Asp, Svend; Daugaard, J R; Kristiansen, S

    1996-01-01

    subjects participated in two euglycaemic clamps, performed in random order. One clamp was preceded 2 days earlier by one-legged eccentric exercise (post-eccentric exercise clamp (PEC)) and one was without the prior exercise (control clamp (CC)). 2. During PEC the maximal insulin-stimulated glucose uptake ... for all three clamp steps used (P < 0.05), whereas the maximal activity of glycogen synthase was identical in the two thighs for all clamp steps. 3. The glucose infusion rate (GIR ...) necessary to maintain euglycaemia during maximal insulin stimulation was lower during PEC compared with CC (15.7%, 81.3 ± 3.2 vs. 96.4 ± 8.8 μmol kg⁻¹ min⁻¹, P < 0.05) ...

  18. Maximal sfermion flavour violation in super-GUTs

    CERN Document Server

    AUTHOR|(CDS)2108556; Velasco-Sevilla, Liliana

    2016-01-01

    We consider supersymmetric grand unified theories with soft supersymmetry-breaking scalar masses $m_0$ specified above the GUT scale (super-GUTs) and patterns of Yukawa couplings motivated by upper limits on flavour-changing interactions beyond the Standard Model. If the scalar masses are smaller than the gaugino masses $m_{1/2}$, as is expected in no-scale models, the dominant effects of renormalization between the input scale and the GUT scale are generally expected to be those due to the gauge couplings, which are proportional to $m_{1/2}$ and generation-independent. In this case, the input scalar masses $m_0$ may violate flavour maximally, a scenario we call MaxFV, and there is no supersymmetric flavour problem. We illustrate this possibility within various specific super-GUT scenarios that are deformations of no-scale gravity.

  19. Iterative CT shading correction with no prior information

    Science.gov (United States)

    Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye

    2015-11-01

    Shading artifacts in CT images are caused by scatter contamination, the beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution within one tissue component. The CT image is first segmented to construct a template image in which each structure is filled with the single CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line-integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan 600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is assisted only by general anatomical information, without relying on prior knowledge, and is thus practical.
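    A much-simplified 1D sketch of the iterative loop may help fix ideas: segmentation is reduced to snapping each pixel to its nearest tissue value, and a box filter on the image-domain residual stands in for the projection-domain low-pass filtering and FDK reconstruction of the actual method. The tissue values, kernel width and synthetic shading ramp are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def correct_shading(image, tissue_values, iterations=5, kernel=15):
    """Toy 1D version of the iterative shading-correction loop:
    1) build a template by snapping each pixel to its nearest tissue value,
    2) low-pass filter the residual (image - template) to estimate the
       smooth shading field (standing in for the projection-domain
       filtering and FDK step of the real method),
    3) subtract the estimate and repeat."""
    corrected = image.astype(float).copy()
    box = np.ones(kernel) / kernel
    pad = kernel // 2
    for _ in range(iterations):
        idx = np.abs(corrected[:, None] - tissue_values[None, :]).argmin(axis=1)
        template = tissue_values[idx]
        residual = np.pad(corrected - template, pad, mode='edge')
        corrected -= np.convolve(residual, box, mode='valid')  # low-frequency part
    return corrected

# Synthetic example: two "tissues" (0 and 100 HU) plus a slow shading ramp.
x = np.linspace(0.0, 1.0, 200)
truth = np.where(x > 0.5, 100.0, 0.0)
shaded = truth + 40.0 * x              # low-frequency shading artifact
fixed = correct_shading(shaded, tissue_values=np.array([0.0, 100.0]))
```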

  20. Maximize x(a - x)

    Science.gov (United States)

    Lange, L. H.

    1974-01-01

    Five different methods for determining the maximizing condition for x(a - x) are presented. Included is the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)
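    The calculus-free conclusion all such methods reach can be summarized by completing the square:

```latex
x(a - x) \;=\; \frac{a^2}{4} - \left(x - \frac{a}{2}\right)^2 \;\le\; \frac{a^2}{4},
```

    with equality exactly when x = a/2, so the product is maximized when the two factors are equal.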

  1. Error field and its correction strategy in tokamaks

    International Nuclear Information System (INIS)

    In, Yongkyoon

    2014-01-01

    While error field correction (EFC) is meant to minimize the unwanted kink-resonant non-axisymmetric components, resonant magnetic perturbation (RMP) application is meant to maximize the benefits of pitch-resonant non-axisymmetric components. As the plasma response against non-axisymmetric fields increases with beta, feedback-controlled EFC is a more promising EFC strategy in reactor-relevant high-beta regimes. Nonetheless, various physical aspects and uncertainties associated with EFC should be taken into account and clarified in terms of multiple low-n EFC and multiple MHD modes, in addition to the compatibility issue with RMP application. Such a multi-faceted view of the EFC strategy is briefly discussed. (author)

  2. Optics measurement and correction during beam acceleration in the Relativistic Heavy Ion Collider

    Energy Technology Data Exchange (ETDEWEB)

    Liu, C. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Marusic, A. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Minty, M. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.

    2014-09-09

    To minimize operational complexities, setup of collisions in high energy circular colliders typically involves acceleration with near-constant β-functions followed by application of strong focusing quadrupoles at the interaction points (IPs) for the final beta-squeeze. At the Relativistic Heavy Ion Collider (RHIC), beam acceleration and the optics squeeze are performed simultaneously. In the past, beam optics correction at RHIC has taken place at injection and at final energy, with some interpolation of corrections into the acceleration cycle. Recent measurements of the beam optics during acceleration and squeeze have evidenced significant beta-beats which, if corrected, could minimize undesirable emittance dilutions and maximize the spin polarization of polarized proton beams by avoiding the higher-order multipole fields sampled by particles within the bunch. In this report the methodology now operational at RHIC for beam optics corrections during acceleration with simultaneous beta-squeeze is presented, together with measurements which conclusively demonstrate the superior beam control. As a valuable by-product, the corrections have minimized the beta-beat at the profile monitors, so reducing the dominant error in, and providing more precise measurements of, the evolution of the beam emittances during acceleration.

  3. Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models

    Science.gov (United States)

    2013-01-01

    Background In statistical modeling, finding the most favorable coding for an exploratory quantitative variable involves many tests. This process involves multiple testing problems and requires correction of the significance level. Methods For each coding, a test on the nullity of the coefficient associated with the newly coded variable is computed. The selected coding corresponds to that associated with the largest statistical test (or, equivalently, the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probability Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performance of the proposed methods. These methods were illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented using R, and the associated CPMCGLM R package is available on the CRAN. PMID:23758852
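    One common resampling scheme of this kind referees the best coding against the permutation distribution of the maximum statistic over all candidate codings, which accounts for the selection. The sketch below uses |correlation| as the per-coding statistic and permutation of the response; this is an illustrative simplification of the score-test-based procedure described in the abstract, and all names and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_stat_pvalue(x, y, codings, n_resamples=500, rng=rng):
    """Resampling-corrected p-value when several codings of x are tried.

    `codings` is a list of functions, each mapping x to one candidate
    transformation (e.g. different dichotomisations). The reference
    distribution is the MAXIMUM statistic over all codings under
    permutation of y, so choosing the best coding is not optimistic."""
    coded = [c(x) for c in codings]     # candidate transformations of x
    observed = max(abs(np.corrcoef(cx, y)[0, 1]) for cx in coded)
    exceed = 0
    for _ in range(n_resamples):
        yp = rng.permutation(y)         # break any x-y association
        null_max = max(abs(np.corrcoef(cx, yp)[0, 1]) for cx in coded)
        exceed += null_max >= observed
    return (exceed + 1) / (n_resamples + 1)

# Hypothetical data: the true effect is a dichotomisation at zero.
x = rng.normal(size=100)
y = 0.5 * (x > 0) + rng.normal(size=100)
codings = [lambda v, t=t: (v > t).astype(float) for t in (-0.5, 0.0, 0.5)]
p_corrected = max_stat_pvalue(x, y, codings)
```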

  4. Phase correction for a Michelson interferometer with misaligned mirrors

    Science.gov (United States)

    Goorvitch, D.

    1975-01-01

    The phase correction for a Michelson interferometer with misaligned mirrors in converging light is shown to give rise to a quadratic phase shift. In general, the calculation of a spectrum from the measured interferogram needs phase correction. Phase corrections have been well worked out for the cases of a linear phase shift and a phase that is slowly varying. The standard procedures for correcting calculated spectra need to be modified, however, to remove any phase errors resulting from misaligned mirrors.

  5. Maximizing Statistical Power When Verifying Probabilistic Forecasts of Hydrometeorological Events

    Science.gov (United States)

    DeChant, C. M.; Moradkhani, H.

    2014-12-01

    Hydrometeorological events (i.e. floods, droughts, precipitation) are increasingly being forecasted probabilistically, owing to the uncertainties in the underlying causes of the phenomenon. In these forecasts, the probability of the event, over some lead time, is estimated based on some model simulations or predictive indicators. By issuing probabilistic forecasts, agencies may communicate the uncertainty in the event occurring. Assuming that the assigned probability of the event is correct, which is referred to as a reliable forecast, the end user may perform some risk management based on the potential damages resulting from the event. Alternatively, an unreliable forecast may give false impressions of the actual risk, leading to improper decision making when protecting resources from extreme events. Due to this requisite for reliable forecasts to perform effective risk management, this study takes a renewed look at reliability assessment in event forecasts. Illustrative experiments will be presented, showing deficiencies in the commonly available approaches (Brier Score, Reliability Diagram). Overall, it is shown that the conventional reliability assessment techniques do not maximize the ability to distinguish between a reliable and unreliable forecast. In this regard, a theoretical formulation of the probabilistic event forecast verification framework will be presented. From this analysis, hypothesis testing with the Poisson-Binomial distribution is the most exact model available for the verification framework, and therefore maximizes one's ability to distinguish between a reliable and unreliable forecast. Application of this verification system was also examined within a real forecasting case study, highlighting the additional statistical power provided with the use of the Poisson-Binomial distribution.
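    The Poisson-Binomial distribution referred to above has an exact PMF computable by a simple convolution recursion (fold in one Bernoulli event at a time). The sketch below builds that PMF and uses it for an exact two-sided test of whether an observed event count is consistent with the issued probabilities; the test construction is an illustrative choice, not necessarily the authors' exact procedure, and the forecast numbers are hypothetical.

```python
import numpy as np

def poisson_binomial_pmf(probs):
    """Exact PMF of the number of occurrences among independent events
    forecast with (possibly different) probabilities `probs`, built by
    convolving one Bernoulli distribution at a time."""
    pmf = np.array([1.0])
    for p in probs:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf

def reliability_pvalue(probs, n_observed):
    """Exact two-sided p-value for observing `n_observed` events given the
    forecast probabilities: total mass of outcomes no more likely than
    the observed one."""
    pmf = poisson_binomial_pmf(probs)
    return pmf[pmf <= pmf[n_observed] + 1e-12].sum()

# Forecasts called the event likely five times, yet it never occurred:
probs = [0.9, 0.8, 0.85, 0.9, 0.75]
p = reliability_pvalue(probs, n_observed=0)
```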

  6. Corrective Action Decision Document/Closure Report for Corrective Action Unit 105: Area 2 Yucca Flat Atmospheric Test Sites, Nevada National Security Site, Nevada, Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, Patrick

    2014-01-01

    The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 105 based on the implementation of the corrective actions. Corrective action investigation (CAI) activities were performed from October 22, 2012, through May 23, 2013, as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 105: Area 2 Yucca Flat Atmospheric Test Sites; and in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices.

  7. Classical Electron Model with QED Corrections

    OpenAIRE

    Lenk, Ron

    2010-01-01

    In this article we build a metric for a classical general relativistic electron model with QED corrections. We calculate the stress-energy tensor for the radiative corrections to the Coulomb potential in both the near-field and far-field approximations. We solve the three field equations in both cases by using a perturbative expansion to first order in alpha (the fine-structure constant) while insisting that the usual (+, +, -, -) structure of the stress-energy tensor is maintained. The resul...

  8. Entropic corrections to Newton's law

    International Nuclear Information System (INIS)

    Setare, M R; Momeni, D; Myrzakulov, R

    2012-01-01

    In this short paper, we calculate separately the generalized uncertainty principle (GUP) and self-gravitational corrections to Newton's gravitational formula. We show that for a complete description of the GUP and self-gravity effects, both the temperature and entropy must be modified. (paper)

  9. Lepton mixing in A_5 family symmetry and generalized CP

    International Nuclear Information System (INIS)

    Li, Cai-Chang; Ding, Gui-Jun

    2015-01-01

    We study lepton mixing patterns which can be derived from the A_5 family symmetry and generalized CP. We find five phenomenologically interesting mixing patterns for which one column of the PMNS matrix is (√((5+√5)/10), 1/√(5+√5), 1/√(5+√5))^T (the first column of the golden ratio mixing), (√((5−√5)/10), 1/√(5−√5), 1/√(5−√5))^T (the second column of the golden ratio mixing), (1,1,1)^T/√3 or (√5+1, −2, √5−1)^T/4. The three lepton mixing angles are determined in terms of a single real parameter θ, and agreement with experimental data can be achieved for certain values of θ. The Dirac CP violating phase is predicted to be trivial or maximal while Majorana phases are trivial. We construct a supersymmetric model based on A_5 family symmetry and generalized CP. The lepton mixing is exactly the golden ratio pattern at leading order, and the mixing patterns of case III and case IV are reproduced after higher order corrections are considered.
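    As a quick numerical sanity check on the four columns quoted above (entries transcribed from the abstract), each must be a unit vector if it is to serve as a column of the unitary PMNS matrix:

```python
from math import sqrt

s5 = sqrt(5.0)
columns = [
    (sqrt((5 + s5) / 10), 1 / sqrt(5 + s5), 1 / sqrt(5 + s5)),  # 1st golden-ratio column
    (sqrt((5 - s5) / 10), 1 / sqrt(5 - s5), 1 / sqrt(5 - s5)),  # 2nd golden-ratio column
    (1 / sqrt(3), 1 / sqrt(3), 1 / sqrt(3)),                    # trimaximal column
    ((s5 + 1) / 4, -2 / 4, (s5 - 1) / 4),                       # (sqrt5+1, -2, sqrt5-1)/4
]
norms = [sum(x * x for x in col) for col in columns]  # each should equal 1
```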

  10. A comparison of maximal exercise and dipyridamole thallium-201 planar gated scintigraphy

    International Nuclear Information System (INIS)

    Martin, W.; Tweddel, A.C.; Main, G.; Hutton, I.

    1992-01-01

    Both symptom-limited maximal exercise and intravenously given dipyridamole stress (0.56 mg/kg over 4 min with a 2 min walk) gated thallium scans were performed in 22 patients undergoing coronary arteriography for the assessment of chest pain. All scans were acquired gated to the electrocardiogram in 3 projections and were reported for the presence and extent of defects in 5 myocardial segments in each view. In addition, left and right ventricular myocardial uptake and estimates of right and left lung and liver to left ventricular uptake were assessed relative to the injected dose of thallium-201. Overall, 190/310 segments were abnormal with exercise compared with 169/310 with dipyridamole. Segments were scored greater in extent in 90/310 cases with exercise, compared with 46/310 in which the defect was more extensive with dipyridamole. Non-attenuation corrected percentage myocardial thallium uptakes were similar for both stresses. Left and right lung and liver to left ventricle ratios were all significantly higher with dipyridamole than with exercise. High right and left lung uptakes with dipyridamole were strongly correlated with high exercise values. The liver uptake was weakly correlated between the 2 different stress tests. These results demonstrate that dipyridamole induces fewer and less extensive thallium perfusion defects than maximal exercise, and that liver and lung to myocardial ratios are higher with dipyridamole than with exercise. (orig./MG)

  11. Generalised Batho correction factor

    International Nuclear Information System (INIS)

    Siddon, R.L.

    1984-01-01

    There are various approximate algorithms available to calculate the radiation dose in the presence of a heterogeneous medium. The Webb and Fox product over layers formulation of the generalised Batho correction factor requires determination of the number of layers and the layer densities for each ray path. It has been shown that the Webb and Fox expression is inefficient for the heterogeneous medium which is expressed as regions of inhomogeneity rather than layers. The inefficiency of the layer formulation is identified as the repeated problem of determining for each ray path which inhomogeneity region corresponds to a particular layer. It has been shown that the formulation of the Batho correction factor as a product over inhomogeneity regions avoids that topological problem entirely. The formulation in terms of a product over regions simplifies the computer code and reduces the time required to calculate the Batho correction factor for the general heterogeneous medium. (U.K.)

  12. Mechanism for Corrective Action on Budget Imbalances

    Directory of Open Access Journals (Sweden)

    Ion Lucian CATRINA

    2014-02-01

    Full Text Available The European Fiscal Compact sets the obligation for the signatory states to establish an automatic mechanism for taking corrective action on budget imbalances. Nevertheless, the European Treaty says nothing about the tools that should be used in order to reach the desired equilibrium of budgets, but only that the mechanism should aim at correcting deviations from the medium-term objective or the adjustment path, including their cumulated impact on government debt dynamics. This paper aims to show that each member state has to build the correction mechanism according to the impact of the chosen tools on economic growth and on general government revenues. We also emphasize that the correction mechanism should rest not only on corrective action through spending- or tax-based adjustments, but on a high-quality package of economic policies as well.

  13. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization.

    Science.gov (United States)

    Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. 
Our results demonstrate that aging alters economic decision-making for

  14. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization

    Directory of Open Access Journals (Sweden)

    Yoanna Arlina Kurnianingsih

    2015-05-01

    Full Text Available We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61 to 80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic

  15. Determination and Correction of Persistent Biases in Quantum Annealers

    Science.gov (United States)

    2016-08-25

    for all of the qubits. Narrowing of the bias distribution. To show the correctability of the persistent biases, we ran the experiment described above...this is a promising application for bias correction. Importantly, while the J biases determined here are in general smaller than the h biases, numerical... (Scientific Reports 6:18628, DOI: 10.1038/srep18628)

  16. Techniques to maximize software reliability in radiation fields

    International Nuclear Information System (INIS)

    Eichhorn, G.; Piercey, R.B.

    1986-01-01

    Microprocessor system failures due to memory corruption by single event upsets (SEUs) and/or latch-up in RAM or ROM memory are common in environments where there is high radiation flux. Traditional methods to harden microcomputer systems against SEUs and memory latch-up have usually involved expensive large-scale hardware redundancy. Such systems offer higher reliability, but they tend to be more complex and non-standard. At the Space Astronomy Laboratory the authors have developed general programming techniques for producing software which is resistant to such memory failures. These techniques, which may be applied to standard off-the-shelf hardware as well as custom designs, include an implementation of the Maximally Redundant Software (MRS) model, error detection algorithms, and memory verification and management.
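    The abstract does not detail the MRS model itself; the sketch below only illustrates the general flavour of such software-level redundancy, triple storage with majority voting (the helper names are hypothetical):

```python
def store_redundant(value):
    """Keep three copies of a value, ideally in separate memory regions,
    so that a single-event upset corrupts at most one of them."""
    return [value, value, value]

def read_with_vote(copies):
    """Majority-vote read. Returns (value, corruption_detected); a single
    corrupted copy is out-voted and flagged so it can be rewritten."""
    a, b, c = copies
    if a == b or a == c:
        return a, not (a == b == c)
    if b == c:
        return b, True
    raise RuntimeError("uncorrectable: all three copies disagree")
```

    For example, after `copies = store_redundant(0x5A)` a simulated upset `copies[1] ^= 0x10` is out-voted: `read_with_vote(copies)` still recovers 0x5A and reports the corruption.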

  17. Maximally Informative Observables and Categorical Perception

    OpenAIRE

    Tsiang, Elaine

    2012-01-01

    We formulate the problem of perception in the framework of information theory, and prove that categorical perception is equivalent to the existence of an observable that has the maximum possible information on the target of perception. We call such an observable maximally informative. Regardless whether categorical perception is real, maximally informative observables can form the basis of a theory of perception. We conclude with the implications of such a theory for the problem of speech per...

  18. Maximal sfermion flavour violation in super-GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [King's College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); Olive, Keith A. [CERN, Theoretical Physics Department, Geneva (Switzerland); University of Minnesota, William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, Minneapolis, MN (United States); Velasco-Sevilla, L. [University of Bergen, Department of Physics and Technology, PO Box 7803, Bergen (Norway)

    2016-10-15

    We consider supersymmetric grand unified theories with soft supersymmetry-breaking scalar masses m{sub 0} specified above the GUT scale (super-GUTs) and patterns of Yukawa couplings motivated by upper limits on flavour-changing interactions beyond the Standard Model. If the scalar masses are smaller than the gaugino masses m{sub 1/2}, as is expected in no-scale models, the dominant effects of renormalisation between the input scale and the GUT scale are generally expected to be those due to the gauge couplings, which are proportional to m{sub 1/2} and generation independent. In this case, the input scalar masses m{sub 0} may violate flavour maximally, a scenario we call MaxSFV, and there is no supersymmetric flavour problem. We illustrate this possibility within various specific super-GUT scenarios that are deformations of no-scale gravity. (orig.)

  19. Integration of laboratory bioassays into the risk-based corrective action process

    International Nuclear Information System (INIS)

    Edwards, D.; Messina, F.; Clark, J.

    1995-01-01

    Recent data generated by the Gas Research Institute (GRI) and others indicate that residual hydrocarbon may be bound/sequestered in soil such that it is unavailable for microbial degradation, and thus possibly not bioavailable to human/ecological receptors. A reduction in bioavailability would directly equate to reduced exposure and, therefore, potentially less-conservative risk-based cleanup soil goals. Laboratory bioassays which measure bioavailability/toxicity can be cost-effectively integrated into the risk-based corrective action process. However, in order to maximize the cost-effective application of bioassays several site-specific parameters should be addressed up front. This paper discusses (1) the evaluation of parameters impacting the application of bioassays to soils contaminated with metals and/or petroleum hydrocarbons and (2) the cost-effective integration of bioassays into a tiered ASTM type framework for risk-based corrective action

  20. Random graph states, maximal flow and Fuss-Catalan distributions

    International Nuclear Information System (INIS)

    Collins, Benoît; Nechita, Ion; Życzkowski, Karol

    2010-01-01

    For any graph consisting of k vertices and m edges we construct an ensemble of random pure quantum states which describe a system composed of 2m subsystems. Each edge of the graph represents a bipartite, maximally entangled state. Each vertex represents a random unitary matrix generated according to the Haar measure, which describes the coupling between subsystems. Dividing all subsystems into two parts, one may study entanglement with respect to this partition. A general technique to derive an expression for the average entanglement entropy of random pure states associated with a given graph is presented. Our technique relies on Weingarten calculus and flow problems. We analyze the statistical properties of spectra of such random density matrices and show for which cases they are described by the free Poissonian (Marchenko-Pastur) distribution. We derive a discrete family of generalized, Fuss-Catalan distributions and explicitly construct graphs which lead to ensembles of random states characterized by these novel distributions of eigenvalues.
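    In one common convention, the n-th moment of the Fuss-Catalan distribution of parameter p is the Fuss-Catalan number FC_p(n) = binom(pn+1, n)/(pn+1); for p = 2 these reduce to the Catalan numbers, the moments of the Marchenko-Pastur distribution mentioned above. A minimal sketch:

```python
from math import comb

def fuss_catalan(p, n):
    """n-th Fuss-Catalan number: binom(p*n + 1, n) // (p*n + 1).
    These arise as the moments of the Fuss-Catalan distributions."""
    return comb(p * n + 1, n) // (p * n + 1)
```

    For instance, `fuss_catalan(2, n)` for n = 0..4 gives 1, 1, 2, 5, 14, the ordinary Catalan numbers.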

  1. Maximally Entangled Multipartite States: A Brief Survey

    International Nuclear Information System (INIS)

    Enríquez, M; Wintrowicz, I; Życzkowski, K

    2016-01-01

    The problem of identifying maximally entangled quantum states of a composite quantum system is analyzed. We review some states of multipartite systems distinguished with respect to certain measures of quantum entanglement. Numerical results obtained for 4-qubit pure states illustrate the fact that the notion of maximally entangled state depends on the measure used. (paper)

  2. Corporate Social Responsibility and Profit Maximizing Behaviour

    OpenAIRE

    Becchetti, Leonardo; Giallonardo, Luisa; Tessitore, Maria Elisabetta

    2005-01-01

    We examine the behavior of a profit-maximizing monopolist in a horizontal differentiation model in which consumers differ in their degree of social responsibility (SR) and consumers' SR is dynamically influenced by habit persistence. The model outlines parametric conditions under which (consumer-driven) corporate social responsibility is an optimal choice compatible with profit-maximizing behavior.

  3. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.][This corrects the article DOI: 10.1371/journal.ppat.1005740.][This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  4. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation now constitute a much larger field, and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)

  5. Universality of quantum gravity corrections.

    Science.gov (United States)

    Das, Saurya; Vagenas, Elias C

    2008-11-28

    We show that the existence of a minimum measurable length and the related generalized uncertainty principle (GUP), predicted by theories of quantum gravity, influence all quantum Hamiltonians. Thus, they predict quantum gravity corrections to various quantum phenomena. We compute such corrections to the Lamb shift, the Landau levels, and the tunneling current in a scanning tunneling microscope. We show that these corrections can be interpreted in two ways: (a) either that they are exceedingly small, beyond the reach of current experiments, or (b) that they predict upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale. Thus, more accurate measurements in the future should either be able to test these predictions, or further tighten the above bounds and predict an intermediate length scale between the electroweak and the Planck scale.

  6. Automation of electroweak NLO corrections in general models

    Energy Technology Data Exchange (ETDEWEB)

    Lang, Jean-Nicolas [Universitaet Wuerzburg (Germany)

    2016-07-01

    I discuss the automated generation of scattering amplitudes in general quantum field theories at next-to-leading order in perturbation theory. The work is based on Recola, a highly efficient one-loop amplitude generator for the Standard Model, which I have extended so that it can deal with general quantum field theories. Internally, Recola computes off-shell currents; for new models, new rules for off-shell currents emerge, which are derived from the Feynman rules. My work relies on the UFO format, which can be obtained by a suitable model builder, e.g. FeynRules. I have developed tools to derive the necessary counterterm structures and to perform the renormalization within Recola in an automated way. I describe the procedure using the example of the two-Higgs-doublet model.

  7. Simultaneous Mean and Covariance Correction Filter for Orbit Estimation.

    Science.gov (United States)

    Wang, Xiaoxu; Pan, Quan; Ding, Zhengtao; Ma, Zhengya

    2018-05-05

    This paper proposes a novel filtering design, from the viewpoint of identification rather than conventional nonlinear estimation schemes (NESs), to improve the performance of orbit state estimation for a space target. First, a nonlinear perturbation is viewed or modeled as an unknown input (UI) coupled with the orbit state, to avoid the intractable nonlinear perturbation integral (INPI) required by NESs. Then, a simultaneous mean and covariance correction filter (SMCCF), based on a two-stage expectation maximization (EM) framework, is proposed to simply and analytically fit or identify the first two moments (FTM) of the perturbation (viewed as UI), instead of directly computing the INPI as in NESs. Orbit estimation performance is greatly improved by utilizing the fitted UI-FTM to simultaneously correct the state estimation and its covariance. Third, because it mines more information, SMCCF should outperform existing NESs and standard identification algorithms (which view the UI as a constant independent of the state and only utilize the identified UI mean to correct the state estimation, regardless of its covariance), since it further incorporates the useful covariance information in addition to the mean of the UI. Finally, our simulations demonstrate the superior performance of SMCCF via an orbit estimation example.

  8. Guinea pig maximization test

    DEFF Research Database (Denmark)

    Andersen, Klaus Ejner

    1985-01-01

    Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline...

  9. On Maximal Non-Disjoint Families of Subsets

    Directory of Open Access Journals (Sweden)

    Yu. A. Zuev

    2017-01-01

    Full Text Available The paper studies maximal non-disjoint families of subsets of a finite set. Non-disjointness means that any two subsets of a family have a nonempty intersection. The maximality is expressed by the fact that adding a new subset to the family cannot increase its power without violating the non-disjointness condition. Studying the properties of such families is an important branch of extremal set theory. Along with purely combinatorial interest, the problems considered here play an important role in informatics, error-correcting coding, and cryptography. This problem first appeared in the 1961 paper of Erdős, Ko and Rado, which established the maximum power of a non-disjoint family of subsets of equal power. In 1974, Erdős and Kleitman estimated the number of maximal non-disjoint families of subsets without requiring their powers to be equal. These authors did not establish the asymptotics of the logarithm of the number of such families as the power of the base finite set tends to infinity; they did, however, conjecture such an asymptotics. A.D. Korshunov, in two publications in 2003 and 2005, established the asymptotics for the number of non-disjoint families of subsets of arbitrary powers, without the maximality condition on these families. The basis for the approach used in the paper to study families of subsets is their description in the language of Boolean functions. A one-to-one correspondence between a family of subsets and a Boolean function is established by taking the characteristic vectors of the subsets of a family to be the points at which the Boolean function equals 1. The main theoretical result of the paper is that the maximal non-disjoint families are in one-to-one correspondence with the monotone self-dual Boolean functions. When estimating the number of maximal non-disjoint families, this allowed us to use the result of A.A.
Sapozhenko, who established the asymptotics of the number of the
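    The stated bijection can be checked by brute force for tiny ground sets: maximal non-disjoint (intersecting) families of subsets of an n-element set should be equinumerous with monotone self-dual Boolean functions of n variables. A sketch counting the latter:

```python
from itertools import product

def count_monotone_self_dual(n):
    """Count Boolean functions of n variables that are both monotone and
    self-dual (f(not x) = not f(x)); by the correspondence described above,
    this equals the number of maximal non-disjoint families on an n-set."""
    points = list(product((0, 1), repeat=n))
    count = 0
    for bits in product((0, 1), repeat=len(points)):
        f = dict(zip(points, bits))
        self_dual = all(
            f[tuple(1 - b for b in x)] == 1 - f[x] for x in points
        )
        if not self_dual:
            continue
        monotone = all(
            f[x] <= f[x[:i] + (1,) + x[i + 1:]]
            for x in points for i in range(n) if x[i] == 0
        )
        if monotone:
            count += 1
    return count
```

    For n = 3 this counts 4 such functions (the three dictator functions and the majority function), matching the four maximal intersecting families on a 3-set.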

  10. Optimal network structure to induce the maximal small-world effect

    International Nuclear Information System (INIS)

    Zhang Zheng-Zhen; Xu Wen-Jun; Lin Jia-Ru; Zeng Shang-You

    2014-01-01

    In this paper, the general efficiency, which is the average of the global efficiency and the local efficiency, is defined to measure the communication efficiency of a network. The increasing ratio of the general efficiency of a small-world network relative to that of the corresponding regular network is used to measure the small-world effect quantitatively. The more pronounced the small-world effect, the higher the general efficiency of a network with a certain cost. It is shown that the small-world effect increases monotonically with the increase of the vertex number. The optimal rewiring probability to induce the best small-world effect is approximately 0.02, and the optimal average connection probability decreases monotonically with the increase of the vertex number. Therefore, the optimal network structure to induce the maximal small-world effect is the structure with a large vertex number (> 500), a small rewiring probability (≈ 0.02) and a small average connection probability (< 0.1). Many previous research results support our results. (interdisciplinary physics and related areas of science and technology)
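    Assuming the standard definitions (global efficiency = mean of 1/d(i,j) over vertex pairs; local efficiency = mean over vertices of the global efficiency of each vertex's neighbourhood subgraph), the measure described above can be sketched as follows:

```python
from collections import deque

def _bfs_distances(adj, src):
    """Hop distances from src via breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    """Mean of 1/d(i,j) over ordered vertex pairs (0 for unreachable pairs)."""
    n = len(adj)
    if n < 2:
        return 0.0
    total = 0.0
    for u in adj:
        dist = _bfs_distances(adj, u)
        total += sum(1.0 / d for v, d in dist.items() if v != u)
    return total / (n * (n - 1))

def local_efficiency(adj):
    """Mean over vertices of the global efficiency of the subgraph
    induced by each vertex's neighbours."""
    vals = []
    for u in adj:
        nbrs = set(adj[u])
        sub = {v: [w for w in adj[v] if w in nbrs] for v in nbrs}
        vals.append(global_efficiency(sub))
    return sum(vals) / len(adj)

def general_efficiency(adj):
    """The paper's measure: the average of global and local efficiency."""
    return 0.5 * (global_efficiency(adj) + local_efficiency(adj))
```

    On a complete graph every efficiency equals 1; on a rewired ring lattice the general efficiency interpolates between the regular and random extremes.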

  11. Including screening in van der Waals corrected density functional theory calculations: The case of atoms and small molecules physisorbed on graphene

    Energy Technology Data Exchange (ETDEWEB)

    Silvestrelli, Pier Luigi; Ambrosetti, Alberto [Dipartimento di Fisica e Astronomia, Università di Padova, via Marzolo 8, I–35131 Padova, Italy and DEMOCRITOS National Simulation Center of the Italian Istituto Officina dei Materiali (IOM) of the Italian National Research Council (CNR), Trieste (Italy)

    2014-03-28

    The Density Functional Theory (DFT)/van der Waals-Quantum Harmonic Oscillator-Wannier function (vdW-QHO-WF) method, recently developed to include the vdW interactions in approximated DFT by combining the quantum harmonic oscillator model with the maximally localized Wannier function technique, is applied to the cases of atoms and small molecules (X=Ar, CO, H{sub 2}, H{sub 2}O) weakly interacting with benzene and with the ideal planar graphene surface. Comparison is also presented with the results obtained by other DFT vdW-corrected schemes, including PBE+D, vdW-DF, vdW-DF2, rVV10, and by the simpler Local Density Approximation (LDA) and semilocal generalized gradient approximation approaches. While for the X-benzene systems all the considered vdW-corrected schemes perform reasonably well, it turns out that an accurate description of the X-graphene interaction requires a proper treatment of many-body contributions and of short-range screening effects, as demonstrated by adopting an improved version of the DFT/vdW-QHO-WF method. We also comment on the widespread attitude of relying on LDA to get a rough description of weakly interacting systems.

  12. An inquiry into the sources of some of the Bustan's maxims

    Directory of Open Access Journals (Sweden)

    sajjad rahmatian

    2016-12-01

    Full Text Available Sa`di is one of those poets who gave a special place to preaching and guiding people, and among his works he devoted the whole text of the Bustan to advice and maxims on various legal and ethical subjects. In composing this work and expressing its moral points, Sa`di was surely influenced, directly or indirectly, by earlier sources, possibly drawing on their content. The main purpose of this article is to review the bases and sources of the Bustan's maxims and to show which texts and works influenced Sa`di in expressing them. To this end, we search the sources that are devoted, more or less, to aphorisms, in order to discover and extract traces of Sa`di's borrowing from their moral and didactic content. Among the most important findings of this study are the indirect influence of some Pahlavi books of maxims (such as the maxims of Azarbad Marespandan and Bozorgmehr's book of maxims), as well as Sa`di's direct debt to the moral and ethical works of poets and writers before him; of these, his debt to the maxims of Abu-Shakur Balkhi, Ferdowsi and Keikavus is remarkable and noteworthy.

  13. Can monkeys make investments based on maximized pay-off?

    Directory of Open Access Journals (Sweden)

    Sophie Steelandt

    2011-03-01

    Full Text Available Animals can maximize benefits, but it is not known if they adjust their investment according to expected pay-offs. We investigated whether monkeys can use different investment strategies in an exchange task. We tested eight capuchin monkeys (Cebus apella) and thirteen macaques (Macaca fascicularis, Macaca tonkeana) in an experiment where they could adapt their investment to the food amounts proposed by two different experimenters. One, the doubling partner, returned a reward that was twice the amount given by the subject, whereas the other, the fixed partner, always returned a constant amount regardless of the amount given. To maximize pay-offs, subjects should invest a maximal amount with the first partner and a minimal amount with the second. When tested with the fixed partner only, one third of the monkeys learned to remove a maximal amount of food for immediate consumption before investing a minimal one. With both partners, most subjects failed to maximize pay-offs by using different decision rules according to each partner's quality. A single Tonkean macaque succeeded in investing a maximal amount with one experimenter and a minimal amount with the other. The fact that only one of the 21 subjects learned to maximize benefits by adapting investment according to the experimenters' quality indicates that such a task is difficult for monkeys, albeit not impossible.

  14. Guinea pig maximization tests with formaldehyde releasers. Results from two laboratories

    DEFF Research Database (Denmark)

    Andersen, Klaus Ejner; Boman, A; Hamann, K

    1984-01-01

    The guinea pig maximization test was used to evaluate the sensitizing potential of formaldehyde and 6 formaldehyde releasers (Forcide 78, Germall 115, Grotan BK, Grotan OX, KM 200 and Preventol D2). The tests were carried out in 2 laboratories (Copenhagen and Stockholm), and although we intended...... the procedures to be the same, discrepancies were observed, possibly due to the use of different animal strains, test concentrations and vehicles. The sensitizing potential was in general found to be stronger in Stockholm compared to Copenhagen: formaldehyde sensitized 50% of the guinea pigs in Copenhagen and 95...

  15. NP-hardness of decoding quantum error-correction codes

    Science.gov (United States)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify the decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  16. NP-hardness of decoding quantum error-correction codes

    International Nuclear Information System (INIS)

    Hsieh, Min-Hsiu; Le Gall, Francois

    2011-01-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify the decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  17. Maximal lattice free bodies, test sets and the Frobenius problem

    DEFF Research Database (Denmark)

    Jensen, Anders Nedergaard; Lauritzen, Niels; Roune, Bjarke Hammersholt

    Maximal lattice free bodies are maximal polytopes without interior integral points. Scarf initiated the study of maximal lattice free bodies relative to the facet normals in a fixed matrix. In this paper we give an efficient algorithm for computing the maximal lattice free bodies of an integral m...... method is inspired by the novel algorithm by Einstein, Lichtblau, Strzebonski and Wagon and the Groebner basis approach by Roune....

  18. Evaluation of Maximal O[subscript 2] Uptake with Undergraduate Students at the University of La Reunion

    Science.gov (United States)

    Tarnus, Evelyne; Catan, Aurelie; Verkindt, Chantal; Bourdon, Emmanuel

    2011-01-01

    The maximal rate of O[subscript 2] consumption (VO[subscript 2max]) constitutes one of the oldest fitness indexes established for the measure of cardiorespiratory fitness and aerobic performance. Procedures have been developed in which VO[subscript 2max] is estimated from physiological responses during submaximal exercise. Generally, VO[subscript…

  19. Disk Density Tuning of a Maximal Random Packing.

    Science.gov (United States)

    Ebeida, Mohamed S; Rushdi, Ahmad A; Awad, Muhammad A; Mahmoud, Ahmed H; Yan, Dong-Ming; English, Shawn A; Owens, John D; Bajaj, Chandrajit L; Mitchell, Scott A

    2016-08-01

    We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations.

  20. Maximization of energy recovery inside supersonic separator in the presence of condensation and normal shock wave

    International Nuclear Information System (INIS)

    Shooshtari, S.H. Rajaee; Shahsavand, A.

    2017-01-01

    Natural gases provide around a quarter of energy consumption around the globe. Supersonic separators (3S) play a multifaceted role in natural gas processing, especially for water and hydrocarbon dew point corrections. These state-of-the-art devices have minimum energy requirements and favorable process economy compared to conventional facilities. Their relatively large pressure drops may limit their application in some situations. To maximize the energy recovery of the dew point correction facility, the pressure loss across the 3S unit should be minimized. The optimal structure of the 3S unit (including shock wave location and diffuser angle) is selected using the simultaneous combination of normal shock occurrence and condensation in the presence of nucleation and growth processes. The condensate-free gas enters the non-isentropic normal shock wave. The simulation results indicate that the normal shock location, pressure recovery coefficient and onset position strongly vary up to a certain diffuser angle (β = 8°) with the maximum pressure recovery of 0.88 which leads to minimum potential energy loss. Computational fluid dynamic simulations show that separation of the boundary layer does not happen for the computed optimal value of β and it is essentially constant when the inlet gas temperatures and pressures vary over a relatively broad range. - Highlights: • Supersonic separators have found numerous applications in oil and gas industries. • Maximum pressure recovery is crucial for such units to maximize energy efficiency. • Simultaneous condensation and shock wave occurrence are studied for the first time. • Diverging nozzle angle of 8° can provide maximum pressure recovery of 0.88. • The optimal diffuser angle remains constant over a broad range of inlet conditions.

  1. Violating Bell inequalities maximally for two d-dimensional systems

    International Nuclear Information System (INIS)

    Chen Jingling; Wu Chunfeng; Oh, C. H.; Kwek, L. C.; Ge Molin

    2006-01-01

    We show the maximal violation of Bell inequalities for two d-dimensional systems by using the method of the Bell operator. The maximal violation corresponds to the maximal eigenvalue of the Bell operator matrix. The eigenvectors corresponding to these eigenvalues are described by asymmetric entangled states. We estimate the maximum value of the eigenvalue for large dimension. A family of elegant entangled states |Ψ⟩_app that violate the Bell inequality more strongly than the maximally entangled state but are somewhat close to these eigenvectors is presented. These approximate states can potentially be useful for quantum cryptography as well as many other important fields of quantum information.
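For d = 2, the "maximal eigenvalue of the Bell operator" recipe reduces to the familiar CHSH setting, which is easy to check numerically. A minimal sketch (the measurement settings below are the textbook optimal choices, used here for illustration, not taken from the paper):

```python
import numpy as np

# Pauli matrices
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

# CHSH settings: A, A' for one party; B, B' for the other
A, Ap = sz, sx
B = (sz + sx) / np.sqrt(2)
Bp = (sz - sx) / np.sqrt(2)

# Bell operator  A(x)B + A(x)B' + A'(x)B - A'(x)B'
bell = (np.kron(A, B) + np.kron(A, Bp)
        + np.kron(Ap, B) - np.kron(Ap, Bp))

# Maximal quantum violation = largest eigenvalue (Tsirelson bound 2*sqrt(2))
max_violation = np.linalg.eigvalsh(bell).max()
print(max_violation)  # ~2.828, above the classical bound of 2
```

The eigenvector belonging to this largest eigenvalue is a maximally entangled two-qubit state; for d > 2 the analogous eigenvectors are the asymmetric entangled states the abstract describes.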

  2. Descent of line bundles to GIT quotients of flag varieties by maximal torus

    OpenAIRE

    Kumar, Shrawan

    2007-01-01

    Let L be a homogeneous ample line bundle on any flag variety G/P and let T be a maximal torus of G. We prove a general necessary and sufficient condition for L to descend as a line bundle on the GIT quotient of G/P by T. We use this result to explicitly determine exactly which L descend to the GIT quotient for any simple complex algebraic group G and any parabolic subgroup P.

  3. Theoretical maximal storage of hydrogen in zeolitic frameworks.

    Science.gov (United States)

    Vitillo, Jenny G; Ricchiardi, Gabriele; Spoto, Giuseppe; Zecchina, Adriano

    2005-12-07

    Physisorption and encapsulation of molecular hydrogen in tailored microporous materials are two of the options for hydrogen storage. Among these materials, zeolites have been widely investigated. In these materials, the attained storage capacities vary widely with structure and composition, leading to the expectation that materials with improved binding sites, together with lighter frameworks, may represent efficient storage materials. In this work, we address the problem of the determination of the maximum amount of molecular hydrogen which could, in principle, be stored in a given zeolitic framework, as limited by the size, structure and flexibility of its pore system. To this end, the progressive filling with H2 of 12 purely siliceous models of common zeolite frameworks has been simulated by means of classical molecular mechanics. By monitoring the variation of cell parameters upon progressive filling of the pores, conclusions are drawn regarding the maximum storage capacity of each framework and, more generally, on framework flexibility. The flexible non-pentasils RHO, FAU, KFI, LTA and CHA display the highest maximal capacities, ranging between 2.65 and 2.86 mass%, well below the targets set for automotive applications but still in an interesting range. The predicted maximal storage capacities correlate well with experimental results obtained at low temperature. The technique is easily extendable to any other microporous structure, and it can provide a method for the screening of hypothetical new materials for hydrogen storage applications.

  4. MRI-Based Nonrigid Motion Correction in Simultaneous PET/MRI

    Science.gov (United States)

    Chun, Se Young; Reese, Timothy G.; Ouyang, Jinsong; Guerin, Bastien; Catana, Ciprian; Zhu, Xuping; Alpert, Nathaniel M.; El Fakhri, Georges

    2014-01-01

    Respiratory and cardiac motion is the most serious limitation to whole-body PET, resulting in spatial resolution close to 1 cm. Furthermore, motion-induced inconsistencies in the attenuation measurements often lead to significant artifacts in the reconstructed images. Gating can remove motion artifacts at the cost of increased noise. This paper presents an approach to respiratory motion correction using simultaneous PET/MRI to demonstrate initial results in phantoms, rabbits, and nonhuman primates and discusses the prospects for clinical application. Methods Studies with a deformable phantom, a free-breathing primate, and rabbits implanted with radioactive beads were performed with simultaneous PET/MRI. Motion fields were estimated from concurrently acquired tagged MR images using 2 B-spline nonrigid image registration methods and incorporated into a PET list-mode ordered-subsets expectation maximization algorithm. Using the measured motion fields to transform both the emission data and the attenuation data, we could use all the coincidence data to reconstruct any phase of the respiratory cycle. We compared the resulting signal-to-noise ratio (SNR) and the channelized Hotelling observer (CHO) detection SNR in the motion-corrected reconstruction with the results obtained from standard gating and uncorrected studies. Results Motion correction virtually eliminated motion blur without reducing SNR, yielding images with SNR comparable to those obtained by gating with 5–8 times longer acquisitions in all studies. The CHO study in dynamic phantoms demonstrated a significant improvement (166%–276%) in lesion detection SNR with MRI-based motion correction as compared with gating (P < 0.001). This improvement was 43%–92% for large motion compared with lesion detection without motion correction (P < 0.001). CHO SNR in the rabbit studies confirmed these results. Conclusion Tagged MRI motion correction in simultaneous PET/MRI significantly improves lesion detection

  5. On quantum corrected Kahler potentials in F-theory

    CERN Document Server

    García-Etxebarria, Iñaki; Savelli, Raffaele; Shiu, Gary

    2013-01-01

    We work out the exact-in-string-coupling and perturbatively exact-in-α' result for the vector multiplet moduli Kähler potential in a specific N=2 compactification of F-theory. The well-known correction cubic in α' is absent, but there is a rich structure of corrections at all even orders in α'. Moreover, each of these orders independently displays an SL(2,Z)-invariant set of corrections in the string coupling. This generalizes earlier findings to the case of a non-trivial elliptic fibration. Our results pave the way for the analysis of quantum corrections in the more complicated N=1 context, and may have interesting implications for the study of moduli stabilization in string theory.

  6. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence

    Energy Technology Data Exchange (ETDEWEB)

    Pastawski, Fernando; Yoshida, Beni [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States); Harlow, Daniel [Princeton Center for Theoretical Science, Princeton University,400 Jadwin Hall, Princeton NJ 08540 (United States); Preskill, John [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States)

    2015-06-23

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in http://dx.doi.org/10.1007/JHEP04(2015)163.

  7. Global intensity correction in dynamic scenes

    NARCIS (Netherlands)

    Withagen, P.J.; Schutte, K.; Groen, F.C.A.

    2007-01-01

    Changing image intensities causes problems for many computer vision applications operating in unconstrained environments. We propose generally applicable algorithms to correct for global differences in intensity between images recorded with a static or slowly moving camera, regardless of the cause.

  8. Real-time topic-aware influence maximization using preprocessing.

    Science.gov (United States)

    Chen, Wei; Lin, Tian; Yang, Cheng

    2016-01-01

    Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes based on certain influence diffusion model is maximized. Topic-aware influence diffusion models have been recently proposed to address the issue that influence between a pair of users is often topic-dependent and that information, ideas, innovations, etc. being propagated in networks are typically mixtures of topics. In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods to avoid redoing influence maximization for each mixture from scratch. We explore two preprocessing algorithms with theoretical justifications. Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate providing microsecond online response time and competitive influence spread, with reasonable preprocessing effort.
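The abstract does not spell out the preprocessing algorithms, but the baseline they aim to avoid redoing is standard greedy seed selection under a diffusion model such as independent cascade. A toy sketch of that baseline (the graph, activation probability, and all names here are illustrative, not from the paper):

```python
import random

def simulate_ic(graph, seeds, p, rng):
    """One independent-cascade run; returns the number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, p=0.2, trials=200, seed=0):
    """Greedily add the node with the largest Monte Carlo estimate
    of marginal influence spread."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    seeds = set()
    for _ in range(k):
        def spread(s):
            return sum(simulate_ic(graph, s, p, rng)
                       for _ in range(trials)) / trials
        seeds.add(max(nodes - seeds, key=lambda v: spread(seeds | {v})))
    return seeds

# Toy directed graph as adjacency lists
g = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
print(greedy_im(g, k=2))
```

The preprocessing idea in the paper amounts to caching enough per-topic information offline that expensive online estimation like the Monte Carlo loop above need not be repeated for every new topic mixture.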

  9. Cardiorespiratory Coordination in Repeated Maximal Exercise

    Directory of Open Access Journals (Sweden)

    Sergi Garcia-Retortillo

    2017-06-01

    Full Text Available Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC, defined by the number of PCs, in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of the eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC
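The coordination measure described, the number of retained principal components and the PC1 eigenvalue, can be sketched in a few lines. This is a generic PCA summary, not the authors' code; the eigenvalue-greater-than-one retention rule and the synthetic data are assumptions for illustration:

```python
import numpy as np

def crc_summary(X, retain=1.0):
    """X: (time, variables) array. Returns (number of PCs with
    eigenvalue > retain, eigenvalue of the first PC)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize each variable
    eig = np.linalg.eigvalsh(np.cov(Z.T))[::-1]   # eigenvalues, descending
    return int((eig > retain).sum()), eig[0]

# Synthetic example: 6 strongly coupled "cardiorespiratory" signals
rng = np.random.default_rng(0)
common = rng.standard_normal(300)
X = np.column_stack([common + 0.3 * rng.standard_normal(300)
                     for _ in range(6)])
n_pcs, pc1 = crc_summary(X)
print(n_pcs, pc1)  # one dominant PC: high coordination
```

Fewer PCs and a larger PC1 eigenvalue indicate that the variables co-vary along a single mode, i.e., higher coordination; the reported reduction of CRC in Test 2 corresponds to more PCs and/or a smaller PC1 eigenvalue.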

  10. Relativistic neoclassical transport coefficients with momentum correction

    International Nuclear Information System (INIS)

    Marushchenko, I.; Azarenkov, N.A.

    2016-01-01

    The parallel momentum correction technique is generalized for relativistic approach. It is required for proper calculation of the parallel neoclassical flows and, in particular, for the bootstrap current at fusion temperatures. It is shown that the obtained system of linear algebraic equations for parallel fluxes can be solved directly without calculation of the distribution function if the relativistic mono-energetic transport coefficients are already known. The first relativistic correction terms for Braginskii matrix coefficients are calculated.

  11. 4 CFR 28.130 - General authority.

    Science.gov (United States)

    2010-01-01

    ... Corrective Action, Disciplinary and Stay Proceedings § 28.130 General authority. The procedures in this subpart relate to the Board's functions “to consider, decide and order corrective or disciplinary action...

  12. El culto de Maximón en Guatemala

    OpenAIRE

    Pédron‑Colombani, Sylvie

    2009-01-01

    This article focuses on the figure of Maximón, a syncretic deity of Guatemala, in a context where popular Catholicism is being displaced by the Protestant churches. This hybrid divinity, to which Catholic saints such as Judas Iscariot or the Mayan god Mam are assimilated, allows Maximón to be appropriated by distinct segments of the population (both indigenous and mestizo). It likewise serves as a symbol of masked social protest when Maximón is associated with figur...

  13. String threshold corrections in models with spontaneously broken supersymmetry

    CERN Document Server

    Kiritsis, Elias B; Petropoulos, P M; Rizos, J

    1999-01-01

    We analyse a class of four-dimensional heterotic ground states with N=2 space-time supersymmetry. From the ten-dimensional perspective, such models can be viewed as compactifications on a six-dimensional manifold with SU(2) holonomy, which is locally but not globally K3 x T^2. The maximal N=4 supersymmetry is spontaneously broken to N=2. The masses of the two massive gravitinos depend on the (T,U) moduli of T^2. We evaluate the one-loop threshold corrections of gauge and R^2 couplings and we show that they fall in several universality classes, in contrast to what happens in usual K3 x T^2 compactifications, where the N=4 supersymmetry is explicitly broken to N=2, and where a single universality class appears. These universality properties follow from the structure of the elliptic genus. The behaviour of the threshold corrections as functions of the moduli is analysed in detail: it is singular across several rational lines of the T^2 moduli because of the appearance of extra massless states, and suffers only f...

  14. Dividend Maximization when Cash Reserves Follow a Jump-diffusion Process

    Institute of Scientific and Technical Information of China (English)

    LI LI-LI; FENG JIN-GHAI; SONG LI-XIN

    2009-01-01

    This paper deals with the dividend optimization problem for an insurance company, whose surplus follows a jump-diffusion process. The objective of the company is to maximize the expected total discounted dividends paid out until the time of ruin. Under concavity assumption on the optimal value function, the paper states some general properties and, in particular, smoothness results on the optimal value function, whose analysis mainly relies on viscosity solutions of the associated Hamilton-Jacobi-Bellman (HJB) equations. Based on these properties, the explicit expression of the optimal value function is obtained. And some numerical calculations are presented as the application of the results.
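The HJB equation itself is not displayed in the abstract. In the standard formulation of this problem found in the literature (surplus following a jump-diffusion with drift μ, volatility σ, claim intensity λ and claim-size distribution F; dividends as a singular control; discount rate c), the equation for the value function V takes the generic form:

```latex
\max\left\{\,\mu V'(x) + \frac{\sigma^{2}}{2}\,V''(x)
  + \lambda \int_{0}^{\infty} \bigl[V(x-y) - V(x)\bigr]\,dF(y) - c\,V(x),\;
  1 - V'(x)\,\right\} = 0
```

The first argument is the generator of the surplus process acting on V minus discounting; the second enforces that a unit of surplus paid out as dividends is never worth more than a unit. The notation here is generic, not taken from the paper.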

  15. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....

  16. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....

  17. Assessing the Security Vulnerabilities of Correctional Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, G.S.; Spencer, D.S.

    1998-10-27

    The National Institute of Justice has tasked their Satellite Facility at Sandia National Laboratories and their Southeast Regional Technology Center in Charleston, South Carolina to devise new procedures and tools for helping correctional facilities to assess their security vulnerabilities. Thus, a team is visiting selected correctional facilities and performing vulnerability assessments. A vulnerability assessment helps to identify the easiest paths for inmate escape, for introduction of contraband such as drugs or weapons, for unexpected intrusion from outside of the facility, and for the perpetration of violent acts on other inmates and correctional employees. In addition, the vulnerability assessment helps to quantify the security risks for the facility. From these initial assessments will come better procedures for performing vulnerability assessments in general at other correctional facilities, as well as the development of tools to assist with the performance of such vulnerability assessments.

  18. Assessment of the Maximal Split-Half Coefficient to Estimate Reliability

    Science.gov (United States)

    Thompson, Barry L.; Green, Samuel B.; Yang, Yanyun

    2010-01-01

    The maximal split-half coefficient is computed by calculating all possible split-half reliability estimates for a scale and then choosing the maximal value as the reliability estimate. Osburn compared the maximal split-half coefficient with 10 other internal consistency estimates of reliability and concluded that it yielded the most consistently…
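The computation described is simple enough to sketch directly: enumerate every split of the items into two equal halves, Spearman-Brown-correct the correlation between the half scores, and keep the maximum. A hedged sketch with synthetic data (assuming an even number of items; not the authors' code):

```python
from itertools import combinations

import numpy as np

def maximal_split_half(items):
    """items: (respondents, k) score matrix, k even. Returns the largest
    Spearman-Brown-corrected split-half reliability over all equal splits."""
    k = items.shape[1]
    best = -np.inf
    for half in combinations(range(k), k // 2):
        if 0 not in half:                    # count each unordered split once
            continue
        other = [i for i in range(k) if i not in half]
        a = items[:, list(half)].sum(axis=1)
        b = items[:, other].sum(axis=1)
        r = np.corrcoef(a, b)[0, 1]
        best = max(best, 2 * r / (1 + r))    # Spearman-Brown correction
    return best

# Toy data: 8 respondents, 4 items measuring one construct plus noise
rng = np.random.default_rng(1)
true_score = rng.standard_normal(8)
X = np.column_stack([true_score + 0.5 * rng.standard_normal(8)
                     for _ in range(4)])
print(round(maximal_split_half(X), 3))
```

Because the maximum is taken over all possible splits, this coefficient tends to run higher than estimates based on a single split or on coefficient alpha, which is the behavior Osburn's comparison concerns.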

  19. Learning curves for mutual information maximization

    International Nuclear Information System (INIS)

    Urbanczik, R.

    2003-01-01

    An unsupervised learning procedure based on maximizing the mutual information between the outputs of two networks receiving different but statistically dependent inputs is analyzed [S. Becker and G. Hinton, Nature (London) 355, 161 (1992)]. For a generic data model, I show that in the large sample limit the structure in the data is recognized by mutual information maximization. For a more restricted model, where the networks are similar to perceptrons, I calculate the learning curves for zero-temperature Gibbs learning. These show that convergence can be rather slow, and a way of regularizing the procedure is considered

  20. New decoding methods of interleaved burst error-correcting codes

    Science.gov (United States)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of subcodes which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
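The methods above rest on the basic property of interleaving: an m-fold interleaver spreads a burst of length at most m across the m subcodes so that each subcode sees at most one error, which its own decoder can then handle. A toy demonstration of that property (simple block interleaver; all parameters illustrative, not from the paper):

```python
def interleave(data, m):
    """Split data into m subcode rows, then transmit column by column."""
    assert len(data) % m == 0
    n = len(data) // m
    rows = [data[r * n:(r + 1) * n] for r in range(m)]
    tx = [rows[r][c] for c in range(n) for r in range(m)]
    return tx, rows

m, n = 4, 5
tx, rows = interleave(list(range(m * n)), m)

# Corrupt a burst of m consecutive transmitted symbols
corrupted = tx[:]
for i in range(7, 7 + m):
    corrupted[i] = None

# De-interleave back into subcode rows and count errors per row
deinterleaved = [[corrupted[c * m + r] for c in range(n)] for r in range(m)]
errors_per_row = [sum(x is None for x in row) for row in deinterleaved]
print(errors_per_row)  # each subcode row sees at most one corrupted symbol
```

The syndrome-correlation step in the paper goes further: since a single burst hits the subcodes at correlated positions, the subcode syndromes can be combined to estimate the burst location probabilistically before correction.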

  1. Geological Corrections in Gravimetry

    Science.gov (United States)

    Mikuška, J.; Marušiak, I.

    2015-12-01

    Applying corrections for the known geology to gravity data can be traced back into the first quarter of the 20th century. Later on, mostly in areas with sedimentary cover, at local and regional scales, the correction known as gravity stripping has been in use since the mid 1960s, provided that there was enough geological information. Stripping at regional to global scales became possible after releasing the CRUST 2.0 and later CRUST 1.0 models in the years 2000 and 2013, respectively. Especially the later model provides quite a new view on the relevant geometries and on the topographic and crustal densities as well as on the crust/mantle density contrast. Thus, the isostatic corrections, which have been often used in the past, can now be replaced by procedures working with independent information interpreted primarily from seismic studies. We have developed software for performing geological corrections in the space domain, based on a priori geometry and density grids which can be of either rectangular or spherical/ellipsoidal types with cells of the shapes of rectangles, tesseroids or triangles. It enables us to calculate the required gravitational effects not only in the form of surface maps or profiles but, for instance, also along vertical lines, which can shed some additional light on the nature of the geological correction. The software can work at a variety of scales and considers the input information to an optional distance from the calculation point up to the antipodes. Our main objective is to treat geological correction as an alternative to accounting for the topography with varying densities, since the bottoms of the topographic masses, namely the geoid or ellipsoid, generally do not represent geological boundaries. We would also like to call attention to the possible distortions of the corrected gravity anomalies. This work was supported by the Slovak Research and Development Agency under the contract APVV-0827-12.

  2. Breakdown of maximality conjecture in continuous phase transitions

    International Nuclear Information System (INIS)

    Mukamel, D.; Jaric, M.V.

    1983-04-01

    A Landau-Ginzburg-Wilson model associated with a single irreducible representation which exhibits an ordered phase whose symmetry group is not a maximal isotropy subgroup of the symmetry group of the disordered phase is constructed. This example disproves the maximality conjecture suggested in numerous previous studies. Below the (continuous) transition, the order parameter points along a direction which varies with the temperature and with the other parameters which define the model. An extension of the maximality conjecture to reducible representations was postulated in the context of the Higgs symmetry breaking mechanism. Our model can also be extended to provide a counterexample in these cases. (author)

  3. Maximizers versus satisficers: Decision-making styles, competence, and outcomes

    OpenAIRE

    Andrew M. Parker; Wändi Bruine de Bruin; Baruch Fischhoff

    2007-01-01

    Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decision...

  4. IMNN: Information Maximizing Neural Networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). While compressing large data sets vastly simplifies both frequentist and Bayesian inference, important information may be inadvertently missed by the compression. Likelihood-free inference based on automatically derived IMNN summaries is effective because these summaries are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of a Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

  5. Prior-based artifact correction (PBAC) in computed tomography

    International Nuclear Information System (INIS)

    Heußer, Thorsten; Brehm, Marcus; Ritschl, Ludwig; Sawall, Stefan; Kachelrieß, Marc

    2014-01-01

    Purpose: Image quality in computed tomography (CT) often suffers from artifacts which may reduce the diagnostic value of the image. In many cases, these artifacts result from missing or corrupt regions in the projection data, e.g., in the case of metal, truncation, and limited angle artifacts. The authors propose a generalized correction method for different kinds of artifacts resulting from missing or corrupt data by making use of available prior knowledge to perform data completion. Methods: The proposed prior-based artifact correction (PBAC) method requires prior knowledge in the form of a planning CT of the same patient or a CT scan of a different patient showing the same body region. In both cases, the prior image is registered to the patient image using a deformable transformation. The registered prior is forward projected and data completion of the patient projections is performed using smooth sinogram inpainting. The obtained projection data are used to reconstruct the corrected image. Results: The authors investigate metal and truncation artifacts in patient data sets acquired with a clinical CT and limited angle artifacts in an anthropomorphic head phantom data set acquired with a gantry-based flat detector CT device. In all cases, the corrected images obtained by PBAC are nearly artifact-free. Compared to conventional correction methods, PBAC achieves better artifact suppression while preserving the patient-specific anatomy at the same time. Further, the authors show that prominent anatomical details in the prior image seem to have only minor impact on the correction result. Conclusions: The results show that PBAC has the potential to effectively correct for metal, truncation, and limited angle artifacts if adequate prior data are available. Since the proposed method makes use of a generalized algorithm, PBAC may also be applicable to other artifacts resulting from missing or corrupt data.

  6. Neutrino mass textures with maximal CP violation

    International Nuclear Information System (INIS)

    Aizawa, Ichiro; Kitabayashi, Teruyuki; Yasue, Masaki

    2005-01-01

    We show three types of neutrino mass textures, which give maximal CP violation as well as maximal atmospheric neutrino mixing. These textures are described by six real mass parameters: one specified by two complex flavor neutrino masses and two constrained ones and the others specified by three complex flavor neutrino masses. In each texture, we calculate mixing angles and masses, which are consistent with observed data, as well as Majorana CP phases

  7. Correcting saturation of detectors for particle/droplet imaging methods

    International Nuclear Information System (INIS)

    Kalt, Peter A M

    2010-01-01

    Laser-based diagnostic methods are being applied to more and more flows of theoretical and practical interest and are revealing interesting new flow features. Imaging particles or droplets in nephelometry and laser sheet dropsizing methods requires maximizing the signal-to-noise ratio without over-saturating the detector. Droplet and particle imaging results in a lognormal distribution of pixel intensities, and it is possible to fit a derived lognormal distribution to the histogram of measured pixel intensities. If pixel intensities are clipped at a saturation value, it is possible to estimate the presumed probability density function (pdf) shape without the effects of saturation from the lognormal fit to the unsaturated histogram. Information about presumed shapes of the pixel intensity pdf is used to generate corrections that can be applied to data to account for saturation. The effects of even slight saturation are shown to be a significant source of error on the derived average. The influence of saturation on the derived root mean square (rms) is even more pronounced. It is found that errors on the determined average exceed 5% when the number of saturated samples exceeds 3% of the total; errors on the rms are 20% for a similar saturation level. This study also attempts to delineate limits within which the detector saturation can be accurately corrected. It is demonstrated that a simple method for reshaping the clipped part of the pixel intensity histogram makes accurate corrections to account for saturated pixels. These outcomes can be used to correct a saturated signal, quantify the effect of saturation on a derived average, and offer a method to correct the derived average in the case of slight to moderate saturation of pixels.
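    The bias described above is easy to reproduce. The following sketch (our illustration, with assumed lognormal parameters and a 97th-percentile saturation level, not the paper's data) clips simulated pixel intensities at a saturation value and shows that the derived average is biased low, with the rms affected even more strongly:

```python
import random
import statistics

random.seed(0)

# Assumed illustration: droplet/particle imaging yields roughly lognormal
# pixel intensities; clipping values at the detector's saturation level
# biases both the derived average and the derived rms downward.
true = [random.lognormvariate(5.0, 0.6) for _ in range(100_000)]
saturation = sorted(true)[int(0.97 * len(true))]      # ~3% of pixels saturate
measured = [min(v, saturation) for v in true]         # what the detector records

bias_mean = 1 - statistics.fmean(measured) / statistics.fmean(true)
bias_rms = 1 - statistics.pstdev(measured) / statistics.pstdev(true)
print(f"error on mean: {bias_mean:.1%}, error on rms: {bias_rms:.1%}")
```

    Fitting a lognormal to the unsaturated part of the histogram, as the paper proposes, then allows the clipped tail (and hence the unbiased moments) to be estimated.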

  8. Comparison of changes in the mobility of the pelvic floor muscle during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction

    OpenAIRE

    Jung, Halim; Jung, Sangwoo; Joo, Sunghee; Song, Changho

    2016-01-01

    [Purpose] The purpose of this study was to compare changes in the mobility of the pelvic floor muscle during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction. [Subjects] Thirty healthy adults participated in this study (15 men and 15 women). [Methods] All participants performed a bridge exercise and abdominal curl-up during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction. Pelvic floor mobility...

  9. Lovelock black holes with maximally symmetric horizons

    Energy Technology Data Exchange (ETDEWEB)

    Maeda, Hideki; Willison, Steven; Ray, Sourya, E-mail: hideki@cecs.cl, E-mail: willison@cecs.cl, E-mail: ray@cecs.cl [Centro de Estudios CientIficos (CECs), Casilla 1469, Valdivia (Chile)

    2011-08-21

    We investigate some properties of n (≥ 4)-dimensional spacetimes having symmetries corresponding to the isometries of an (n - 2)-dimensional maximally symmetric space in Lovelock gravity under the null or dominant energy condition. The well-posedness of the generalized Misner-Sharp quasi-local mass proposed in a previous study is shown. Using this quasi-local mass, we clarify the basic properties of the dynamical black holes defined by a future outer trapping horizon under certain assumptions on the Lovelock coupling constants. The C² vacuum solutions are classified into four types: (i) Schwarzschild-Tangherlini-type solution; (ii) Nariai-type solution; (iii) special degenerate vacuum solution; and (iv) exceptional vacuum solution. The conditions for the realization of the last two solutions are clarified. The Schwarzschild-Tangherlini-type solution is studied in detail. We prove the first law of black-hole thermodynamics and present the expressions for the heat capacity and the free energy.

  10. A scatter-corrected list-mode reconstruction and a practical scatter/random approximation technique for dynamic PET imaging

    International Nuclear Information System (INIS)

    Cheng, J-C; Rahmim, Arman; Blinder, Stephan; Camborde, Marie-Laure; Raywood, Kelvin; Sossi, Vesna

    2007-01-01

    We describe an ordinary Poisson list-mode expectation maximization (OP-LMEM) algorithm with a sinogram-based scatter correction method based on the single scatter simulation (SSS) technique and a random correction method based on the variance-reduced delayed-coincidence technique. We also describe a practical approximate scatter and random-estimation approach for dynamic PET studies based on a time-averaged scatter and random estimate followed by scaling according to the global numbers of true coincidences and randoms for each temporal frame. The quantitative accuracy achieved using OP-LMEM was compared to that obtained using the histogram-mode 3D ordinary Poisson ordered subset expectation maximization (3D-OP) algorithm with similar scatter and random correction methods, and they showed excellent agreement. The accuracy of the approximated scatter and random estimates was tested by comparing time activity curves (TACs) as well as the spatial scatter distribution from dynamic non-human primate studies obtained from the conventional (frame-based) approach and those obtained from the approximate approach. An excellent agreement was found, and the time required for the calculation of scatter and random estimates in the dynamic studies became much less dependent on the number of frames (we achieved a nearly four times faster performance on the scatter and random estimates by applying the proposed method). The precision of the scatter fraction was also demonstrated for the conventional and the approximate approach using phantom studies
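    The scaling step of the approximate approach can be sketched as follows (a minimal illustration with made-up numbers, not the authors' implementation): a single time-averaged scatter profile for the whole study is rescaled for each temporal frame in proportion to that frame's global true-coincidence count, avoiding a full scatter simulation per frame.

```python
# Assumed toy numbers: a 5-bin time-averaged scatter profile and the global
# trues recorded in each of 3 temporal frames.
scatter_avg = [0.8, 1.0, 1.3, 1.1, 0.7]       # time-averaged scatter profile
trues_per_frame = [2.0e5, 8.0e5, 5.0e5]       # global trues in each frame
trues_avg = sum(trues_per_frame) / len(trues_per_frame)

# Per-frame estimate: same spatial shape, scaled by the frame's count rate.
scatter_frames = [
    [s * t / trues_avg for s in scatter_avg] for t in trues_per_frame
]
print(len(scatter_frames), len(scatter_frames[0]))   # 3 5
```

    The same scaling is applied to the random estimate using the global number of randoms, which is why the computation time becomes nearly independent of the number of frames.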

  11. A metapopulation model for the spread of MRSA in correctional facilities

    Directory of Open Access Journals (Sweden)

    Marc Beauparlant

    2016-10-01

    The spread of methicillin-resistant strains of Staphylococcus aureus (MRSA) in health-care settings has become increasingly difficult to control, and MRSA has since been able to spread in the general community. The prevalence of MRSA within the general public has caused outbreaks in groups of people in close quarters such as military barracks, gyms, daycare centres and correctional facilities. Correctional facilities are of particular importance for spreading MRSA, as inmates are often in close proximity and have limited access to hygienic products and clean clothing. Although these conditions are ideal for spreading MRSA, a recent study has suggested that recurrent epidemics are caused by the influx of colonized or infected individuals into the correctional facility. In this paper, we further investigate the effects of community dynamics on the spread of MRSA within the correctional facility and determine whether recidivism has a significant effect on disease dynamics. Using a simplified hotspot model ignoring disease dynamics within the correctional facility, as well as two metapopulation models, we demonstrate that outbreaks in correctional facilities can be driven by community dynamics even when spread between inmates is restricted. We also show that disease dynamics within the correctional facility and their effect on the outlying community may be ignored due to the smaller size of the incarcerated population. This will allow construction of simpler models that consider the effects of many MRSA hotspots interacting with the general community. It is suspected that the cumulative effects of hotspots for MRSA would have a stronger feedback effect in other community settings. Keywords: methicillin-resistant Staphylococcus aureus, hotspots, mathematical model, metapopulation model, Latin Hypercube Sampling
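    The claim that facility outbreaks can be driven purely by community influx can be illustrated with a minimal two-patch sketch (our simplification, not the paper's metapopulation model; all rates are assumed values): transmission inside the facility is set to zero, yet the facility sustains a nonzero colonized fraction fed by admissions from the community.

```python
# Two-patch colonization model, forward-Euler integration (illustrative only).
beta_c, beta_f = 0.30, 0.0      # community / facility transmission rates
gamma = 0.10                    # decolonization (clearance) rate
admit, release = 0.01, 0.02     # admission and release exchange rates
I_c, I_f = 0.05, 0.0            # colonized fractions (community, facility)
dt = 0.1
for _ in range(20_000):
    dI_c = beta_c * I_c * (1 - I_c) - gamma * I_c - admit * I_c + release * I_f
    dI_f = beta_f * I_f * (1 - I_f) - gamma * I_f + admit * I_c - release * I_f
    I_c += dt * dI_c
    I_f += dt * dI_f
print(round(I_c, 2), round(I_f, 2))   # 0.64 0.05
```

    Even with beta_f = 0, the facility equilibrates at I_f = admit · I_c / (gamma + release) > 0, i.e., prevalence inside is sustained entirely by the influx of colonized individuals.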

  12. 76 FR 44265 - General Working Conditions in Shipyard Employment; Correction

    Science.gov (United States)

    2011-07-25

    ... DEPARTMENT OF LABOR Occupational Safety and Health Administration 29 CFR Part 1910 [Docket No. OSHA-S049-2006-0675 (Formerly Docket No. S-049)] RIN 1218-AB50 General Working Conditions in Shipyard... on General Working Conditions in Shipyard Employment published in the Federal Register of May 2, 2011...

  13. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    OpenAIRE

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles. It is generally associated with range extension. Various concepts and modifications are proposed to correct the range and drift of artillery projectiles, such as the course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course corr...

  14. 40 CFR 1065.690 - Buoyancy correction for PM sample media.

    Science.gov (United States)

    2010-07-01

    ... mass, use a sample media density of 920 kg/m3. (3) For PTFE membrane (film) media with an integral... media. 1065.690 Section 1065.690 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Buoyancy correction for PM sample media. (a) General. Correct PM sample media for their buoyancy in air if...

  15. Why firms should not always maximize profits

    OpenAIRE

    Kolstad, Ivar

    2006-01-01

    Though corporate social responsibility (CSR) is on the agenda of most major corporations, corporate executives still largely support the view that corporations should maximize the returns to their owners. There are two lines of defence for this position. One is the Friedmanian view that maximizing owner returns is the corporate social responsibility of corporations. The other is a position voiced by many executives, that CSR and profits go together. This paper argues that the first position i...

  16. Correction: An Indicator of Media Credibility

    Directory of Open Access Journals (Sweden)

    Gordana Vilović

    2010-12-01

    The regular publication of corrections, clarifications, and letters to the editor entails a high level of respect among the media for their audiences, as it signifies accountability and media credibility. This study began from the general assumption that the Croatian media are reluctant to publish corrections regularly, projecting an image that errors simply do not occur. Certainly, errorless reporting is impossible, due to the fact that journalism is a profession prone to human error. Therefore, this study employed a content analysis methodology to follow the four primary Croatian daily newspapers, Jutarnji list, Večernji list, 24 sata and Vjesnik, for the period between May 6 and 30, 2010. The primary conclusion is that Croatian newspaper editors are hesitant to publish corrections if they are not under pressure from the Media Law.

  17. Corrective Action Decision Document for Corrective Action Unit 254: Area 25 R-MAD Decontamination Facility, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    2000-01-01

    This Corrective Action Decision Document identifies and rationalizes the US Department of Energy, Nevada Operations Office's selection of a recommended corrective action alternative (CAA) appropriate to facilitate the closure of Corrective Action Unit (CAU) 254, R-MAD Decontamination Facility, under the Federal Facility Agreement and Consent Order. Located in Area 25 at the Nevada Test Site in Nevada, CAU 254 is comprised of Corrective Action Site (CAS) 25-23-06, Decontamination Facility. A corrective action investigation for this CAS was conducted in January 2000 as set forth in the related Corrective Action Investigation Plan. Samples were collected from various media throughout the CAS and sent to an off-site laboratory for analysis. The laboratory results indicated the following: radiation dose rates inside the Decontamination Facility, Building 3126, and in the storage yard exceeded the average general dose rate; scanning and static total surface contamination surveys indicated that portions of the locker and shower room floor, decontamination bay floor, loft floor, east and west decon pads, north and south decontamination bay interior walls, exterior west and south walls, and loft walls were above preliminary action levels (PALs). The investigation-derived contaminants of concern (COCs) included: polychlorinated biphenyls; radionuclides (strontium-90, niobium-94, cesium-137, uranium-234 and -235); total volatile and semivolatile organic compounds; total petroleum hydrocarbons; and total Resource Conservation and Recovery Act metals. During the investigation, two corrective action objectives (CAOs) were identified to prevent or mitigate human exposure to COCs. Based on these CAOs, a review of existing data, future use, and current operations at the Nevada Test Site, three CAAs were developed for consideration: Alternative 1 - No Further Action; Alternative 2 - Unrestricted Release Decontamination and Verification Survey; and Alternative 3 - Unrestricted

  18. Maximizing the impact of e-therapy and serious gaming: Time for a paradigm shift

    OpenAIRE

    Theresa M. Fleming; Derek de Beurs; Yasser Khazaal; Andrea Gaggioli; Giuseppe Riva; Cristina Botella; Rosa M. Baños; Filippo Aschieri; Lynda Bavin; Annet Kleiboer

    2016-01-01

    Internet interventions for mental health, including serious games, online programs and apps, hold promise for increasing access to evidence-based treatments and prevention. Many such interventions have been shown to be effective and acceptable in trials; however, uptake and adherence outside of trials is seldom reported, and where it is, adherence at least, generally appears to be underwhelming. In response, an international Collaboration On Maximizing the impact of E-Therapy and Serious Gam...

  19. Maximizing the Impact of e-Therapy and Serious Gaming: Time for a Paradigm Shift

    OpenAIRE

    Fleming, Theresa M.; de Beurs, Derek; Khazaal, Yasser; Gaggioli, Andrea; Riva, Giuseppe; Botella, Cristina; Baños, Rosa M.; Aschieri, Filippo; Bavin, Lynda M.; Kleiboer, Annet; Merry, Sally; Lau, Ho Ming; Riper, Heleen

    2016-01-01

    Internet interventions for mental health, including serious games, online programs, and apps, hold promise for increasing access to evidence-based treatments and prevention. Many such interventions have been shown to be effective and acceptable in trials; however, uptake and adherence outside of trials is seldom reported, and where it is, adherence at least, generally appears to be underwhelming. In response, an international Collaboration On Maximizing the impact of E-Therapy and Serious Gam...

  20. Space-Variant Post-Filtering for Wavefront Curvature Correction in Polar-Formatted Spotlight-Mode SAR Imagery

    Energy Technology Data Exchange (ETDEWEB)

    DOREN,NEALL E.

    1999-10-01

    Wavefront curvature defocus effects occur in spotlight-mode SAR imagery when reconstructed via the well-known polar-formatting algorithm (PFA) under certain imaging scenarios. These include imaging at close range, using a very low radar center frequency, utilizing high resolution, and/or imaging very large scenes. Wavefront curvature effects arise from the unrealistic assumption of strictly planar wavefronts illuminating the imaged scene. This dissertation presents a method for the correction of wavefront curvature defocus effects under these scenarios, concentrating on the generalized squint-mode imaging scenario and its computational aspects. This correction is accomplished through an efficient one-dimensional, image-domain filter applied as a post-processing step to PFA. This post-filter, referred to as SVPF, is precalculated from a theoretical derivation of the wavefront curvature effect and varies as a function of scene location. Prior to SVPF, severe restrictions were placed on the imaged scene size in order to avoid defocus effects under these scenarios when using PFA. The SVPF algorithm eliminates the need for scene size restrictions when wavefront curvature effects are present, correcting for wavefront curvature in broadside as well as squinted collection modes while imposing little additional computational penalty for squinted images. This dissertation covers the theoretical development, implementation and analysis of the generalized, squint-mode SVPF algorithm (of which broadside mode is a special case) and provides examples of its capabilities and limitations as well as offering guidelines for maximizing its computational efficiency. Tradeoffs between the PFA/SVPF combination and other spotlight-mode SAR image formation techniques are discussed with regard to computational burden, image quality, and imaging geometry constraints. It is demonstrated that other methods fail to exhibit a clear computational advantage over polar-formatting in conjunction

  1. Learning in neural networks based on a generalized fluctuation theorem

    Science.gov (United States)

    Hayakawa, Takashi; Aoyagi, Toshio

    2015-11-01

    Information maximization has been investigated as a possible mechanism of learning governing the self-organization that occurs within the neural systems of animals. Within the general context of models of neural systems bidirectionally interacting with environments, however, the role of information maximization remains to be elucidated. For bidirectionally interacting physical systems, universal laws describing the fluctuation they exhibit and the information they possess have recently been discovered. These laws are termed fluctuation theorems. In the present study, we formulate a theory of learning in neural networks bidirectionally interacting with environments based on the principle of information maximization. Our formulation begins with the introduction of a generalized fluctuation theorem, employing an interpretation appropriate for the present application, which differs from the original thermodynamic interpretation. We analytically and numerically demonstrate that the learning mechanism presented in our theory allows neural networks to efficiently explore their environments and optimally encode information about them.

  2. Maximizing band gaps in plate structures

    DEFF Research Database (Denmark)

    Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard

    2006-01-01

    Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite periodic plate using Bloch theory, which conveniently reduces the maximization problem to that of a single base cell. Secondly, we construct a finite periodic plate using a number of the optimized base cells in a postprocessed version. The dynamic properties of the finite plate are investigated theoretically and experimentally and the issue of finite size effects is addressed.

  3. The key kinematic determinants of undulatory underwater swimming at maximal velocity.

    Science.gov (United States)

    Connaboy, Chris; Naemi, Roozbeh; Brown, Susan; Psycharakis, Stelios; McCabe, Carla; Coleman, Simon; Sanders, Ross

    2016-01-01

    The optimisation of undulatory underwater swimming is highly important to competitive swimming performance. Nineteen kinematic variables were identified from previous research undertaken to assess undulatory underwater swimming performance. The purpose of the present study was to determine which kinematic variables were key to the production of maximal undulatory underwater swimming velocity. Kinematic data at maximal undulatory underwater swimming velocity were collected from 17 skilled swimmers. A series of separate backward-elimination analysis of covariance models was produced with cycle frequency and cycle length as dependent variables (DVs) and participant as a fixed factor, as including cycle frequency and cycle length would explain 100% of the maximal swimming velocity variance. The covariates identified in the cycle-frequency and cycle-length models were used to form the saturated model for maximal swimming velocity. The final parsimonious model identified three covariates (maximal knee joint angular velocity, maximal ankle angular velocity and knee range of movement) as determinants of the variance in maximal swimming velocity (adjusted r² = 0.929). However, when participant was removed as a fixed factor there was a large reduction in explained variance (adjusted r² = 0.397) and only maximal knee joint angular velocity continued to contribute significantly, highlighting its importance to the production of maximal swimming velocity. The reduction in explained variance suggests an emphasis on inter-individual differences in undulatory underwater swimming technique and/or anthropometry. Future research should examine the efficacy of other anthropometric, kinematic and coordination variables to better understand the production of maximal swimming velocity and consider the importance of individual undulatory underwater swimming techniques when interpreting the data.

  4. Thermodynamics in modified Brans-Dicke gravity with entropy corrections

    Energy Technology Data Exchange (ETDEWEB)

    Rani, Shamaila; Jawad, Abdul; Nawaz, Tanzeela [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Manzoor, Rubab [University of Management and Technology, Department of Mathematics, Lahore (Pakistan)

    2018-01-15

    In this paper, we investigate thermodynamics in the framework of the recently proposed theory called modified Brans-Dicke gravity (Kofinas et al. in Class Quantum Gravity 33:15, 2016). For this purpose, we develop the generalized second law of thermodynamics by assuming the usual entropy as well as its corrected forms (logarithmic and power-law corrected) on the apparent and event horizons. To obtain a clear view of the thermodynamic laws, power-law forms of the scalar field and scale factor are assumed. We evaluate the results graphically and find that the generalized second law of thermodynamics holds in most of the cases. (orig.)

  5. Thermodynamics in modified Brans-Dicke gravity with entropy corrections

    International Nuclear Information System (INIS)

    Rani, Shamaila; Jawad, Abdul; Nawaz, Tanzeela; Manzoor, Rubab

    2018-01-01

    In this paper, we investigate thermodynamics in the framework of the recently proposed theory called modified Brans-Dicke gravity (Kofinas et al. in Class Quantum Gravity 33:15, 2016). For this purpose, we develop the generalized second law of thermodynamics by assuming the usual entropy as well as its corrected forms (logarithmic and power-law corrected) on the apparent and event horizons. To obtain a clear view of the thermodynamic laws, power-law forms of the scalar field and scale factor are assumed. We evaluate the results graphically and find that the generalized second law of thermodynamics holds in most of the cases. (orig.)

  6. Kinetic theory in maximal-acceleration invariant phase space

    International Nuclear Information System (INIS)

    Brandt, H.E.

    1989-01-01

    A vanishing directional derivative of a scalar field along particle trajectories in maximal-acceleration invariant phase space is identical in form to the ordinary covariant Vlasov equation in curved spacetime in the presence of both gravitational and nongravitational forces. A natural foundation is thereby provided for a covariant kinetic theory of particles in maximal-acceleration invariant phase space. (orig.)

  7. Radiative corrections of semileptonic hyperon decays Pt. 1

    International Nuclear Information System (INIS)

    Margaritisz, T.; Szegoe, K.; Toth, K.

    1982-07-01

    The beta decay of free quarks is studied in the framework of the standard SU(2) x U(1) model of weak and electromagnetic interactions. The so-called 'weak' part of the radiative corrections is evaluated to order α in the one-loop approximation using a renormalization scheme which adjusts the counter terms to the electric charge and to the masses of the charged and neutral vector bosons, Msub(w) and Msub(o), respectively. The obtained result is, to a good approximation, equal to the 'weak' part of the radiative corrections for the semileptonic decay of any hyperon. It is shown in the model that the methods which work excellently in the case of the 'weak' corrections do not, in general, provide us with the dominant part of the 'photonic' corrections. (author)

  8. A general framework and review of scatter correction methods in cone beam CT. Part 2: Scatter estimation approaches

    International Nuclear Information System (INIS)

    Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus

    2011-01-01

    The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper where a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness they present an overview of measurement-based methods, but the main topic is the theoretically more demanding models, such as analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework, which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigations are indicated. Finally, the authors comment on some special issues and applications, such as the bow-tie filter, offset detector, truncated data, and dual-source CT.
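    The beam-scatter-kernel superposition approach mentioned above can be sketched in one dimension (a simplified illustration; the kernel shape and scatter fraction are assumed values, and real kernels are object-dependent and spatially variant, as the review discusses): the scatter signal at each detector pixel is estimated by superposing a broad kernel weighted by the primary intensity at every pixel.

```python
import math

def scatter_estimate(primary, width=15.0, scatter_fraction=0.08):
    """Superpose a broad Gaussian kernel, weighted by the primary signal."""
    radius = 50
    kernel = [math.exp(-0.5 * (i / width) ** 2) for i in range(-radius, radius + 1)]
    norm = sum(kernel)
    out = []
    for i in range(len(primary)):
        s = 0.0
        for k, w in enumerate(kernel):
            j = i + k - radius
            if 0 <= j < len(primary):
                s += w * primary[j]
        out.append(scatter_fraction * s / norm)   # smooth, low-frequency scatter
    return out

primary = [1.0] * 200
primary[80:120] = [0.2] * 40                      # shadow of a dense object
scatter = scatter_estimate(primary)
corrected = [p - s for p, s in zip(primary, scatter)]  # subtractive compensation
```

    The estimate is smooth and low in amplitude relative to the primary, which is the characteristic scatter behaves this class of kernel models exploits.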

  9. RSA and its Correctness through Modular Arithmetic

    Science.gov (United States)

    Meelu, Punita; Malik, Sitender

    2010-11-01

    To ensure the security of business applications, the business sectors use Public Key Cryptographic Systems (PKCS). An RSA system generally belongs to the category of PKCS for both encryption and authentication. This paper gives an introduction to RSA through its encryption and decryption schemes and the mathematical background, which includes theorems for combining modular equations and the correctness of RSA. In short, this paper explains some of the maths concepts that RSA is based on, and then provides a complete proof that RSA works correctly. We can prove the correctness of RSA through the combined process of encryption and decryption based on the Chinese Remainder Theorem (CRT) and Euler's theorem. However, there is no mathematical proof that RSA is secure; everyone takes that on trust.
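    The combined encrypt/decrypt correctness argument can be demonstrated with a toy example (illustrative textbook-sized primes only; real RSA uses keys of 2048 bits or more plus padding). Decryption here uses the CRT recombination on which the correctness proof rests:

```python
# Toy RSA round-trip with CRT decryption (illustrative numbers only).
p, q = 61, 53
n = p * q                             # public modulus, 3233
e = 17                                # public exponent, coprime to (p-1)(q-1)
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent via modular inverse

def encrypt(m):
    return pow(m, e, n)

def decrypt_crt(c):
    # Chinese Remainder Theorem: work modulo p and q separately, recombine.
    m_p = pow(c, d % (p - 1), p)      # Euler/Fermat reduces the exponent mod p-1
    m_q = pow(c, d % (q - 1), q)
    q_inv = pow(q, -1, p)
    h = (q_inv * (m_p - m_q)) % p
    return m_q + h * q

m = 65
c = encrypt(m)
print(c, decrypt_crt(c))              # 2790 65
```

    The exponent reductions d mod (p-1) and d mod (q-1) are exactly where Euler's theorem enters the correctness proof; the CRT step then stitches the two residues back into the unique message modulo n.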

  10. Pairing correction of particle-hole state densities for two kinds of Fermions

    International Nuclear Information System (INIS)

    Fu, C.Y.

    1985-01-01

    Pairing corrections in particle-hole (exciton) state-density formulas used in precompound nuclear reaction theories are, strictly speaking, dependent on the nuclear excitation energy U and the exciton number n. A general formula for (U,n)-dependent pairing corrections was derived in an earlier paper for exciton state-density formulas for one kind of Fermion. In the present paper, a similar derivation is made for two kinds of Fermions. It is shown that the constant-pairing-energy correction used in standard level-density formulas, such as U₀ in Gilbert and Cameron, is a limiting case of the present general (U,n)-dependent results.

  11. Maximal slicing of D-dimensional spherically symmetric vacuum spacetime

    International Nuclear Information System (INIS)

    Nakao, Ken-ichi; Abe, Hiroyuki; Yoshino, Hirotaka; Shibata, Masaru

    2009-01-01

    We study the foliation of a D-dimensional spherically symmetric black-hole spacetime with D≥5 by two kinds of one-parameter families of maximal hypersurfaces: a reflection-symmetric foliation with respect to the wormhole slot and a stationary foliation that has an infinitely long trumpetlike shape. As in the four-dimensional case, the foliations by the maximal hypersurfaces avoid the singularity irrespective of the dimensionality. This indicates that the maximal slicing condition will be useful for simulating higher-dimensional black-hole spacetimes in numerical relativity. For the case of D=5, we present analytic solutions of the intrinsic metric, the extrinsic curvature, the lapse function, and the shift vector for the foliation by the stationary maximal hypersurfaces. These data will be useful for checking five-dimensional numerical-relativity codes based on the moving puncture approach.

  12. Beyond hypercorrection: remembering corrective feedback for low-confidence errors.

    Science.gov (United States)

    Griffiths, Lauren; Higham, Philip A

    2018-02-01

    Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.

  13. Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location

    DEFF Research Database (Denmark)

    Fennema-Notestine, Christine; Ozyurt, I Burak; Clark, Camellia P

    2006-01-01

    Extractor (BSE, Sandor and Leahy [1997] IEEE Trans Med Imag 16:41-54; Shattuck et al. [2001] Neuroimage 13:856-876) to manually stripped images. The methods were applied to uncorrected and bias-corrected datasets; Legacy and Contemporary T1-weighted image sets; and four diagnostic groups (depressed...... distances, and an Expectation-Maximization algorithm. Methods tended to perform better on contemporary datasets; bias correction did not significantly improve method performance. Mesial sections were most difficult for all methods. Although AD image sets were most difficult to strip, HWA and BSE were more...

  14. Left ventricle expands maximally preceding end-diastole. Radionuclide ventriculography study

    International Nuclear Information System (INIS)

    Horinouchi, Osamu

    2002-01-01

    It has been considered that the left ventricle (LV) expands maximally at end-diastole. However, is it exactly coincident with this point? This study aimed to determine whether the maximal expansion of the LV coincides with the peak of the R wave on the electrocardiogram. Thirty-three angina pectoris patients with normal LV motion were examined using radionuclide ventriculography. Data were obtained from every 30 ms backward frame from the peak of the R wave. All patients showed that the time of maximal expansion preceded the peak of the R wave. The intervals from the peak of the R wave and from the onset of the P wave to maximal expansion of the LV were 105±29 ms and 88±25 ms, respectively. This period corresponds to the timing of maximal excursion of the mitral valve by atrial contraction, and the centripetal motion of the LV without losing its volume before end-diastole may be explained by the movement of the mitral valve toward closure. These findings suggest that the LV expands maximally between the P and R waves after atrial contraction, preceding the peak of the R wave conventionally regarded as end-diastole. (author)

  15. Interacting holographic dark energy with logarithmic correction

    International Nuclear Information System (INIS)

    Jamil, Mubasher; Farooq, M. Umar

    2010-01-01

    The holographic dark energy (HDE) is considered to be the most promising candidate of dark energy. Its definition is motivated from the entropy-area relation which depends on the theory of gravity under consideration. Recently a new definition of HDE is proposed with the help of quantum corrections to the entropy-area relation in the setup of loop quantum cosmology. Employing this new definition, we investigate the model of interacting dark energy and derive its effective equation of state. Finally we establish a correspondence between generalized Chaplygin gas and entropy-corrected holographic dark energy

  16. T-branes and α′-corrections

    Energy Technology Data Exchange (ETDEWEB)

    Marchesano, Fernando; Schwieger, Sebastian [Instituto de Física Teórica UAM-CSIC, Cantoblanco, 28049 Madrid (Spain)]

    2016-11-21

    We study α′-corrections in multiple D7-brane configurations with non-commuting profiles for their transverse position fields. We focus on T-brane systems, crucial in F-theory GUT model building. There α′-corrections modify the D-term piece of the BPS equations which, already at leading order, require a non-primitive Abelian worldvolume flux background. We find that α′-corrections may either i) leave this flux background invariant, ii) modify the Abelian non-primitive flux profile, or iii) deform it to a non-Abelian profile. The last case typically occurs when primitive fluxes, a necessary ingredient to build 4d chiral models, are added to the system. We illustrate these three cases by solving the α′-corrected D-term equations in explicit examples, and describe their appearance in more general T-brane backgrounds. Finally, we discuss implications of our findings for F-theory GUT local models.

  17. Quantum corrections to inflaton and curvaton dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Markkanen, Tommi [Helsinki Institute of Physics and Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki (Finland); Tranberg, Anders, E-mail: tommi.markkanen@helsinki.fi, E-mail: anders.tranberg@nbi.dk [Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2012-11-01

    We compute the fully renormalized one-loop effective action for two interacting and self-interacting scalar fields in FRW space-time. We then derive and solve the quantum corrected equations of motion both for fields that dominate the energy density (such as an inflaton) and fields that do not (such as a subdominant curvaton). In particular, we introduce quantum corrected Friedmann equations that determine the evolution of the scale factor. We find that in general, gravitational corrections are negligible for the field dynamics. For the curvaton-type fields this leaves only the effect of the flat-space Coleman-Weinberg-type effective potential, and we find that this effect can be significant. For the inflaton case, both the corrections to the potential and the Friedmann equations can lead to behaviour very different from the classical evolution, even to the point that inflation, although present at tree level, can be absent at one-loop order.

  18. Maximization of regional probabilities using Optimal Surface Graphs

    DEFF Research Database (Denmark)

    Arias Lorza, Andres M.; Van Engelen, Arna; Petersen, Jens

    2018-01-01

    Purpose: We present a segmentation method that maximizes regional probabilities enclosed by coupled surfaces using an Optimal Surface Graph (OSG) cut approach. This OSG cut determines the globally optimal solution given a graph constructed around an initial surface. While most methods for vessel wall segmentation only use edge information, we show that maximizing regional probabilities using an OSG improves the segmentation results. We applied this to automatically segment the vessel wall of the carotid artery in magnetic resonance images. Methods: First, voxel-wise regional probability maps were obtained using a Support Vector Machine classifier trained on local image features. Then, the OSG segments the regions which maximize the regional probabilities considering smoothness and topological constraints. Results: The method was evaluated on 49 carotid arteries from 30 subjects...

  19. Kinematic power corrections in off-forward hard reactions.

    Science.gov (United States)

    Braun, V M; Manashov, A N

    2011-11-11

    We develop a general approach to the calculation of the kinematic corrections ∝ t/Q², m²/Q² in hard processes which involve momentum transfer from the initial to the final hadron state. As the principal result, the complete expression is derived for the time-ordered product of two electromagnetic currents that includes all kinematic corrections to twist-four accuracy. The results are immediately applicable, e.g., to the studies of deeply virtual Compton scattering.

  20. On the Correct Formulation of the First Law of Thermodynamics

    Science.gov (United States)

    Kalanov, Temur Z.

    2006-04-01

    The critical analysis of the generally accepted formulation of the first law of thermodynamics is proposed. The purpose of the analysis is to prove that the standard formulation contains a mathematical error and to offer the correct formulation. The correct formulation is based on the concepts of function and differential of function. Indeed, if the internal energy U of a system is a function of two independent variables Q = Q(t) (describing the thermal form of energy) and R = R(t) (describing the non-thermal form of energy), then the correct formulation of the first law of thermodynamics is dU(Q,R)/dt = (∂U/∂Q)_R dQ/dt + (∂U/∂R)_Q dR/dt, where t and −(∂U/∂R)_Q/(∂U/∂Q)_R are the time and the measure of mutual transformation of the forms of energy, correspondingly. General conclusion: standard thermodynamics is incorrect.

  1. Singularity Structure of Maximally Supersymmetric Scattering Amplitudes

    DEFF Research Database (Denmark)

    Arkani-Hamed, Nima; Bourjaily, Jacob L.; Cachazo, Freddy

    2014-01-01

    We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic singularities and is free of any poles at infinity—properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA).

  2. Identities on maximal subgroups of GLn(D)

    International Nuclear Information System (INIS)

    Kiani, D.; Mahdavi-Hezavehi, M.

    2002-04-01

    Let D be a division ring with centre F. Assume that M is a maximal subgroup of GLn(D), n≥1, such that Z(M) is algebraic over F. Group identities on M and polynomial identities on the F-linear hull F[M] are investigated. It is shown that if F[M] is a PI-algebra, then [D:F] < ∞. Now assume that N is a subnormal subgroup of GLn(D) and M is a maximal subgroup of N. If M satisfies a group identity, it is shown that M is abelian-by-finite. (author)

  3. Adaptive maximal poisson-disk sampling on surfaces

    KAUST Repository

    Yan, Dongming

    2012-01-01

    In this paper, we study the generation of maximal Poisson-disk sets with varying radii on surfaces. Based on the concepts of power diagram and regular triangulation, we present a geometric analysis of gaps in such disk sets on surfaces, which is the key ingredient of the adaptive maximal Poisson-disk sampling framework. Moreover, we adapt the presented sampling framework for remeshing applications. Several novel and efficient operators are developed for improving the sampling/meshing quality over the state-of-the-art. © 2012 ACM.

  4. Increases of QT dispersion, corrected QT dispersion and QT interval in young healthy individuals during Ramadan fasting

    Directory of Open Access Journals (Sweden)

    Moradmand S

    2003-06-01

    Ramadan fasting is one of the most important religious duties of Muslims, and its effect on the heart has not yet been determined. Our objective was to evaluate the effect of Ramadan fasting on ventricular repolarization as assessed by the QT interval, corrected QT interval, QT dispersion, and corrected QT dispersion. Sixty healthy subjects aged 20 to 35 years were included in this study. The QT interval, corrected QT interval (QTc), QT dispersion, QTc dispersion, RR interval, and QRS axis were measured in 12-lead surface electrocardiograms, once during fasting (10 to 11.5 hours of absolute fasting from food and liquid) and again 15 to 60 minutes after eating food at sunset. All of the subjects had been fasting 11 to 12 hours each day for at least 25 days during Ramadan. The study was performed at Amir Alam hospital in the year 2000. Maximal QT interval, mean QT interval, and RR interval were longer during fasting (P<0.05), and both QT dispersion and QTc dispersion were increased (P<0.05) (QT dispersion: mean±SD = 57.2±20.1 ms during fasting vs 41.6±15.1 ms after the meal; QTc dispersion: 75.4±24.6 ms during fasting vs 64.1±22.8 ms after the meal). However, mean QTc interval, maximal QTc interval, and QRS axis showed no significant difference. Prolongation of the QT interval and RR interval during fasting, with no significant change in the corrected QT interval, may primarily suggest that the prolonged RR interval accounts for the unchanged QTc interval. But the increases of QT dispersion and corrected QT dispersion (QTc dispersion) during fasting, which are more reliable indicators of ventricular repolarization, support the idea that ventricular repolarization may be changed during Ramadan fasting. QT dispersion in cardiac patients has been shown to increase from normal values of 30-40 ms to 64-138 ms, but in our study the increases did not reach this critical value.
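    The rate-corrected QT interval compensates for the dependence of QT on heart rate. The abstract does not state which correction formula was used; a common choice is Bazett's formula, QTc = QT/√RR with RR in seconds, sketched here for illustration:

```python
from math import sqrt

def qtc_bazett(qt_ms: float, rr_ms: float) -> float:
    """Bazett rate-corrected QT interval; both inputs in milliseconds."""
    rr_s = rr_ms / 1000.0          # Bazett's formula takes RR in seconds
    return qt_ms / sqrt(rr_s)

# A longer RR interval (slower heart rate) lengthens the raw QT while the
# corrected QTc can stay essentially unchanged, as reported during fasting:
print(round(qtc_bazett(400, 1000)))  # 400 ms QT at RR = 1000 ms
print(round(qtc_bazett(420, 1100)))  # 420 ms QT at RR = 1100 ms
```

Both hypothetical readings give a QTc near 400 ms, illustrating how a prolonged QT with a prolonged RR can leave the corrected interval unchanged.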

  5. Height drift correction in non-raster atomic force microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Travis R. [Department of Mathematics, University of California Los Angeles, Los Angeles, CA 90095 (United States); Ziegler, Dominik [Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Brune, Christoph [Institute for Computational and Applied Mathematics, University of Münster (Germany); Chen, Alex [Statistical and Applied Mathematical Sciences Institute, Research Triangle Park, NC 27709 (United States); Farnham, Rodrigo; Huynh, Nen; Chang, Jen-Mei [Department of Mathematics and Statistics, California State University Long Beach, Long Beach, CA 90840 (United States); Bertozzi, Andrea L., E-mail: bertozzi@math.ucla.edu [Department of Mathematics, University of California Los Angeles, Los Angeles, CA 90095 (United States); Ashby, Paul D., E-mail: pdashby@lbl.gov [Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2014-02-01

    We propose a novel method to detect and correct drift in non-raster scanning probe microscopy. In conventional raster scanning drift is usually corrected by subtracting a fitted polynomial from each scan line, but sample tilt or large topographic features can result in severe artifacts. Our method uses self-intersecting scan paths to distinguish drift from topographic features. Observing the height differences when passing the same position at different times enables the reconstruction of a continuous function of drift. We show that a small number of self-intersections is adequate for automatic and reliable drift correction. Additionally, we introduce a fitness function which provides a quantitative measure of drift correctability for any arbitrary scan shape. - Highlights: • We propose a novel height drift correction method for non-raster SPM. • Self-intersecting scans enable the distinction of drift from topographic features. • Unlike conventional techniques our method is unsupervised and tilt-invariant. • We introduce a fitness measure to quantify correctability for general scan paths.
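    The core observation of the method, that a height mismatch between two passes over the same point is pure drift, can be illustrated with a deliberately simplified sketch. The paper reconstructs a continuous time-dependent drift function; the hypothetical version below fits only a constant drift rate by least squares, and all names and data are illustrative:

```python
# At a self-intersection the tip passes the same (x, y) point at times
# t1 < t2, so the height mismatch dh = h(t2) - h(t1) is attributed to drift.
# Assuming a constant drift rate r, dh ≈ r * (t2 - t1), and least squares
# over all intersections gives a closed-form estimate of r.

def estimate_drift_rate(intersections):
    """intersections: list of (t1, t2, dh) tuples from self-crossing paths."""
    num = sum(dh * (t2 - t1) for t1, t2, dh in intersections)
    den = sum((t2 - t1) ** 2 for t1, t2, dh in intersections)
    return num / den

# Synthetic data with a true drift of 0.05 height-units per second:
data = [(0.0, 10.0, 0.5), (2.0, 6.0, 0.2), (1.0, 21.0, 1.0)]
print(round(estimate_drift_rate(data), 3))  # recovers 0.05
```

The actual method generalizes this to a piecewise drift function, which is why a small number of self-intersections spread over the scan suffices.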

  6. Is the β phase maximal?

    International Nuclear Information System (INIS)

    Ferrandis, Javier

    2005-01-01

    The current experimental determination of the absolute values of the CKM elements indicates that 2|V_ub/(V_cb V_us)| = (1−z), with z given by z = 0.19±0.14. This fact implies that irrespective of the form of the quark Yukawa matrices, the measured value of the SM CP phase β is approximately the maximum allowed by the measured absolute values of the CKM elements. This is β = (π/6 − z/3) for γ = (π/3 + z/3), which implies α = π/2. Alternatively, assuming that β is exactly maximal and using the experimental measurement sin(2β) = 0.726±0.037, the phase γ is predicted to be γ = (π/2 − β) = 66.3°±1.7°. The maximality of β, if confirmed by near-future experiments, may give us some clues as to the origin of CP violation.

  7. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.

    Science.gov (United States)

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2015-02-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
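    The augmentation idea described above can be sketched sequentially: start from a spanning forest (always chordal) and repeatedly try to add the remaining edges, keeping an edge only when the graph stays chordal. This is an illustrative simplification, not the paper's parallel algorithm, and all function names are mine; chordality is tested with maximum cardinality search (Tarjan-Yannakakis):

```python
def is_chordal(adj):
    """adj: list of neighbor sets. Maximum cardinality search + zero fill-in test."""
    n = len(adj)
    weight, visited, order = [0] * n, [False] * n, []
    for _ in range(n):                       # MCS: repeatedly pick the vertex
        v = max((u for u in range(n) if not visited[u]),
                key=lambda u: weight[u])     # with most visited neighbors
        visited[v] = True
        order.append(v)
        for w in adj[v]:
            if not visited[w]:
                weight[w] += 1
    pos = {v: i for i, v in enumerate(order)}
    for v in order:                          # reverse MCS order must be a
        earlier = [u for u in adj[v] if pos[u] < pos[v]]   # perfect elimination
        if earlier:                          # ordering iff graph is chordal
            m = max(earlier, key=lambda u: pos[u])
            if any(u != m and u not in adj[m] for u in earlier):
                return False
    return True

def maximal_chordal_subgraph(n, edges):
    adj = [set() for _ in range(n)]
    parent = list(range(n))
    def find(x):                             # union-find for the spanning forest
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    kept, rest = [], []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                         # forest edge: always chordal-safe
            parent[ru] = rv
            adj[u].add(v); adj[v].add(u)
            kept.append((u, v))
        else:
            rest.append((u, v))
    changed = True
    while changed:                           # augment until no edge can be added
        changed, still = False, []
        for u, v in rest:
            adj[u].add(v); adj[v].add(u)
            if is_chordal(adj):
                kept.append((u, v)); changed = True
            else:
                adj[u].discard(v); adj[v].discard(u)
                still.append((u, v))
        rest = still
    return kept

# A 4-cycle is not chordal, so exactly one of its edges must be dropped:
print(len(maximal_chordal_subgraph(4, [(0, 1), (1, 2), (2, 3), (3, 0)])))  # 3
```

Because chordality is not monotone under edge addition, the outer loop retries rejected edges until a full pass adds nothing, which guarantees the result is maximal; the paper's contribution is making this augmentation step parallelizable.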

  8. Influence of rotation and FLR corrections on selfgravitational Jeans instability in quantum plasma

    International Nuclear Information System (INIS)

    Jain, Shweta; Sharma, Prerana; Chhajlani, R K

    2014-01-01

    In the present work, the self-gravitational instability of quantum plasma is investigated including the effects of finite Larmor radius (FLR) corrections and rotation. The formulation is done employing the quantum magnetohydrodynamic (QMHD) model. Plane wave solutions are applied to the linearized perturbed QMHD set of equations to obtain the general dispersion relation. The rotation is assumed to be only along the z-direction. The general dispersion relation is further reduced for the transverse and longitudinal directions of propagation. It is found that in the transverse direction of propagation the Jeans criterion is modified by the rotation, FLR, and quantum corrections, while in the longitudinal direction of propagation the Jeans criterion is modified by quantum corrections only. The growth rate of the perturbation is discussed numerically, including the effects of the considered FLR and quantum corrections. The growth rate is observed to be modified significantly by the quantum correction and FLR effects.

  9. Measuring specific, rather than generalized, cognitive deficits and maximizing between-group effect size in studies of cognition and cognitive change.

    Science.gov (United States)

    Silverstein, Steven M

    2008-07-01

    While cognitive impairment in schizophrenia is easy to demonstrate, it has been much more difficult to measure a specific cognitive process unconfounded by the influence of other cognitive processes and noncognitive factors (eg, sedation, low motivation) that affect test scores. With the recent interest in the identification of neurophysiology-linked cognitive probes for clinical trials, the issue of isolating specific cognitive processes has taken on increased importance. Recent advances in research design and psychometric theory regarding cognition research in schizophrenia demonstrate the importance of (1) maximizing between-group differences via reduction of measurement error during both test development and subsequent research and (2) the development and use of process-specific tasks in which theory-driven performance indices are derived across multiple conditions. Use of these 2 strategies can significantly advance both our understanding of schizophrenia and measurement sensitivity for clinical trials. Novel data-analytic strategies for analyzing change across multiple conditions and/or multiple time points also allow for increased reliability and greater measurement sensitivity than traditional strategies. Following discussion of these issues, trade-offs inherent to attempts to address psychometric issues in schizophrenia research are reviewed. Finally, additional considerations for maximizing sensitivity and real-world significance in clinical trials are discussed.

  10. [Risk of deaths from cardiovascular diseases in Polish urban population associated with changes in maximal daily temperature].

    Science.gov (United States)

    Rabczenko, Daniel; Wojtyniak, Bogdan; Kuchcik, Magdalena; Seroka, Wojciech

    2009-01-01

    The paper presents results of analysis of short-term effect of changes in maximal daily temperature on daily mortality from cardiovascular diseases in warm season in years 1999-2006. Analysis was carried out in six large Polish cities--Katowice, Kraków, Łódź, Poznań, Warszawa and Wrocław. Generalized additive models were used in the analysis. Potential confounding factors--long term changes of mortality, day of week and other meteorological factors (atmospheric pressure, humidity, mean wind speed) were taken into account during model building process. Analysis was done for two age groups--0-69 and 70 years and older. Significant, positive association between daily maximal temperature and risk of death from cardiovascular diseases was found only in older age group.

  11. Leading quantum gravitational corrections to scalar QED

    International Nuclear Information System (INIS)

    Bjerrum-Bohr, N.E.J.

    2002-01-01

    We consider the leading post-Newtonian and quantum corrections to the non-relativistic scattering amplitude of charged scalars in the combined theory of general relativity and scalar QED. The combined theory is treated as an effective field theory. This allows for a consistent quantization of the gravitational field. The appropriate vertex rules are extracted from the action, and the non-analytic contributions to the 1-loop scattering matrix are calculated in the non-relativistic limit. The non-analytical parts of the scattering amplitude, which are known to give the long range, low energy, leading quantum corrections, are used to construct the leading post-Newtonian and quantum corrections to the two-particle non-relativistic scattering matrix potential for two charged scalars. The result is discussed in relation to experimental verifications

  12. Quantum speedup in solving the maximal-clique problem

    Science.gov (United States)

    Chang, Weng-Long; Yu, Qi; Li, Zhaokai; Chen, Jiahui; Peng, Xinhua; Feng, Mang

    2018-03-01

    The maximal-clique problem, to find the maximally sized clique in a given graph, is classically an NP-complete computational problem, which has potential applications ranging from electrical engineering, computational chemistry, and bioinformatics to social networks. Here we develop a quantum algorithm to solve the maximal-clique problem for any graph G with n vertices with quadratic speedup over its classical counterparts, where the time and spatial complexities are reduced to, respectively, O(√(2^n)) and O(n^2). With respect to oracle-related quantum algorithms for NP-complete problems, we identify our algorithm as optimal. To justify the feasibility of the proposed quantum algorithm, we successfully solve a typical clique problem for a graph G with two vertices and one edge by carrying out a nuclear magnetic resonance experiment involving four qubits.
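    For scale, the classical exhaustive search that such Grover-style algorithms quadratically improve enumerates all 2^n vertex subsets. A minimal illustrative baseline (function name and test graph are mine, not from the paper):

```python
from itertools import combinations

def max_clique_bruteforce(n, edges):
    """O(2^n) exhaustive search: test vertex subsets from largest to
    smallest and return the first subset whose vertices are pairwise
    adjacent, i.e. a maximum clique."""
    adj = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all(frozenset((u, v)) in adj
                   for u, v in combinations(subset, 2)):
                return list(subset)
    return []

# Triangle 0-1-2 with a pendant edge 2-3: the maximum clique is {0, 1, 2}.
print(max_clique_bruteforce(4, [(0, 1), (0, 2), (1, 2), (2, 3)]))
```

The abstract's experimentally solved instance, a graph with two vertices and one edge, corresponds here to `max_clique_bruteforce(2, [(0, 1)])`.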

  13. Attenuation correction strategies for multi-energy photon emitters using SPECT

    International Nuclear Information System (INIS)

    Pretorius, P.H.; King, M.A.; Pan, T.S.

    1996-01-01

    The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction, or if it is better not to combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation-maximization reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: (1) the 93 keV attenuation map for attenuation correction, (2) the 185 keV attenuation map for attenuation correction, (3) a weighted mean obtained by combining the 93 keV and 185 keV maps, and (4) an ordered-subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2, and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 were under-estimated, although TCRs were comparable to those of the other locations. The weighted mean and ordered-subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately.

  14. Muscle mitochondrial capacity exceeds maximal oxygen delivery in humans

    DEFF Research Database (Denmark)

    Boushel, Robert Christopher; Gnaiger, Erich; Calbet, Jose A L

    2011-01-01

    Across a wide range of species and body mass a close matching exists between maximal conductive oxygen delivery and mitochondrial respiratory rate. In this study we investigated in humans how closely in-vivo maximal oxygen consumption (VO(2) max) is matched to state 3 muscle mitochondrial respira...

  15. Bias Correction with Jackknife, Bootstrap, and Taylor Series

    OpenAIRE

    Jiao, Jiantao; Han, Yanjun; Weissman, Tsachy

    2017-01-01

    We analyze the bias correction methods using jackknife, bootstrap, and Taylor series. We focus on the binomial model, and consider the problem of bias correction for estimating $f(p)$, where $f \in C[0,1]$ is arbitrary. We characterize the supremum norm of the bias of general jackknife and bootstrap estimators for any continuous functions, and demonstrate that in the delete-$d$ jackknife, different values of $d$ may lead to drastically different behavior in the jackknife. We show that in the binomial ...
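    The delete-1 jackknife bias correction in the binomial model can be made concrete with a worked example. The choice f(p) = p² is mine, picked because the unbiased estimator x(x−1)/(n(n−1)) is known in closed form, so the correction can be checked exactly:

```python
# Delete-1 jackknife bias correction for estimating f(p) = p^2 from
# x successes in n Bernoulli trials.  The plug-in estimator f(x/n) is
# biased upward by Var(p_hat) = p(1-p)/n; since f is quadratic, the
# jackknife removes this bias exactly.

def jackknife_p_squared(x, n):
    f = lambda p: p * p
    plug_in = f(x / n)
    # Leave-one-out estimates: x of the trials were 1, n - x were 0,
    # so there are only two distinct delete-1 values of p_hat.
    loo_mean = (x * f((x - 1) / (n - 1)) +
                (n - x) * f(x / (n - 1))) / n
    # Standard jackknife bias-corrected estimate:
    return n * plug_in - (n - 1) * loo_mean

print(jackknife_p_squared(4, 10))   # matches 4*3/(10*9), the unbiased estimator
```

For non-polynomial f the correction only reduces the bias by an order in 1/n rather than removing it, which is where the delicate behavior of the delete-d variants analyzed in the paper comes in.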

  16. Computer simulation of photorefractive keratectomy for the correction of myopia and hyperopia

    Science.gov (United States)

    Pinault, Pascal; L'Huillier, J. P.

    1996-01-01

    Photorefractive keratectomy (PRK) performed by means of the 193 nm excimer laser has stimulated considerable interest in the ophthalmic community because this new procedure has the potential to correct myopia, hyperopia, and astigmatism. The use of a laser beam to remove a controlled amount of tissue from the cornea implies that both the energy density of the laser beam and the tissue removal rate are accurately known. In addition, the tissue ablation profile required to achieve refractive correction must be predicted by optical calculations. This paper investigates: (1) optical computations based on a ray-tracing model to determine what anterior profile of the cornea is needed postoperatively for ametropia; (2) the maximal depth of the removed corneal tissue as a function of the treated ablation zone; and (3) the thickness of the ablated corneal lenticule at any distance from the optical axis. Relationships between these data are well fitted by polynomial regression curves in order to be useful as an algorithm in the computer-controlled delivery of the ArF laser beam.

  17. Non-Abelian black holes in D=5 maximal gauged supergravity

    International Nuclear Information System (INIS)

    Cvetic, M.; Lue, H.; Pope, C. N.

    2010-01-01

    We investigate static non-Abelian black hole solutions of anti-de Sitter (AdS) Einstein-Yang-Mills-dilaton gravity, which is obtained as a consistent truncation of five-dimensional maximal gauged supergravity. If the dilaton is (consistently) set to zero, the remaining equations of motion, with a spherically-symmetric ansatz, may be derived from a superpotential. The associated first-order equations admit an explicit solution supported by a non-Abelian SU(2) gauge potential, which has a logarithmically growing mass term. In an extremal limit the horizon geometry becomes AdS₂ × S³. If the dilaton is also excited, the equations of motion cannot easily be solved explicitly, but we obtain the asymptotic form of the more general non-Abelian black holes in this case. An alternative consistent truncation, in which the Yang-Mills fields are set to zero, also admits a description in terms of a superpotential. This allows us to construct explicit wormhole solutions (neutral spherically-symmetric domain walls). These solutions may be generalized to dimensions other than five.

  18. Maximization of learning speed in the motor cortex due to neuronal redundancy.

    Directory of Open Access Journals (Sweden)

    Ken Takiyama

    2012-01-01

    Full Text Available Many redundancies play functional roles in motor control and motor learning. For example, kinematic and muscle redundancies contribute to stabilizing posture and impedance control, respectively. Another redundancy is the number of neurons themselves; there are overwhelmingly more neurons than muscles, and many combinations of neural activation can generate identical muscle activity. The functional roles of this neuronal redundancy remain unknown. Analysis of a redundant neural network model makes it possible to investigate these functional roles while varying the number of model neurons and holding constant the number of output units. Our analysis reveals that learning speed reaches its maximum value if and only if the model includes sufficient neuronal redundancy. This analytical result does not depend on whether the distribution of the preferred direction is uniform or skewed bimodal, both of which have been reported in neurophysiological studies. Neuronal redundancy maximizes learning speed, even if the neural network model includes recurrent connections, a nonlinear activation function, or nonlinear muscle units. Furthermore, our results do not rely on the shape of the generalization function. The results of this study suggest that one of the functional roles of neuronal redundancy is to maximize learning speed.

  19. PedMine – A simulated annealing algorithm to identify maximally unrelated individuals in population isolates

    OpenAIRE

    Douglas, Julie A.; Sandefur, Conner I.

    2008-01-01

    In family-based genetic studies, it is often useful to identify a subset of unrelated individuals. When such studies are conducted in population isolates, however, most if not all individuals are often detectably related to each other. To identify a set of maximally unrelated (or equivalently, minimally related) individuals, we have implemented simulated annealing, a general-purpose algorithm for solving difficult combinatorial optimization problems. We illustrate our method on data from a ge...
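    The general simulated-annealing idea behind such a tool can be sketched as follows. The objective (summed pairwise kinship for a fixed-size subset), the swap move, and the geometric cooling schedule are illustrative assumptions, not PedMine's actual implementation:

```python
import math
import random

def anneal_unrelated(kinship, subset_size, steps=20000, t0=1.0, cooling=0.999, seed=0):
    """Pick `subset_size` individuals minimizing summed pairwise kinship.

    Generic simulated-annealing sketch (the real PedMine objective and move
    set may differ). `kinship` is a symmetric matrix; kinship[i][j] == 0
    means individuals i and j are unrelated.
    """
    rng = random.Random(seed)
    n = len(kinship)
    current = rng.sample(range(n), subset_size)

    def cost(subset):
        return sum(kinship[a][b] for i, a in enumerate(subset) for b in subset[i + 1:])

    cur_cost = cost(current)
    best, best_cost = list(current), cur_cost
    t = t0
    for _ in range(steps):
        # Propose swapping one selected individual for an unselected one.
        cand = list(current)
        out_idx = rng.randrange(subset_size)
        chosen = set(current)
        cand[out_idx] = rng.choice([i for i in range(n) if i not in chosen])
        cand_cost = cost(cand)
        # Metropolis rule: always accept improvements, sometimes accept worse moves.
        if cand_cost <= cur_cost or rng.random() < math.exp((cur_cost - cand_cost) / t):
            current, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = list(current), cur_cost
        t *= cooling
    return sorted(best), best_cost

# Toy pedigree with two sib pairs, (0,1) and (2,3), kinship 0.25 within pairs:
kinship = [
    [0.0, 0.25, 0.0, 0.0],
    [0.25, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.25],
    [0.0, 0.0, 0.25, 0.0],
]
subset, total_kinship = anneal_unrelated(kinship, 2)
# e.g. one individual from each sib pair, with total kinship 0.0
```

    The occasional acceptance of worse subsets is what lets the search escape local minima, which is why simulated annealing suits this combinatorial problem better than pure greedy selection.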

  20. Teleportation of an unknown bipartite state via non-maximally entangled two-particle state

    Institute of Scientific and Technical Information of China (English)

    Cao Hai-Jing; Guo Yan-Qing; Song He-Shan

    2006-01-01

    In this paper a new scheme for teleporting an unknown entangled state of two particles is proposed. To weaken the requirement for the quantum channel, without loss of generality, the two communicators share only a non-maximally entangled two-particle state. Teleportation can be probabilistically realized if the sender performs Bell-state measurements and a Hadamard transformation and the receiver introduces two auxiliary particles, performs a G-not operation, single-qubit measurements, and appropriate unitary transformations. The probability of successful teleportation is determined by the smaller of the absolute values of the quantum channel's coefficients.

  1. Mass corrections in deep-inelastic scattering

    International Nuclear Information System (INIS)

    Gross, D.J.; Treiman, S.B.; Wilczek, F.A.

    1977-01-01

    The moment sum rules for deep-inelastic lepton scattering are expected, for asymptotically free field theories, to display a characteristic pattern of logarithmic departures from scaling at large enough Q^2. In the large-Q^2 limit these patterns do not depend on hadron or quark masses m. For modest values of Q^2 one expects corrections at the level of powers of m^2/Q^2. We discuss the question whether these mass effects are accessible in perturbation theory, as applied to the twist-2 Wilson coefficients and more generally. Our conclusion is that some part of the mass effects must arise from a nonperturbative origin. We also discuss the corrections which arise from higher orders in perturbation theory for very large Q^2, where mass effects can perhaps be ignored. The emphasis here is on a characterization of the (Q^2, x) domain where higher-order corrections are likely to be unimportant.

  2. Asymptotic Expansions of Generalized Nevanlinna Functions and their Spectral Properties

    NARCIS (Netherlands)

    Derkach, Vladimir; Hassi, Seppo; de Snoo, Hendrik

    2007-01-01

    Asymptotic expansions of generalized Nevanlinna functions Q are investigated by means of a factorization model involving a part of the generalized zeros and poles of nonpositive type of the function Q. The main results in this paper arise from the explicit construction of maximal Jordan chains in

  3. Detected-jump-error-correcting quantum codes, quantum error designs, and quantum computation

    International Nuclear Information System (INIS)

    Alber, G.; Mussinger, M.; Beth, Th.; Charnes, Ch.; Delgado, A.; Grassl, M.

    2003-01-01

    The recently introduced detected-jump-correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one-detected-jump-error-correcting quantum codes is shown, and the optimal redundancy, encoding, and recovery as well as general properties of detected-jump-error-correcting quantum codes are discussed. By the use of design theory, multiple-jump-error-correcting quantum codes can be constructed. The performance of one-jump-error-correcting quantum codes under nonideal conditions is studied numerically by simulating a quantum memory and Grover's algorithm.

  4. Enumerating all maximal frequent subtrees in collections of phylogenetic trees.

    Science.gov (United States)

    Deepak, Akshay; Fernández-Baca, David

    2014-01-01

    A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, for computing congruence indices, and for identifying horizontal gene transfer events. We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees.
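    As a much-simplified illustration of the frequent-pattern idea (not the mfst-miner algorithm itself, which operates on full tree topologies), one can count clades, i.e. subtree leaf sets, across a collection of trees and keep the maximal ones that meet a support threshold:

```python
from collections import Counter

def maximal_frequent_clades(trees, min_support):
    """Count clades (leaf sets) across trees; keep the frequent, maximal ones.

    Simplified sketch of the frequent-subtree idea: each tree is given as a
    collection of clades (frozensets of taxon names). A real maximal frequent
    subtree miner works on tree topologies, not bare leaf sets.
    """
    counts = Counter(clade for tree in trees for clade in set(tree))
    frequent = {c for c, cnt in counts.items() if cnt >= min_support}
    # Maximal = not a proper subset of another frequent clade.
    return {c for c in frequent if not any(c < other for other in frequent)}

trees = [
    {frozenset("AB"), frozenset("ABC")},
    {frozenset("AB"), frozenset("ABD")},
    {frozenset("AB"), frozenset("ABC")},
]
result = maximal_frequent_clades(trees, min_support=2)
# {'A','B','C'} appears twice and absorbs the ubiquitous but smaller {'A','B'}
```

    The maximality filter is the key point of the paper: reporting only patterns not contained in larger frequent patterns keeps the output small while losing no information about what the trees share.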

  5. A constrained maximization formulation to analyze deformation of fiber reinforced elastomeric actuators

    Science.gov (United States)

    Singh, Gaurav; Krishnan, Girish

    2017-06-01

    Fiber reinforced elastomeric enclosures (FREEs) are soft and smart pneumatic actuators that deform in a predetermined fashion upon inflation. This paper analyzes the deformation behavior of FREEs by formulating a simple calculus of variations problem that involves constrained maximization of the enclosed volume. The model accurately captures the deformed shape for FREEs with any general fiber angle orientation, and its relation with actuation pressure, material properties and applied load. First, the accuracy of the model is verified with existing literature and experiments for the popular McKibben pneumatic artificial muscle actuator with two equal and opposite families of helically wrapped fibers. Then, the model is used to predict and experimentally validate the deformation behavior of novel rotating-contracting FREEs, for which no prior literature exists. The generality of the model enables conceptualization of novel FREEs whose fiber orientations vary arbitrarily along the geometry. Furthermore, the model is expected to be useful in the design synthesis of fiber reinforced elastomeric actuators for general axisymmetric desired motion and output force requirements.

  6. Manifold corrections on spinning compact binaries

    International Nuclear Information System (INIS)

    Zhong Shuangying; Wu Xin

    2010-01-01

    This paper deals mainly with a discussion of three new manifold correction methods and three existing ones, which can numerically preserve or correct all integrals in the conservative post-Newtonian Hamiltonian formulation of spinning compact binaries. Two of them are listed here. One is a new momentum-position scaling scheme for complete consistency of both the total energy and the magnitude of the total angular momentum, and the other is Nacozy's approach with least-squares correction of the four integrals including the total energy and the total angular momentum vector. The post-Newtonian contributions, the spin effects, and the classification of orbits play an important role in the effectiveness of these six manifold corrections. They are all nearly equivalent in correcting the integrals at the level of the machine epsilon for the pure Kepler problem. Once the third-order post-Newtonian contributions are added to the pure orbital part, three of these corrections have only minor effects on controlling the errors of these integrals. When the spin effects are also included, the effectiveness of Nacozy's approach becomes further weakened, and it even becomes useless in the chaotic case. In all cases tested, the new momentum-position scaling scheme always shows the optimal performance. It requires only a small additional computational cost when the spin effects exist and several time-saving techniques are used. As an interesting case, the efficiency of the correction for chaotic eccentric orbits is generally better than that for quasicircular regular orbits. Besides this, the corrected fast Lyapunov indicators and Lyapunov exponents of chaotic eccentric orbits are large as compared with the uncorrected counterparts. The amplification is a true expression of the original dynamical behavior. With the aid of both the manifold correction added to a certain low-order integration algorithm as a fast and high-precision device and the fast Lyapunov

  7. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap are, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
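    The core device, approximating an intractable integral by a Gaussian expansion around the integrand's mode, can be sketched in one dimension (a generic illustration of the Laplace approximation, not the authors' spatial-GLMM code):

```python
import math

def laplace_approx(h, d2h, mode):
    """Laplace approximation to the integral of exp(h(u)) du.

    Expanding h to second order around its mode m gives
    integral exp(h) du ~ exp(h(m)) * sqrt(2*pi / -h''(m)).
    This is the device behind Laplace-approximate likelihoods for spatial
    GLMMs, here applied to a one-dimensional integrand.
    """
    return math.exp(h(mode)) * math.sqrt(2 * math.pi / -d2h(mode))

def trapezoid(h, lo, hi, n=100000):
    """Brute-force reference value for the same integral."""
    step = (hi - lo) / n
    return step * sum(math.exp(h(lo + i * step)) for i in range(n + 1))

h = lambda u: -math.cosh(u)    # log-integrand with its mode at u = 0
d2h = lambda u: -math.cosh(u)  # second derivative of -cosh(u)
approx = laplace_approx(h, d2h, 0.0)
exact = trapezoid(h, -10.0, 10.0)
# approx is about 0.922 versus roughly 0.842 numerically:
# close for a single-point expansion, and it gets better as the
# integrand becomes more sharply peaked.
```

    In the GLMM setting the integrand is the joint density of data and random effects, the mode is found by an inner optimization, and the second derivative becomes the Hessian at that mode; the one-dimensional picture above is the same computation in miniature.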

  8. First order correction to quasiclassical scattering amplitude

    International Nuclear Information System (INIS)

    Kuz'menko, A.V.

    1978-01-01

    The first-order (with respect to ħ) correction to the quasiclassical scattering amplitude in nonrelativistic quantum mechanics is considered. This correction is represented by two-loop diagrams and includes double integrals. With the aid of the classical equations of motion, the sum of the contributions of the two-loop diagrams is transformed into an expression which includes one-dimensional integrals only. A specific property of the expression obtained is that the integrand does not possess any singularities at the focal points of the classical trajectory. The general formula takes a much simpler form in the case of one-dimensional systems.

  9. A New Class of Scaling Correction Methods

    International Nuclear Information System (INIS)

    Mei Li-Jie; Wu Xin; Liu Fu-Yao

    2012-01-01

    When conventional integrators like Runge-Kutta-type algorithms are used, numerical errors can make an orbit deviate from a hypersurface determined by many constraints, which leads to unreliable numerical solutions. Scaling correction methods are a powerful tool to avoid this. We focus on their applications, and also develop a family of new velocity multiple scaling correction methods where scale factors only act on the related components of the integrated momenta. They can preserve exactly some first integrals of motion in discrete or continuous dynamical systems, so that rapid growth of roundoff or truncation errors is suppressed significantly. (general)
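    A minimal single-integral version of the idea can be sketched for the planar Kepler problem: after each step of a deliberately crude integrator, the velocity is rescaled so the total energy returns to its conserved value. The paper's methods scale several momentum components against several integrals; this sketch, with its Euler base integrator, is illustrative only:

```python
import math

def energy(pos, vel, mu=1.0):
    """Total energy of the planar Kepler problem (kinetic minus mu/r)."""
    return 0.5 * (vel[0] ** 2 + vel[1] ** 2) - mu / math.hypot(*pos)

def euler_step(pos, vel, dt, mu=1.0):
    """One step of a crude explicit Euler integrator (drifts badly on its own)."""
    r3 = math.hypot(*pos) ** 3
    new_pos = [pos[i] + dt * vel[i] for i in range(2)]
    new_vel = [vel[i] - dt * mu * pos[i] / r3 for i in range(2)]
    return new_pos, new_vel

def velocity_scaling_step(pos, vel, dt, e0, mu=1.0):
    """Euler step followed by a velocity scaling correction.

    The scale factor s restores the kinetic energy to the value demanded by
    the conserved total energy e0: a single-integral version of the
    multiple-scaling idea in the paper.
    """
    pos, vel = euler_step(pos, vel, dt, mu)
    kinetic_target = e0 + mu / math.hypot(*pos)
    kinetic = 0.5 * (vel[0] ** 2 + vel[1] ** 2)
    if kinetic_target > 0.0 and kinetic > 0.0:
        s = math.sqrt(kinetic_target / kinetic)
        vel = [s * v for v in vel]
    return pos, vel

# Circular orbit: r = 1, v = 1, mu = 1, so the conserved energy is -0.5.
pos, vel = [1.0, 0.0], [0.0, 1.0]
e0 = energy(pos, vel)
for _ in range(1000):
    pos, vel = velocity_scaling_step(pos, vel, 0.01, e0)
drift = abs(energy(pos, vel) - e0)  # held at rounding level by the correction
```

    The correction pins the energy integral to machine precision even though the base integrator is only first order; errors in the uncorrected phase-space directions, of course, remain.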

  10. The relativistic Scott correction for atoms and molecules

    DEFF Research Database (Denmark)

    Solovej, Jan Philip; Sørensen, Thomas Østergaard; Spitzer, Wolfgang L.

    We prove the first correction to the leading Thomas-Fermi energy for the ground state energy of atoms and molecules in a model where the kinetic energy of the electrons is treated relativistically. The leading Thomas-Fermi energy, established in [25], as well as the correction given here, are of semi-classical nature. Our result on atoms and molecules is proved from a general semi-classical estimate for relativistic operators with potentials with Coulomb-like singularities. This semi-classical estimate is obtained using the coherent state calculus introduced in [36]. The paper contains...

  11. Maximal heart rate does not limit cardiovascular capacity in healthy humans

    DEFF Research Database (Denmark)

    Munch, G D W; Svendsen, J H; Damsgaard, R

    2014-01-01

    In humans, maximal aerobic power (VO2max) is associated with a plateau in cardiac output (Q), but the mechanisms regulating the interplay between maximal heart rate (HRmax) and stroke volume (SV) are unclear. To evaluate the effect of tachycardia and elevations in HRmax on cardiovascular function and capacity during maximal exercise in healthy humans, 12 young male cyclists performed incremental cycling and one-legged knee-extensor exercise (KEE) to exhaustion with and without right atrial pacing to increase HR. During control cycling, Q and leg blood flow increased up to 85% of maximal workload (WLmax)...

  12. Power Converters Maximize Outputs Of Solar Cell Strings

    Science.gov (United States)

    Frederick, Martin E.; Jermakian, Joel B.

    1993-01-01

    Microprocessor-controlled dc-to-dc power converters devised to maximize power transferred from solar photovoltaic strings to storage batteries and other electrical loads. Converters help in utilizing large solar photovoltaic arrays most effectively with respect to cost, size, and weight. The main points of the invention are: a single controller is used to control and optimize any number of "dumb" tracker units and strings independently; power out of the converters is maximized; and the controller in the system is a microprocessor.
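    The abstract gives no algorithmic details, but microprocessor maximum-power-point trackers of this kind commonly use a perturb-and-observe hill climb; the sketch below assumes that generic strategy rather than the patented NASA controller:

```python
def perturb_and_observe(power_at, v_start=10.0, step=0.1, iterations=200):
    """Generic perturb-and-observe maximum-power-point tracking sketch.

    `power_at(v)` returns string power at operating voltage v. The controller
    keeps stepping the converter's operating voltage in whichever direction
    last increased power, reversing when power drops: the classic way a
    microprocessor tracker homes in on the maximum power point.
    """
    v = v_start
    p = power_at(v)
    direction = 1.0
    for _ in range(iterations):
        v_next = v + direction * step
        p_next = power_at(v_next)
        if p_next < p:              # overshot the peak: reverse direction
            direction = -direction
        v, p = v_next, p_next
    return v, p

# Toy P-V curve with a single maximum power point near 17 V:
v_mpp, p_mpp = perturb_and_observe(lambda v: -(v - 17.0) ** 2 + 100.0)
```

    Once at the peak, the operating point oscillates within one perturbation step of the true maximum; smaller steps track more tightly at the cost of slower response to changing irradiance.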

  13. No Mikheyev-Smirnov-Wolfenstein Effect in Maximal Mixing

    OpenAIRE

    Harrison, P. F.; Perkins, D. H.; Scott, W. G.

    1996-01-01

    We investigate the possible influence of the MSW effect on the expectations for the solar neutrino experiments in the maximal mixing scenario suggested by the atmospheric neutrino data. A direct numerical calculation of matter induced effects in the Sun shows that the naive vacuum predictions are left completely undisturbed in the particular case of maximal mixing, so that the MSW effect turns out to be unobservable. We give a qualitative explanation of this result.
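    The naive vacuum prediction referred to can be reproduced from the standard two-flavour survival probability: averaging the rapidly oscillating phase term over energy leaves P = 1 - (1/2) sin^2(2θ), which equals 1/2 for maximal mixing. A one-line check:

```python
import math

def avg_survival_probability(theta):
    """Energy-averaged two-flavour vacuum survival probability.

    P(nu_e -> nu_e) = 1 - sin^2(2*theta) * <sin^2(phase)>; averaging the
    rapidly oscillating phase over energy gives <sin^2(phase)> = 1/2. For
    maximal mixing (theta = pi/4) this yields exactly 1/2, the naive vacuum
    prediction the paper finds undisturbed by matter effects.
    """
    return 1.0 - 0.5 * math.sin(2.0 * theta) ** 2

p_maximal = avg_survival_probability(math.pi / 4)  # 0.5 for maximal mixing
```

    The paper's numerical result is that the MSW matter term leaves this vacuum value intact precisely at maximal mixing, so the averaged suppression factor of 1/2 survives.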

  14. Single maximal versus combination punch kinematics.

    Science.gov (United States)

    Piorkowski, Barry A; Lees, Adrian; Barton, Gabor J

    2011-03-01

    The aim of this study was to determine the influence of punch type (Jab, Cross, Lead Hook and Reverse Hook) and punch modality (Single maximal, 'In-synch' and 'Out of synch' combination) on punch speed and delivery time. Ten competition-standard volunteers performed punches with markers placed on their anatomical landmarks for 3D motion capture with an eight-camera optoelectronic system. Speed and duration between key moments were computed. There were significant differences in contact speed between punch types (F(2.18, 84.87) = 105.76, p = 0.001) with Lead and Reverse Hooks developing greater speed than Jab and Cross. There were significant differences in contact speed between punch modalities (F(2.64, 102.87) = 23.52, p = 0.001) with the Single maximal (M ± SD: 9.26 ± 2.09 m/s) higher than 'Out of synch' (7.49 ± 2.32 m/s), 'In-synch' left (8.01 ± 2.35 m/s) or right lead (7.97 ± 2.53 m/s). Delivery times were significantly lower for Jab and Cross than Hook. Times were significantly lower 'In-synch' than a Single maximal or 'Out of synch' combination mode. It is concluded that a defender may have more evasion-time than previously reported. This research could be of use to performers and coaches when considering training preparations.

  15. Formation Control for the MAXIM Mission

    Science.gov (United States)

    Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.

    2004-01-01

    Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the groundwork for linear formation control designs.
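    For a circular two-body reference orbit, linearized relative motion reduces to the classical Clohessy-Wiltshire (Hill) solution, sketched below. The paper's contribution is the analogous linearization for an n-body gravitational field; this simple special case only illustrates the kind of model that underlies linear formation control:

```python
import math

def cw_state(t, n, x0, y0, z0, vx0, vy0, vz0):
    """Closed-form Clohessy-Wiltshire (Hill) relative-motion solution.

    Position of a deputy spacecraft relative to a chief on a circular
    two-body orbit with mean motion n (radial axis x, along-track axis y,
    cross-track axis z). Classical special case of the linearized relative
    dynamics discussed in the paper.
    """
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4.0 - 3.0 * c) * x0 + (s / n) * vx0 + (2.0 / n) * (1.0 - c) * vy0
    y = (6.0 * (s - n * t)) * x0 + y0 + (2.0 / n) * (c - 1.0) * vx0 \
        + ((4.0 * s - 3.0 * n * t) / n) * vy0
    z = c * z0 + (s / n) * vz0
    return x, y, z

# Drift-free initial condition (vy0 = -2*n*x0) gives a closed relative orbit:
n = 0.001                      # mean motion in rad/s (~105-minute orbit)
x0 = 100.0                     # 100 m radial offset
vy0 = -2.0 * n * x0
period = 2.0 * math.pi / n
x, y, z = cw_state(period, n, x0, 0.0, 0.0, 0.0, vy0, 0.0)
# after one orbital period the deputy returns to its initial offset
```

    The secular (linear-in-time) terms in y are exactly what formation control must fight; choosing initial conditions that cancel them, as above, minimizes the control effort needed to hold a baseline.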

  16. Anticipatory phase correction in sensorimotor synchronization.

    Science.gov (United States)

    Repp, Bruno H; Moseley, Gordon P

    2012-10-01

    Studies of phase correction in sensorimotor synchronization often introduce timing perturbations that are unpredictable with regard to direction, magnitude, and position in the stimulus sequence. If participants knew any or all of these parameters in advance, would they be able to anticipate perturbations and thus regain synchrony more quickly? In Experiment 1, we asked musically trained participants to tap in synchrony with short isochronous tone sequences containing a phase shift (PS) of -100, -40, 40, or 100 ms and provided advance information about its direction, position, or both (but not about its magnitude). The first two conditions had little effect, but in the third condition participants shifted their tap in anticipation of the PS, though only by about ±40 ms on average. The phase correction response to the residual PS was also enhanced. In Experiment 2, we provided complete advance information about PSs of various magnitudes either at the time of the immediately preceding tone ("late") or at the time of the tone one position back ("early") while also varying sequence tempo. Anticipatory phase correction was generally conservative and was impeded by fast tempo in the "late" condition. At fast tempi in both conditions, advancing a tap was more difficult than delaying a tap. The results indicate that temporal constraints on anticipatory phase correction resemble those on reactive phase correction. While the latter is usually automatic, this study shows that phase correction can also be controlled consciously for anticipatory purposes. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Electromagnetic fields with vanishing quantum corrections

    Czech Academy of Sciences Publication Activity Database

    Ortaggio, Marcello; Pravda, Vojtěch

    2018-01-01

    Roč. 779, 10 April (2018), s. 393-395 ISSN 0370-2693 R&D Projects: GA ČR GA13-10042S Institutional support: RVO:67985840 Keywords : nonlinear electrodynamics * quantum corrections Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 4.807, year: 2016 https://www.sciencedirect.com/science/article/pii/S0370269318300327?via%3Dihub

  19. Modern general topology

    CERN Document Server

    Nagata, J-I

    1985-01-01

    This classic work has been fundamentally revised to take account of recent developments in general topology. The first three chapters remain unchanged except for numerous minor corrections and additional exercises, but chapters IV-VII and the new chapter VIII cover the rapid changes that have occurred since 1968 when the first edition appeared. The reader will find many new topics in chapters IV-VIII, e.g. theory of Wallman-Shanin's compactification, realcompact space, various generalizations of paracompactness, generalized metric spaces, Dugundji type extension theory, linearly ordered topolo

  20. Sum-Rate Maximization of Coordinated Direct and Relay Systems

    DEFF Research Database (Denmark)

    Sun, Fan; Popovski, Petar; Thai, Chan

    2012-01-01

    Joint processing of multiple communication flows in wireless systems has given rise to a number of novel transmission techniques, notably the two-way relaying based on wireless network coding. Recently, a related set of techniques has emerged, termed coordinated direct and relay (CDR) transmissions, where the constellation of traffic flows is more general than the two-way. Regardless of the actual traffic flows, in a CDR scheme the relay has a central role in managing the interference and boosting the overall system performance. In this paper we investigate the novel transmission modes, based on amplify-and-forward, that arise when the relay is equipped with multiple antennas and can use beamforming. We focus on one representative traffic type, with one uplink and one downlink user, and consider the achievable sum-rate maximization relay beamforming. The beamforming criterion leads to a non...

  1. Conversational Implicature of Peanuts Comic Strip Based on Grice’s Maxim Theory

    Directory of Open Access Journals (Sweden)

    Muhartoyo Muhartoyo

    2013-04-01

    Full Text Available This article discusses the conversational implicature that occurs in Peanuts comic strips. The objectives of this study are to find out the implied meaning in the conversations between Charlie Brown and Lucy van Pelt and between Lucy van Pelt and Linus van Pelt, and to evaluate the existence of maxim flouting and maxim violating in those conversations in relation to the four maxims: quantity, quality, relation, and manner. Likewise, this study attempts to find out the reason for using conversational implicature in a comic strip. The writers use a qualitative method with library research, based on Grice's maxim theory, to analyze the conversational implicature. Based on the analysis, it can be concluded that all 14 comic strips examined generate conversational implicature, since all the characters breach the rules of the maxims. The result of this analysis shows that flouting the maxim of manner has the highest occurrence of conversational implicature, while the least frequent are flouting the maxim of relation and violating the maxim of quantity. Moreover, the writers conclude that to make communication successful, the speaker and the hearer should ideally cooperate in the conversation by speaking explicitly, so that the hearer can grasp the meaning, as the goal of communication is to deliver a message to the hearer.

  2. Modeling the violation of reward maximization and invariance in reinforcement schedules.

    Directory of Open Access Journals (Sweden)

    Giancarlo La Camera

    2008-08-01

    Full Text Available It is often assumed that animals and people adjust their behavior to maximize reward acquisition. In visually cued reinforcement schedules, monkeys make errors in trials that are not immediately rewarded, despite having to repeat error trials. Here we show that error rates are typically smaller in trials equally distant from reward but belonging to longer schedules (referred to as the "schedule length effect"). This violates the principles of reward maximization and invariance and cannot be predicted by the standard methods of Reinforcement Learning, such as the method of temporal differences. We develop a heuristic model that accounts for all of the properties of the behavior in the reinforcement schedule task but whose predictions are not different from those of the standard temporal difference model in choice tasks. In the modification of temporal difference learning introduced here, the effect of schedule length emerges spontaneously from the sensitivity to the immediately preceding trial. We also introduce a policy for general Markov Decision Processes, where the decision made at each node is conditioned on the motivation to perform an instrumental action, and show that the application of our model to the reinforcement schedule task and the choice task are special cases of this general theoretical framework. Within this framework, Reinforcement Learning can approach contextual learning with the mixture of empirical findings and principled assumptions that seem to coexist in the best descriptions of animal behavior. As examples, we discuss two phenomena observed in humans that often derive from the violation of the principle of invariance: "framing," wherein equivalent options are treated differently depending on the context in which they are presented, and the "sunk cost" effect, the greater tendency to continue an endeavor once an investment in money, effort, or time has been made. The schedule length effect might be a manifestation of these
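    The standard temporal-difference baseline that the proposed model modifies can be sketched as plain TD(0) value learning; the paper's motivation-conditioned modification is not implemented here:

```python
def td0_values(episodes, alpha=0.1, gamma=0.9):
    """Plain TD(0) state-value learning, the standard baseline the paper modifies.

    `episodes` is a list of trajectories, each a list of (state, reward,
    next_state) transitions; a terminal transition has next_state = None.
    """
    v = {}
    for episode in episodes:
        for state, reward, next_state in episode:
            v_next = v.get(next_state, 0.0) if next_state is not None else 0.0
            # TD error: reward plus discounted next value minus current estimate.
            delta = reward + gamma * v_next - v.get(state, 0.0)
            v[state] = v.get(state, 0.0) + alpha * delta
    return v

# Two-trial schedule: state "s1" leads to "s2", which is rewarded.
episodes = [[("s1", 0.0, "s2"), ("s2", 1.0, None)]] * 200
values = td0_values(episodes)
# values["s2"] approaches 1.0 and values["s1"] approaches gamma * 1.0
```

    Because values depend only on distance to reward, this baseline predicts identical error rates for equally distant trials regardless of schedule length, which is exactly the invariance that the monkeys' behavior violates.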

  3. Comparison of changes in the mobility of the pelvic floor muscle during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction.

    Science.gov (United States)

    Jung, Halim; Jung, Sangwoo; Joo, Sunghee; Song, Changho

    2016-01-01

    [Purpose] The purpose of this study was to compare changes in the mobility of the pelvic floor muscle during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction. [Subjects] Thirty healthy adults participated in this study (15 men and 15 women). [Methods] All participants performed a bridge exercise and abdominal curl-up during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction. Pelvic floor mobility was evaluated as the distance from the bladder base using ultrasound. [Results] According to exercise method, bridge exercise and abdominal curl-ups led to significantly different pelvic floor mobility. The pelvic floor muscle was elevated during the abdominal drawing-in maneuver and descended during maximal expiration. Finally, pelvic floor muscle mobility was greater during abdominal curl-up than during the bridge exercise. [Conclusion] According to these results, the abdominal drawing-in maneuver induced pelvic floor muscle contraction, and pelvic floor muscle contraction was greater during the abdominal curl-up than during the bridge exercise.

  4. Power corrections to the HTL effective Lagrangian of QED

    Science.gov (United States)

    Carignano, Stefano; Manuel, Cristina; Soto, Joan

    2018-05-01

    We present compact expressions for the power corrections to the hard thermal loop (HTL) Lagrangian of QED in d space dimensions. These are corrections of order (L/T)^2, valid for momenta L ≪ T, where T is the temperature. In the limit d → 3 we achieve a consistent regularization of both infrared and ultraviolet divergences, which respects the gauge symmetry of the theory. Dimensional regularization also allows us to witness subtle cancellations of infrared divergences. We also discuss how to generalize our results in the presence of a chemical potential, so as to obtain the power corrections to the hard dense loop (HDL) Lagrangian.

  5. Descriptive Analysis on Flouting and Hedging of Conversational Maxims in the “Post Grad” Movie

    Directory of Open Access Journals (Sweden)

    Nastiti Rokhmania

    2012-11-01

    Full Text Available This research is focused on analyzing the flouting and hedging of conversational maxims in utterances used by the main characters in the "Post Grad" movie. Conversational maxims are the rules of the cooperative principle categorized into four categories: the maxim of quality, the maxim of quantity, the maxim of relevance, and the maxim of manner. If these maxims are observed in conversations, the conversations can go smoothly. However, people often break the maxims overtly (flouting maxims) and sometimes break the maxims covertly (hedging maxims) when they make a conversation. This research is conducted using a descriptive qualitative method based on the theory known as Grice's maxims. The data are in the form of utterances used by the characters in the "Post Grad" movie. The data analysis reveals findings covering the formulated research questions. The maxims are flouted when the speaker breaks some conversational maxims by using utterances in the form of rhetorical strategies, such as tautology, metaphor, hyperbole, irony, and rhetorical questions. On the other hand, conversational maxims are also hedged when the information is not totally accurate or is unclearly stated but seems informative, well-founded, and relevant.

  6. Maximal and anaerobic threshold cardiorespiratory responses during deepwater running

    Directory of Open Access Journals (Sweden)

    Ana Carolina Kanitz

    2014-12-01

    Full Text Available DOI: http://dx.doi.org/10.5007/1980-0037.2015v17n1p41 Aquatic exercises provide numerous benefits to the health of their practitioners. To secure these benefits, it is essential that prescriptions match the needs of each individual, and it is therefore important to study the cardiorespiratory responses to different activities in this environment. Thus, the aim of this study was to compare the cardiorespiratory responses at the anaerobic threshold (AT) between maximal deep-water running (DWR) and maximal treadmill running (TMR). In addition, two methods of determining the AT (the heart rate deflection point [HRDP] and the ventilatory method [VM]) were compared in the two evaluated protocols. Twelve young women performed the two maximal protocols. Two-factor ANOVA for repeated measures with a post-hoc Bonferroni test was used (α < 0.05). Significantly higher values of maximal heart rate (TMR: 33.7 ± 3.9; DWR: 22.5 ± 4.1 ml.kg−1.min−1) and maximal oxygen uptake (TMR: 33.7 ± 3.9; DWR: 22.5 ± 4.1 ml.kg−1.min−1) in TMR compared to DWR were found. Furthermore, no significant differences were found between the methods for determining the AT (TMR: VM: 28.1 ± 5.3, HRDP: 26.6 ± 5.5 ml.kg−1.min−1; DWR: VM: 18.7 ± 4.8, HRDP: 17.8 ± 4.8 ml.kg−1.min−1). The results indicate that a specific maximal test for the trained modality should be conducted, and that the HRDP can be used as a simple and practical method of determining the AT, on the basis of which training intensity can be determined.

  7. Corrective Action Decision Document/Closure Report for Corrective Action Unit 105: Area 2 Yucca Flat Atmospheric Test Sites, Nevada National Security Site, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, Patrick

    2013-09-01

    This Corrective Action Decision Document/Closure Report presents information supporting the closure of Corrective Action Unit (CAU) 105: Area 2 Yucca Flat Atmospheric Test Sites, Nevada National Security Site, Nevada. CAU 105 comprises the following five corrective action sites (CASs):
    - 02-23-04, Atmospheric Test Site - Whitney (closure in place)
    - 02-23-05, Atmospheric Test Site T-2A (closure in place)
    - 02-23-06, Atmospheric Test Site T-2B (clean closure)
    - 02-23-08, Atmospheric Test Site T-2 (closure in place)
    - 02-23-09, Atmospheric Test Site - Turk (closure in place)
    The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 105 based on the implementation of the corrective actions. Corrective action investigation (CAI) activities were performed from October 22, 2012, through May 23, 2013, as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 105: Area 2 Yucca Flat Atmospheric Test Sites, and in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices.

  8. Maximally flat radiation patterns of a circular aperture

    Science.gov (United States)

    Minkovich, B. M.; Mints, M. Ia.

    1989-08-01

    The paper presents an explicit solution to the problems of maximizing the area utilization coefficient and of obtaining the best approximation (on the average) of a sectorial Pi-shaped radiation pattern of an antenna with a circular aperture when Butterworth conditions are imposed on the approximating pattern with the aim of flattening it. Constraints on the choice of admissible minimum and maximum antenna dimensions are determined which make possible the synthesis of maximally flat patterns with small sidelobes.
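    The "maximally flat" (Butterworth) condition imposed on the approximating pattern can be illustrated with the standard Butterworth magnitude response, whose first 2n−1 derivatives vanish at the origin. A minimal numeric sketch, with parameter values chosen for illustration only (not taken from the paper):

```python
import numpy as np

# Butterworth-style maximally flat magnitude response:
# |F(u)| = 1 / sqrt(1 + (u/u0)^(2n)); the first 2n-1 derivatives
# vanish at u = 0, which is what "maximally flat" means here.
def butterworth_mag(u, n=4, u0=1.0):
    return 1.0 / np.sqrt(1.0 + (u / u0) ** (2 * n))

# near the origin the response deviates from 1 only at order (u/u0)^(2n)
flat_region = butterworth_mag(np.array([0.0, 0.05, 0.1]))
edge = butterworth_mag(1.0)  # at u = u0 the response has dropped to 1/sqrt(2)
```

Increasing n widens the flat region while steepening the roll-off, which is the trade-off behind the aperture-size constraints discussed in the abstract.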

  9. The Large Margin Mechanism for Differentially Private Maximization

    OpenAIRE

    Chaudhuri, Kamalika; Hsu, Daniel; Song, Shuang

    2014-01-01

    A basic problem in the design of privacy-preserving algorithms is the private maximization problem: the goal is to pick an item from a universe that (approximately) maximizes a data-dependent function, all under the constraint of differential privacy. This problem has been used as a sub-routine in many privacy-preserving algorithms for statistics and machine-learning. Previous algorithms for this problem are either range-dependent---i.e., their utility diminishes with the size of the universe...
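    For background, the classic range-dependent approach to private maximization is the exponential mechanism of McSherry and Talwar, whose utility guarantee degrades with the size of the universe. A minimal sketch (function name and parameters are mine, not from the paper):

```python
import numpy as np

def exponential_mechanism(scores, epsilon, sensitivity=1.0, rng=None):
    """Pick an index with probability proportional to exp(eps * score / (2 * sensitivity)).

    This is the standard exponential mechanism; its utility loss grows with
    log(universe size), the "range dependence" the paper aims to avoid.
    """
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()            # shift for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

# with a large epsilon the clear maximizer (index 1) is selected almost surely
choice = exponential_mechanism([1.0, 5.0, 2.0], epsilon=8.0,
                               rng=np.random.default_rng(0))
```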

  10. Emergence of spacetime dynamics in entropy corrected and braneworld models

    International Nuclear Information System (INIS)

    Sheykhi, A.; Dehghani, M.H.; Hosseini, S.E.

    2013-01-01

    A very interesting new proposal on the origin of the cosmic expansion was recently suggested by Padmanabhan [arXiv:1206.4916]. He argued that the difference between the surface degrees of freedom and the bulk degrees of freedom in a region of space drives the accelerated expansion of the universe, as well as the standard Friedmann equation, through the relation ΔV = Δt(N_sur − N_bulk). In this paper, we first present the general expression for the number of degrees of freedom on the holographic surface, N_sur, using the general entropy-corrected formula S = A/(4L_p^2) + s(A). Then, as two examples, by applying Padmanabhan's idea we extract the corresponding Friedmann equations in the presence of power-law and logarithmic correction terms in the entropy. We also extend the study to RS II and DGP braneworld models and successfully derive the correct form of the Friedmann equations in these theories. Our study further supports the viability of Padmanabhan's proposal.

  11. Enumerating all maximal frequent subtrees in collections of phylogenetic trees

    Science.gov (United States)

    2014-01-01

    Background: A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices, and identifying horizontal gene transfer events. Results: We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Conclusions: Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees. PMID:25061474
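    The "frequent, then maximal" filtering idea can be illustrated on a much simpler object than the paper's subtrees: clades (leaf sets below internal edges) of binary trees given as nested tuples. This toy sketch is my own simplification, not the mfst-miner algorithm:

```python
from collections import Counter

def clades(tree):
    """Return (leaf set, list of non-trivial clades) of a nested-tuple binary tree."""
    if isinstance(tree, str):                  # a leaf
        return frozenset([tree]), []
    left, right = tree
    lset, lclades = clades(left)
    rset, rclades = clades(right)
    here = lset | rset
    return here, lclades + rclades + [here]

def maximal_frequent_clades(trees, minsup):
    """Clades present in at least `minsup` trees, keeping only set-maximal ones."""
    counts = Counter()
    for t in trees:
        full, cl = clades(t)
        counts.update(set(c for c in cl if c != full))   # drop trivial root clade
    frequent = [c for c, n in counts.items() if n >= minsup]
    # a frequent clade is maximal if no other frequent clade strictly contains it
    return [c for c in frequent if not any(c < d for d in frequent)]

trees = [(("a", "b"), ("c", "d")),
         (("a", "b"), ("d", "c")),
         ((("a", "b"), "c"), "d")]
```

Raising `minsup` shrinks the frequent set, and the maximality filter then removes anything subsumed by a larger surviving clade.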

  12. NLO corrections to production of heavy particles at hadron colliders

    International Nuclear Information System (INIS)

    Pagani, Davide

    2013-01-01

    In this thesis we study specific aspects of the production of heavy particles at hadron colliders, with emphasis on precision predictions including next-to-leading order (NLO) corrections from the strong and electroweak interactions. In the first part of the thesis we consider the top quark charge asymmetry. In particular, we discuss in detail the calculation of the electroweak contributions from the asymmetric part of the top quark pair production cross section at O(α_s^2 α) and O(α^2) and their numerical impact on predictions for the asymmetry measurements at the Tevatron. These electroweak contributions provide a non-negligible addition to the QCD-induced asymmetry with the same overall sign and, in general, enlarge the Standard Model predictions by a factor of around 1.2, diminishing the deviations from experimental measurements. In the second part of the thesis we consider the production of squarks, the supersymmetric partners of quarks, at the Large Hadron Collider (LHC). We discuss the calculation of the contribution of factorizable NLO QCD corrections to the production of squark-squark pairs, combined at the fully differential level with squark decays. Combining the production process with two different configurations for the squark decays, our calculation is used to provide precise phenomenological predictions for two different experimental signatures that are important for the search for supersymmetry at the LHC. For one signature, we focus on the impact of our results on important physical differential distributions and on cut-and-count searches performed by the ATLAS and CMS collaborations. For the other signature, we analyze the effects of NLO QCD corrections and of the combination of production and decays on distributions relevant for parameter determination. In general, factorizable NLO QCD corrections have to be taken into account to obtain precise phenomenological predictions for the analyzed distributions and inclusive quantities.

  13. Pace's Maxims for Homegrown Library Projects. Coming Full Circle

    Science.gov (United States)

    Pace, Andrew K.

    2005-01-01

    This article discusses six maxims by which to run library automation. The following maxims are discussed: (1) Solve only known problems; (2) Avoid changing data to fix display problems; (3) Aut viam inveniam aut faciam; (4) If you cannot make it yourself, buy something; (5) Kill the alligator closest to the boat; and (6) Just because yours is…

  14. Compton scatter and randoms corrections for origin ensembles 3D PET reconstructions

    Energy Technology Data Exchange (ETDEWEB)

    Sitek, Arkadiusz [Harvard Medical School, Boston, MA (United States). Dept. of Radiology; Brigham and Women's Hospital, Boston, MA (United States); Kadrmas, Dan J. [Utah Univ., Salt Lake City, UT (United States). Utah Center for Advanced Imaging Research (UCAIR)

    2011-07-01

    In this work we develop a novel approach to correction for scatter and randoms in the reconstruction of data acquired by 3D positron emission tomography (PET), applicable to tomographic reconstruction done by the origin ensemble (OE) approach. Statistical image reconstruction using OE is based on calculating the expectations of the numbers of emitted events per voxel over the complete-data space. Since the OE estimator is fundamentally different from regular statistical estimators, such as those based on maximum likelihood, the standard implementations of scatter and randoms corrections cannot be used. Based on prompt, scatter, and randoms rates, each detected event is graded in terms of its probability of being a true event. These grades are utilized by the Markov chain Monte Carlo (MCMC) algorithm used in the OE approach for calculating the expectation, over the complete-data space, of the number of emitted events per voxel (the OE estimator). We show that the results obtained with OE are almost identical to results obtained with the maximum likelihood-expectation maximization (ML-EM) reconstruction algorithm for experimental phantom data acquired using a Siemens Biograph mCT 3D PET/CT scanner. The developed correction removes artifacts due to scatter and randoms in the investigated 3D PET datasets. (orig.)
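    The grading step described above amounts to a per-event probability estimate from the measured rates. The sketch below uses illustrative numbers and variable names of my own, not the authors' implementation:

```python
import numpy as np

# per line-of-response (LOR): measured prompt rate and estimated
# scatter and randoms rates (illustrative numbers only)
prompts = np.array([100.0, 40.0, 25.0])
scatter = np.array([20.0, 15.0, 10.0])
randoms = np.array([10.0, 5.0, 14.0])

# probability that a detected event on each LOR is a true coincidence:
# estimated trues / prompts, clipped to [0, 1] in case estimates overshoot
p_true = np.clip((prompts - scatter - randoms) / prompts, 0.0, 1.0)
```

Events on heavily contaminated LORs (like the third one here) receive low grades and thus contribute little weight in the MCMC expectation.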

  15. Gravitational collapse of charged dust shell and maximal slicing condition

    International Nuclear Information System (INIS)

    Maeda, Keiichi

    1980-01-01

    The maximal slicing condition is qualitatively a good time-coordinate condition when following gravitational collapse in a numerical calculation. The analytic solution for gravitational collapse under the maximal slicing condition is given for the case of a spherical charged dust shell, and the behavior of time slices under this coordinate condition is investigated. It is concluded that under the maximal slicing condition the gravitational collapse can be followed until the radius of the shell decreases to about 0.7 × the radius of the event horizon. (author)

  16. On the Physical Significance of Infra-red Corrections to Inflationary Observables

    CERN Document Server

    Bartolo, N; Pietroni, M; Riotto, Antonio; Seery, D

    2008-01-01

    Inflationary observables, like the power spectrum, computed at one- and higher-order loop level seem to be plagued by large infra-red corrections. In this short note, we point out that these large infra-red corrections appear only in quantities which are not directly observable. This is in agreement with general expectations concerning infra-red effects.

  17. Generalized IIB supergravity from exceptional field theory

    Energy Technology Data Exchange (ETDEWEB)

    Baguet, Arnaud; Magro, Marc; Samtleben, Henning [Laboratoire de Physique, Université Claude Bernard Lyon 1, Ens de Lyon, CNRS,F-69342 Lyon (France)

    2017-03-20

    The background underlying the η-deformed AdS_5 × S^5 sigma-model is known to satisfy a generalization of the IIB supergravity equations. Their solutions are related by T-duality to solutions of type IIA supergravity with non-isometric linear dilaton. We show how the generalized IIB supergravity equations can be naturally obtained from exceptional field theory. Within this manifestly duality-covariant formulation of maximal supergravity, the generalized IIB supergravity equations emerge upon imposing on the fields a simple Scherk-Schwarz ansatz which respects the section constraint.

  18. Deconstructing facts and frames in energy research: Maxims for evaluating contentious problems

    International Nuclear Information System (INIS)

    Sovacool, Benjamin K.; Brown, Marilyn A.

    2015-01-01

    In this article, we argue that assumptions and values can play a combative, corrosive role in the generation of objective energy analysis. We then propose six maxims for energy analysts and researchers. Our maxim of information asks readers to keep up to date on trends in energy resources and technology. Our maxim of inclusivity asks readers to involve citizens and other public actors more in energy decisions. Our maxim of symmetry asks readers to keep their analysis of energy technologies centered always on both technology and society. Our maxim of reflexivity asks readers to be aware of their own assumptions. Our maxim of prudence asks readers to make energy decisions that are ethical or at least informed. Our maxim of agnosticism asks readers to look beyond a given energy technology to the services it provides and recognize that many systems can provide a desired service. We conclude that decisions in energy are justified by, if not predicated on, beliefs—beliefs which may or may not be supported by objective data, constantly blurring the line between fact, fiction, and frames. - Highlights: • Assumptions and values can play a combative, corrosive role in the generation of objective energy analysis. • Decisions in energy are justified by, if not predicated on, beliefs. • We propose six maxims for energy analysts and researchers.

  19. The relativistic Scott correction for atoms and molecules

    DEFF Research Database (Denmark)

    Solovej, Jan Philip; Sørensen, Thomas Østergaard; Spitzer, Wolfgang L.

    2010-01-01

    We prove the first correction to the leading Thomas-Fermi energy for the ground state energy of atoms and molecules in a model where the kinetic energy of the electrons is treated relativistically. The leading Thomas-Fermi energy, established in [25], as well as the correction given here, are of a semiclassical nature. Our result on atoms and molecules is proved from a general semiclassical estimate for relativistic operators with potentials with Coulomb-like singularities. This semiclassical estimate is obtained using the coherent state calculus introduced in [36]. The paper contains a unified treatment...

  20. Maximal isometric strength of the cervical musculature in 100 healthy volunteers

    DEFF Research Database (Denmark)

    Jordan, A; Mehlsen, J; Bülow, P M

    1999-01-01

    A descriptive study involving maximal isometric strength measurements of the cervical musculature.

  1. Generalized Einstein-Aether theories and the Solar System

    International Nuclear Information System (INIS)

    Bonvin, Camille; Durrer, Ruth; Ferreira, Pedro G.; Zlosnik, Tom G.; Starkman, Glenn

    2008-01-01

    It has been shown that generalized Einstein-Aether theories may lead to significant modifications to the nonrelativistic limit of the Einstein equations. In this paper we study the effect of a general class of such theories on the Solar System. We consider corrections to the gravitational potential in negative and positive powers of distance from the source. Using measurements of the perihelion shift of Mercury and the time delay of radar signals to Cassini, we place constraints on these corrections. We find that a subclass of generalized Einstein-Aether theories is compatible with these constraints.

  2. Boundedness of Stein’s spherical maximal function in variable Lebesgue spaces and application to the wave equation

    Czech Academy of Sciences Publication Activity Database

    Fiorenza, A.; Gogatishvili, Amiran; Kopaliani, T.

    2013-01-01

    Roč. 100, č. 5 (2013), s. 465-472 ISSN 0003-889X R&D Projects: GA ČR GA201/08/0383; GA ČR GA13-14743S Institutional support: RVO:67985840 Keywords: spherical maximal function * variable Lebesgue spaces * boundedness result Subject RIV: BA - General Mathematics Impact factor: 0.479, year: 2013 http://link.springer.com/article/10.1007/s00013-013-0509-0

  3. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... out more. Corrective Jaw Surgery: Orthognathic surgery is performed to correct the misalignment of jaws ...

  4. Nonadditive entropy maximization is inconsistent with Bayesian updating

    Science.gov (United States)

    Pressé, Steve

    2014-11-01

    The maximum entropy method—used to infer probabilistic models from data—is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. By contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.
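    For contrast with the nonadditive case discussed above, standard (Shannon) maximum entropy inference under a mean constraint yields the Gibbs form p_i ∝ exp(λ x_i), with λ fixed by the constraint. A minimal numeric sketch (the function name and the "loaded die" data are my own illustration):

```python
import numpy as np

def maxent_mean(x, mean, lo=-50.0, hi=50.0, iters=200):
    """Shannon max-entropy distribution on support x with a prescribed mean.

    Solves for the Lagrange multiplier lam in p_i ∝ exp(lam * x_i)
    by bisection, using the fact that the mean is increasing in lam.
    """
    x = np.asarray(x, dtype=float)

    def mean_of(lam):
        w = np.exp(lam * (x - x.max()))   # shift exponent for stability
        p = w / w.sum()
        return p @ x, p

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        m, p = mean_of(mid)
        if m < mean:
            lo = mid
        else:
            hi = mid
    return p

# classic "loaded die" example: die faces with an observed mean of 4.5
p = maxent_mean([1, 2, 3, 4, 5, 6], mean=4.5)
```

When the prescribed mean equals the uniform mean (3.5 here), the solver recovers λ ≈ 0, i.e., the uniform distribution, as expected.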

  5. Les Ambivalences du Silence: Les "Maximes" de la Rochefoucauld Par Quatre Chemins

    Science.gov (United States)

    Turcat, Eric

    2012-01-01

    Maxims are famous for their moral pronouncements, yet La Rochefoucauld's "Maximes" (1678) have become infamous for offering little moral guidance. Morally ambivalent at best, the "Maximes" are also less known for their other forms of ambivalence, whether rhetorical, psychological, anthropological or linguistic. Such are…

  6. 76 FR 50714 - Federal Acquisition Regulation; Documenting Contractor Performance; Correction

    Science.gov (United States)

    2011-08-16

    ... Regulation; Documenting Contractor Performance; Correction AGENCY: Department of Defense (DoD), General... August 9, 2011, regarding the proposed rule for Documenting Contractor Performance. DATES: The comment...

  7. FLOUTS OF THE COOPERATIVE PRINCIPLE MAXIMS IN SBY’S PRESIDENTIAL INTERVIEWS

    Directory of Open Access Journals (Sweden)

    Fahrus Zaman Fadhly

    2012-12-01

    Full Text Available This paper analyzed the presidential interviews of the President of the Republic of Indonesia, Susilo Bambang Yudoyono (SBY), based on Grice’s theory of the Cooperative Principle (CP). This study employed a qualitative research design, and the data were three transcripts of interview discourse between SBY and eight Indonesian journalists obtained through the presidential official website: http://www.presidentsby.info. The research investigated the ways SBY flouted the CP maxims in his presidential interviews and what the functions of those flouts were. The research revealed that SBY flouted all the CP maxims and that the maxim of Quantity was the most frequently flouted. There were four ways SBY flouted the CP maxims: hedging, indirectness, open answers, and detailed elements. The functions of the flouts were face-saving acts (FSA), self-protection, awareness, politeness, interestingness, control of information, elaboration, and ignorance. This research also revealed that Grice’s CP maxims are not universal.

  8. Polynomial algorithms for the Maximal Pairing Problem: efficient phylogenetic targeting on arbitrary trees

    Directory of Open Access Journals (Sweden)

    Stadler Peter F

    2010-06-01

    Full Text Available Abstract. Background: The Maximal Pairing Problem (MPP) is the prototype of a class of combinatorial optimization problems that are of considerable interest in bioinformatics: given an arbitrary phylogenetic tree T and weights ω_xy for the paths between any two pairs of leaves (x, y), what is the collection of edge-disjoint paths between pairs of leaves that maximizes the total weight? Special cases of the MPP for binary trees and equal weights have been described previously; algorithms to solve the general MPP are still missing, however. Results: We describe a relatively simple dynamic programming algorithm for the special case of binary trees. We then show that the general case of multifurcating trees can be treated by interleaving solutions to certain auxiliary Maximum Weighted Matching problems with an extension of this dynamic programming approach, resulting in an overall polynomial-time solution of complexity O(n^4 log n) with respect to the number n of leaves. The source code of a C implementation can be obtained under the GNU Public License from http://www.bioinf.uni-leipzig.de/Software/Targeting. For binary trees, we furthermore discuss several constrained variants of the MPP as well as a partition function approach to the probabilistic version of the MPP. Conclusions: The algorithms introduced here make it possible to solve the MPP also for large trees with high-degree vertices. This has practical relevance in the field of comparative phylogenetics and, for example, in the context of phylogenetic targeting, i.e., data collection with resource limitations.
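    The binary-tree, equal-weights special case mentioned in the abstract admits a compact dynamic program. The sketch below is my own formulation (not the authors' C implementation): it maximizes the number of edge-disjoint leaf-pair paths in a binary tree given as nested tuples, tracking at each node whether an unmatched path end exits upward.

```python
def pairing_dp(tree):
    """Return (closed, open) counts for a nested-tuple binary tree.

    closed = max edge-disjoint leaf-pair paths fully inside the subtree;
    open   = max paths if, additionally, one unmatched path end exits upward.
    """
    if isinstance(tree, str):            # a leaf can serve as a free path end
        return 0, 0
    left, right = tree
    lc, lo = pairing_dp(left)
    rc, ro = pairing_dp(right)
    closed = max(lc + rc,                # keep the two subtrees independent
                 lo + ro + 1)            # join the two open ends into one path
    open_ = max(lo + rc, lc + ro)        # pass one open end further upward
    return closed, open_

def max_pairing(tree):
    return pairing_dp(tree)[0]
```

For a balanced four-leaf tree the optimum pairs both cherries, giving two paths; with an odd number of leaves one leaf necessarily stays unmatched.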

  9. Evaluation of anti-hyperglycemic effect of Actinidia kolomikta (Maxim. et Rur.) Maxim. root extract.

    Science.gov (United States)

    Hu, Xuansheng; Cheng, Delin; Wang, Linbo; Li, Shuhong; Wang, Yuepeng; Li, Kejuan; Yang, Yingnan; Zhang, Zhenya

    2015-05-01

    This study aimed to evaluate the anti-hyperglycemic effect of an ethanol extract from Actinidia kolomikta (Maxim. et Rur.) Maxim. root (AKE). An in vitro evaluation was performed using rat intestinal α-glucosidases (maltase and sucrase), the key enzymes linked with type 2 diabetes, and an in vivo evaluation was performed by loading maltose, sucrose, or glucose to normal rats. AKE showed concentration-dependent inhibition of rat intestinal maltase and sucrase, with IC50 values of 1.83 and 1.03 mg/mL, respectively. In normal rats loaded with maltose, sucrose, or glucose, administration of AKE significantly reduced postprandial hyperglycemia, similarly to acarbose, which is used as an anti-diabetic drug. High contents of total phenolics (80.49 ± 0.05 mg GAE/g extract) and total flavonoids (430.69 ± 0.91 mg RE/g extract) were detected in AKE. In conclusion, AKE possessed anti-hyperglycemic effects, and the possible mechanisms are associated with its inhibition of α-glucosidase and with improvement of insulin release and/or insulin sensitivity. The anti-hyperglycemic activity of AKE may be attributable to its high contents of phenolic and flavonoid compounds.
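    Concentration-dependent inhibition characterized by an IC50 is conventionally summarized by a sigmoidal dose-response curve. A minimal sketch using a one-site model with a Hill slope of 1 (an assumption on my part; only the IC50 values come from the abstract):

```python
def fraction_inhibited(conc, ic50, hill=1.0):
    """One-site inhibition model: 0 at conc = 0, 0.5 at conc = ic50, -> 1 as conc grows."""
    return conc ** hill / (conc ** hill + ic50 ** hill)

# IC50 values reported for AKE against the rat intestinal enzymes (mg/mL)
maltase_ic50, sucrase_ic50 = 1.83, 1.03

half_maltase = fraction_inhibited(maltase_ic50, maltase_ic50)  # 0.5 by definition
strong_sucrase = fraction_inhibited(10.0, sucrase_ic50)        # well above IC50
```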

  10. Maximally efficient protocols for direct secure quantum communication

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, Anindita [Department of Physics and Materials Science Engineering, Jaypee Institute of Information Technology, A-10, Sector-62, Noida, UP-201307 (India); Department of Physics and Center for Astroparticle Physics and Space Science, Bose Institute, Block EN, Sector V, Kolkata 700091 (India); Pathak, Anirban, E-mail: anirban.pathak@jiit.ac.in [Department of Physics and Materials Science Engineering, Jaypee Institute of Information Technology, A-10, Sector-62, Noida, UP-201307 (India); RCPTM, Joint Laboratory of Optics of Palacky University and Institute of Physics of Academy of Science of the Czech Republic, Faculty of Science, Palacky University, 17. Listopadu 12, 77146 Olomouc (Czech Republic)

    2012-10-01

    Two protocols for deterministic secure quantum communication (DSQC) using GHZ-like states have been proposed. It is shown that one of these protocols is maximally efficient and that it can be modified into an equivalent protocol of quantum secure direct communication (QSDC). The security and efficiency of the proposed protocols are analyzed and compared. It is shown that dense coding is sufficient but not essential for DSQC and QSDC protocols. Maximally efficient QSDC protocols are shown to be more efficient than their DSQC counterparts. This additional efficiency arises at the cost of message transmission rate. -- Highlights: ► Two protocols for deterministic secure quantum communication (DSQC) are proposed. ► One of the above protocols is maximally efficient. ► It is modified into an equivalent protocol of quantum secure direct communication (QSDC). ► It is shown that dense coding is sufficient but not essential for DSQC and QSDC protocols. ► Efficient QSDC protocols are always more efficient than their DSQC counterparts.

  11. Methods of orbit correction system optimization

    International Nuclear Information System (INIS)

    Chao, Yu-Chiu.

    1997-01-01

    Extracting optimal performance out of an orbit correction system is an important component of accelerator design and evaluation. The question of effectiveness vs. economy, however, is not always easily tractable. This is especially true in cases where betatron function magnitude and phase advance do not have smooth or periodic dependencies on the physical distance. In this report a program is presented using linear algebraic techniques to address this problem. A systematic recipe is given, supported with quantitative criteria, for arriving at an orbit correction system design with the optimal balance between performance and economy. The orbit referred to in this context can be generalized to include angle, path length, orbit effects on the optical transfer matrix, and simultaneous effects on multiple pass orbits
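    The linear-algebraic core of orbit correction is a least-squares inversion of the orbit response matrix: find corrector kicks that best cancel the measured orbit. A minimal sketch with made-up dimensions and numbers (a real design would also weigh corrector strength limits and model errors, as the report discusses):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical orbit response matrix: BPM reading change per unit corrector kick
n_bpms, n_correctors = 12, 4
R = rng.normal(size=(n_bpms, n_correctors))

# a distorted orbit produced by some unknown error kicks
true_kicks = np.array([0.5, -0.2, 0.1, 0.3])
orbit = R @ true_kicks

# least-squares corrector settings that best cancel the measured orbit
kicks = -np.linalg.lstsq(R, orbit, rcond=None)[0]
residual = orbit + R @ kicks
```

With more monitors than correctors the system is overdetermined and the residual orbit is minimized rather than zeroed; here the synthetic orbit lies exactly in the range of R, so the correction is exact.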

  12. CORRECTIVE ACTION IN CAR MANUFACTURING

    Directory of Open Access Journals (Sweden)

    H. Rohne

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: In this paper the important issues involved in successfully implementing corrective action systems in quality management are discussed. The work is based on experience in implementing and operating such a system in an automotive manufacturing enterprise in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem-solving methodology is essential to resolve the quality-related problems identified by the system. In the following paragraphs the general corrective action process is discussed and the elements of a corrective action system are identified, followed by a more detailed discussion of each element. Finally, specific results from the application are discussed.

    AFRIKAANSE OPSOMMING (translated): Important considerations in the successful implementation of corrective action systems in quality management are discussed in this article. The work is based on experience in implementing and operating such a system at a car manufacturer in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem-solving methodology is needed to address the quality-related problems that the system identifies. In the following paragraphs the general corrective action process is discussed and the elements of the corrective action system are identified. Each element is then discussed in more detail. Finally, specific results of the application are briefly treated.

  13. Generalized Entanglement Entropies of Quantum Designs

    Science.gov (United States)

    Liu, Zi-Wen; Lloyd, Seth; Zhu, Elton Yechao; Zhu, Huangjun

    2018-03-01

    The entanglement properties of random quantum states or dynamics are important to the study of a broad spectrum of disciplines of physics, ranging from quantum information to high energy and many-body physics. This Letter investigates the interplay between the degrees of entanglement and randomness in pure states and unitary channels. We reveal strong connections between designs (distributions of states or unitaries that match certain moments of the uniform Haar measure) and generalized entropies (entropic functions that depend on certain powers of the density operator), by showing that Rényi entanglement entropies averaged over designs of the same order are almost maximal. This strengthens the celebrated Page's theorem. Moreover, we find that designs of an order that is logarithmic in the dimension maximize all Rényi entanglement entropies and so are completely random in terms of the entanglement spectrum. Our results relate the behaviors of Rényi entanglement entropies to the complexity of scrambling and quantum chaos in terms of the degree of randomness, and suggest a generalization of the fast scrambling conjecture.
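    The "almost maximal" behavior is easy to see numerically for a single Haar-random state: on a small-times-large bipartition, the Rényi-2 entanglement entropy sits just below its log(d_A) ceiling. A minimal sketch (dimensions are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
dA, dB = 4, 64                              # small subsystem A, large subsystem B

# a Haar-random pure state on C^dA ⊗ C^dB (complex Gaussian, then normalized)
psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
psi /= np.linalg.norm(psi)

rho_A = psi @ psi.conj().T                  # reduced density matrix of A
purity = np.trace(rho_A @ rho_A).real
S2 = -np.log(purity)                        # Rényi-2 entanglement entropy
S2_max = np.log(dA)                         # maximal possible value, log(dA)
```

For Haar-random states the average purity is (dA + dB)/(dA·dB + 1), so S2 approaches S2_max as dB grows, in line with the near-maximality the Letter proves for designs.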

  14. Mutually Unbiased Maximally Entangled Bases for the Bipartite System C^d ⊗ C^{dk}

    Science.gov (United States)

    Nan, Hua; Tao, Yuan-Hong; Wang, Tian-Jiao; Zhang, Jun

    2016-10-01

    The construction of maximally entangled bases for the bipartite system C^d ⊗ C^d is discussed first, and some mutually unbiased bases consisting of maximally entangled bases are given for 2 ≤ d ≤ 5. Moreover, we study a systematic way of constructing mutually unbiased maximally entangled bases for the bipartite system C^d ⊗ C^{dk}.

  15. Maximal near-field radiative heat transfer between two plates

    Science.gov (United States)

    Nefzaoui, Elyes; Ezzahri, Younès; Drévillon, Jérémie; Joulain, Karl

    2013-09-01

    Near-field radiative transfer is a promising way to significantly and simultaneously enhance both the power densities and the efficiencies of thermo-photovoltaic (TPV) devices. A parametric study of the performance of Drude and Lorentz models in maximizing near-field radiative heat transfer between two semi-infinite planes separated by nanometric distances at room temperature is presented in this paper. Optimal parameters of these models that provide optical properties maximizing the radiative heat flux are reported and compared to real materials usually considered in similar studies, silicon carbide and heavily doped silicon in this case. Results are obtained by exact and approximate calculations (the latter in the extreme near-field regime under the electrostatic-limit hypothesis). The two methods are compared in terms of accuracy and CPU resource consumption, and their differences are explained according to a mesoscopic description of near-field radiative heat transfer. Finally, the frequently assumed hypothesis that radiative heat transfer is maximal when the two semi-infinite planes are made of identical materials is numerically confirmed, and its practical constraints are discussed. The presented results highlight relevant paths to follow in order to choose or design materials maximizing nano-TPV device performance.
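    The Drude model entering the parametric study is a two-parameter permittivity. A minimal sketch (the parameter values are illustrative, not the paper's optimized values):

```python
import numpy as np

def drude_eps(omega, eps_inf=1.0, omega_p=2.0e14, gamma=1.0e13):
    """Drude permittivity eps(w) = eps_inf - wp^2 / (w^2 + i*gamma*w).

    omega_p is the plasma frequency and gamma the damping rate (rad/s);
    the defaults here are made up for illustration.
    """
    return eps_inf - omega_p ** 2 / (omega ** 2 + 1j * gamma * omega)

eps_below = drude_eps(1.0e14)   # below omega_p: metallic, Re(eps) < 0
eps_above = drude_eps(3.0e14)   # above omega_p: dielectric-like, Re(eps) > 0
```

A negative real part below the plasma frequency is what supports the surface modes that dominate extreme near-field transfer, while Im ε > 0 encodes absorption; tuning ω_p and γ shifts and broadens this resonant window.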

  16. Self-force correction to geodetic spin precession in Kerr spacetime

    Science.gov (United States)

    Akcay, Sarp

    2017-08-01

    We present an expression for the gravitational self-force correction to the geodetic spin precession of a spinning compact object with small, but non-negligible mass in a bound, equatorial orbit around a Kerr black hole. We consider only conservative backreaction effects due to the mass of the compact object (m1), thus neglecting the effects of its spin s1 on its motion; i.e., we impose s1≪G m12/c and m1≪m2, where m2 is the mass parameter of the background Kerr spacetime. We encapsulate the correction to the spin precession in ψ , the ratio of the accumulated spin-precession angle to the total azimuthal angle over one radial orbit in the equatorial plane. Our formulation considers the gauge-invariant O (m1) part of the correction to ψ , denoted by Δ ψ , and is a generalization of the results of Akcay et al. [Classical Quantum Gravity 34, 084001 (2017), 10.1088/1361-6382/aa61d6] to Kerr spacetime. Additionally, we compute the zero-eccentricity limit of Δ ψ and show that this quantity differs from the circular orbit Δ ψcirc by a gauge-invariant quantity containing the gravitational self-force correction to general relativistic periapsis advance in Kerr spacetime. Our result for Δ ψ is expressed in a manner that readily accommodates numerical/analytical self-force computations, e.g., in the radiation gauge, and paves the way for the computation of a new eccentric-orbit Kerr gauge invariant beyond the generalized redshift.

  17. 77 FR 14016 - General Services Administration Acquisition Regulation; Preparation, Submission, and Negotiation...

    Science.gov (United States)

    2012-03-08

    ..., Submission, and Negotiation of Subcontracting Plans; Correction AGENCY: General Services Administration (GSA..., Preparation, Submission, and Negotiation of Subcontracting Plans; Correction. Correction In the information...

  18. Assessment of knowledge of general practitioners about nuclear medicine

    International Nuclear Information System (INIS)

    Zakavi, R.; Derakhshan, A.; Pourzadeh, Z.

    2002-01-01

    Nuclear medicine is an important department in most teaching hospitals in the world. Rapid progress in the field of nuclear medicine requires continuing education of medical students. We evaluated the knowledge of general practitioners in the field of nuclear medicine, hoping that this study will help managers in accurate planning of teaching programs. Methods and materials: We prepared a questionnaire with 14 questions on the applications of nuclear medicine techniques in different medical specialities, choosing questions as simple as possible while covering the most common techniques and the best imaging modality for certain diseases: one question in nuclear cardiology, one on lung disease, two on thyroid therapy, two on the gastrointestinal system, two on the genitourinary system, and two in nuclear oncology. Four further questions addressed general aspects of nuclear medicine, and another four concerned the necessity of a nuclear medicine subject during medical study, the best method of teaching nuclear medicine, and the preferred method of continuing education. Age, sex, graduation date and university of education of all subjects were also recorded. Results: One hundred general practitioners (58 male, 42 female; age range 27-45 years) were studied. About 60% of cases were 27-30 years old and 40 cases were older than 40. Seventy-two cases had graduated in the last 5 years. Mashad University was the main university of education (52 cases), with Tehran University (16 cases) and Tabriz University (6 cases) in the next ranks; 26 cases had graduated from other universities. Of the four questions in the field of general nuclear medicine, 27% of respondents answered all questions correctly, 37% answered two correctly, and 10% answered only one correctly; no correct answer was noted in 26%. Correct answers were noted in 80% in the field of nuclear cardiology and in 72% in the field of lung

  19. From a Proven Correct Microkernel to Trustworthy Large Systems

    Science.gov (United States)

    Andronick, June

    The seL4 microkernel was the world's first general-purpose operating system kernel with a formal, machine-checked proof of correctness. The next big step in the challenge of building truly trustworthy systems is to provide a framework for developing secure systems on top of seL4. This paper first gives an overview of seL4's correctness proof, together with its main implications and assumptions, and then describes our approach to provide formal security guarantees for large, complex systems.

  20. Maximizing noise energy for noise-masking studies.

    Science.gov (United States)

    Jules Étienne, Cédric; Arleo, Angelo; Allard, Rémy

    2017-08-01

    Noise-masking experiments are widely used to investigate visual functions. To be useful, noise generally needs to be strong enough to noticeably impair performance, but under some conditions, noise does not impair performance even when its contrast approaches the maximal displayable limit of 100 %. To extend the usefulness of noise-masking paradigms over a wider range of conditions, the present study developed a noise with great masking strength. There are two typical ways of increasing masking strength without exceeding the limited contrast range: use binary noise instead of Gaussian noise or filter out frequencies that are not relevant to the task (i.e., which can be removed without affecting performance). The present study combined these two approaches to further increase masking strength. We show that binarizing the noise after the filtering process substantially increases the energy at frequencies within the pass-band of the filter given equated total contrast ranges. A validation experiment showed that similar performances were obtained using binarized-filtered noise and filtered noise (given equated noise energy at the frequencies within the pass-band) suggesting that the binarization operation, which substantially reduced the contrast range, had no significant impact on performance. We conclude that binarized-filtered noise (and more generally, truncated-filtered noise) can substantially increase the energy of the noise at frequencies within the pass-band. Thus, given a limited contrast range, binarized-filtered noise can display higher energy levels than Gaussian noise and thereby widen the range of conditions over which noise-masking paradigms can be useful.
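
    The filter-then-binarize recipe can be sketched as follows: band-pass filter Gaussian noise in the frequency domain, keep only the sign of the result, and compare in-band energy once both images are scaled to the same contrast range. The pass band chosen here is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
noise = rng.standard_normal((n, n))

# Ideal band-pass filter in the frequency domain (illustrative pass band).
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
radius = np.hypot(fx, fy)
passband = (radius > 0.05) & (radius < 0.15)

filtered = np.real(np.fft.ifft2(np.fft.fft2(noise) * passband))
binarized = np.sign(filtered)          # binarize after filtering

def inband_energy(img):
    # Energy at pass-band frequencies, with each image scaled to the same
    # peak contrast range [-1, 1] (the displayable-contrast constraint).
    img = img / np.max(np.abs(img))
    spec = np.fft.fft2(img)
    return np.sum(np.abs(spec[passband]) ** 2)

print("filtered :", inband_energy(filtered))
print("binarized:", inband_energy(binarized))  # substantially larger
```

Binarizing compresses the amplitude distribution toward the contrast limits, so for an equated contrast range much more energy lands at the pass-band frequencies.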

  1. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    Science.gov (United States)

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813

  2. Maximally entangled mixed states of two atoms trapped inside an optical cavity

    International Nuclear Information System (INIS)

    Li Shangbin; Xu Jingbo

    2009-01-01

    In some off-resonant cases, the reduced density matrix of two atoms symmetrically coupled to an optical cavity can, to a very good approximation, approach maximally entangled mixed states or maximal-Bell-violation mixed states during its evolution. The influence of phase decoherence on the generation of a maximally entangled mixed state is also discussed

  3. The maximal kinematical invariance group of fluid dynamics and explosion-implosion duality

    International Nuclear Information System (INIS)

    O'Raifeartaigh, L.; Sreedhar, V.V.

    2001-01-01

    It has recently been found that supernova explosions can be simulated in the laboratory by implosions induced in a plasma by intense lasers. A theoretical explanation is that the inversion transformation, (Σ: t→-1/t, x→x/t), leaves the Euler equations of fluid dynamics, with standard polytropic exponent, invariant. This implies that the kinematical invariance group of the Euler equations is larger than the Galilei group. In this paper we determine, in a systematic manner, the maximal invariance group G of general fluid dynamics and show that it is a semi-direct product G = SL(2, R) ⋉ G₉, where the SL(2, R) group contains the time translations, dilations, and the inversion Σ, and G₉ is the static (nine-parameter) Galilei group. A subtle aspect of the inclusion of viscosity fields is discussed and it is shown that the Navier-Stokes assumption of constant viscosity breaks the SL(2, R) group to a two-parameter group of time translations and dilations in a tensorial way. The 12-parameter group G is also known to be the maximal invariance group of the free Schroedinger equation. It originates in the free Hamilton-Jacobi equation which is central to both fluid dynamics and the Schroedinger equation

  4. Adding risks: Some general results about time diversification

    NARCIS (Netherlands)

    Zou, L.; Kin, L.

    2000-01-01

    We show in general that risky investments become more attractive as the investment horizon (n) lengthens. Specifically, any investor's maximal expected utility directly increases with n, as does the investor's willingness to allocate more capital to the risky assets if his optimal strategy is bounded

  5. On the maximal cut of Feynman integrals and the solution of their differential equations

    Directory of Open Access Journals (Sweden)

    Amedeo Primo

    2017-03-01

    The standard procedure for computing scalar multi-loop Feynman integrals consists in reducing them to a basis of so-called master integrals, deriving differential equations in the external invariants satisfied by the latter and, finally, trying to solve them as a Laurent series in ϵ=(4−d)/2, where d is the space–time dimension. The differential equations are, in general, coupled and can be solved using Euler's variation of constants, provided that a set of homogeneous solutions is known. Given an arbitrary differential equation of order higher than one, there exists no general method for finding its homogeneous solutions. In this paper we show that the maximal cut of the integrals under consideration provides one set of homogeneous solutions, simplifying substantially the solution of the differential equations.
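
    The role of the homogeneous solutions can be made concrete with Euler's variation of constants for a first-order coupled system (a schematic sketch, not the paper's notation):

```latex
\frac{d}{dx}\,f(x,\epsilon) = A(x,\epsilon)\,f(x,\epsilon) + g(x,\epsilon),
\qquad
f(x,\epsilon) = G(x)\left[\,c + \int^{x} G^{-1}(t)\,g(t,\epsilon)\,dt\,\right],
```

    where the columns of G(x) are homogeneous solutions, i.e. G' = A G; the maximal cut supplies exactly such solutions, after which the full solution follows by quadrature.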

  6. IIB solutions with N>28 Killing spinors are maximally supersymmetric

    International Nuclear Information System (INIS)

    Gran, U.; Gutowski, J.; Papadopoulos, G.; Roest, D.

    2007-01-01

    We show that all IIB supergravity backgrounds which admit more than 28 Killing spinors are maximally supersymmetric. In particular, we find that for all N>28 backgrounds the supercovariant curvature vanishes, and that quotients of maximally supersymmetric backgrounds preserve either all 32 supersymmetries or N<29

  7. Correction of Cardy–Verlinde formula for Fermions and Bosons with modified dispersion relation

    Energy Technology Data Exchange (ETDEWEB)

    Sadatian, S. Davood, E-mail: sd-sadatian@um.ac.ir; Dareyni, H.

    2017-05-15

    The Cardy–Verlinde formula links the entropy of a conformal field theory to its total energy and Casimir energy in a D-dimensional space. To correct black hole thermodynamics, a modified dispersion relation can be used, which has been proposed as a general feature of quantum gravity approaches. In this paper, the thermodynamics of the Schwarzschild four-dimensional black hole is corrected using the modified dispersion relation for Fermions and Bosons. Finally, using the modified thermodynamics of the Schwarzschild four-dimensional black hole, a generalization of the Cardy–Verlinde formula is obtained. - Highlights: • The modified Cardy–Verlinde formula obtained using MDR for Fermions and Bosons. • The modified entropy of the black hole used to correct the Cardy–Verlinde formula. • The modified entropy of the CFT has been obtained.

  8. Correction of patient motion in cone-beam CT using 3D-2D registration

    Science.gov (United States)

    Ouadah, S.; Jacobson, M.; Stayman, J. W.; Ehtiati, T.; Weiss, C.; Siewerdsen, J. H.

    2017-12-01

    Cone-beam CT (CBCT) is increasingly common in the guidance of interventional procedures, but can be subject to artifacts arising from patient motion during fairly long (~5-60 s) scan times. We present a fiducial-free method to mitigate motion artifacts using 3D-2D image registration that simultaneously corrects residual errors in the intrinsic and extrinsic parameters of geometric calibration. The 3D-2D registration process registers each projection to a prior 3D image by maximizing gradient orientation using the covariance matrix adaptation-evolution strategy optimizer. The resulting rigid transforms are applied to the system projection matrices, and a 3D image is reconstructed via model-based iterative reconstruction. Phantom experiments were conducted using a Zeego robotic C-arm to image a head phantom undergoing 5-15 cm translations and 5-15° rotations. To further test the algorithm, clinical images were acquired with a CBCT head scanner in which long scan times were susceptible to significant patient motion. CBCT images were reconstructed using a penalized likelihood objective function. For phantom studies, the structural similarity (SSIM) between motion-free and motion-corrected images was >0.995, a statistically significant improvement over the SSIM values of uncorrected images. Additionally, motion-corrected images exhibited a point-spread function with full-width at half maximum comparable to that of the motion-free reference image. Qualitative comparison of the motion-corrupted and motion-corrected clinical images demonstrated a significant improvement in image quality after motion correction. This indicates that the 3D-2D registration method could provide a useful approach to motion artifact correction under assumptions of local rigidity, as in the head, pelvis, and extremities. The method is highly parallelizable, and the automatic correction of residual geometric calibration errors provides added benefit that could be valuable in routine use.
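
    The geometric core of the correction, folding a per-projection rigid transform into the 3 × 4 projection matrix, can be sketched as follows (toy matrices, not a real scanner geometry):

```python
import numpy as np

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rigid_transform(R, t):
    # 4x4 homogeneous rigid transform from rotation R and translation t.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def project(P, x):
    # Pinhole projection of a 3D point with a 3x4 projection matrix.
    u = P @ np.append(x, 1.0)
    return u[:2] / u[2]

# Toy projection matrix (illustrative values only).
P = np.array([[1000., 0., 256., 0.],
              [0., 1000., 256., 0.],
              [0., 0., 1., 10.]])

# Rigid motion estimate from 3D-2D registration (illustrative values).
T = rigid_transform(rotation_z(np.deg2rad(5)), np.array([0.1, -0.2, 0.0]))
P_corr = P @ T   # motion folded into the system projection matrix

x = np.array([1.0, 2.0, 3.0])
# Projecting the moved point with P equals projecting the original point
# with the motion-compensated matrix P_corr.
assert np.allclose(project(P, T[:3] @ np.append(x, 1.0)), project(P_corr, x))
print("motion folded into projection matrix consistently")
```

Reconstruction then proceeds with the corrected matrices, which is why the same machinery also absorbs residual geometric-calibration errors.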

  9. Non-compact random generalized games and random quasi-variational inequalities

    OpenAIRE

    Yuan, Xian-Zhi

    1994-01-01

    In this paper, existence theorems of random maximal elements, random equilibria for the random one-person game and random generalized game with a countable number of players are given as applications of random fixed point theorems. By employing existence theorems of random generalized games, we deduce the existence of solutions for non-compact random quasi-variational inequalities. These in turn are used to establish several existence theorems of noncompact generalized random ...

  10. Discrete maximal regularity of time-stepping schemes for fractional evolution equations.

    Science.gov (United States)

    Jin, Bangti; Li, Buyang; Zhou, Zhi

    2018-01-01

    In this work, we establish the maximal ℓ^p-regularity for several time stepping schemes for a fractional evolution model which involves a fractional derivative of order α, 0 < α < 1, in time. These schemes include convolution quadratures generated by the backward Euler method and the second-order backward difference formula, the L1 scheme, the explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include the operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
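
    A minimal sketch of the first scheme mentioned, the convolution quadrature generated by the backward Euler method: its weights are the coefficients of (1 − z)^α, computed by a simple recurrence, and they give a first-order approximation of the fractional derivative. Here it is checked against the exact derivative of u(t) = t.

```python
import numpy as np
from math import gamma

def cq_weights(alpha, n):
    # Coefficients of (1 - z)^alpha: w_0 = 1, w_j = w_{j-1} (j - 1 - alpha)/j.
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    return w

def frac_derivative(u, tau, alpha):
    # D^alpha u(t_n) ~ tau^(-alpha) * sum_j w_j u(t_{n-j})  (backward Euler CQ).
    w = cq_weights(alpha, len(u) - 1)
    return np.array([np.dot(w[:k + 1], u[k::-1]) for k in range(len(u))]) / tau**alpha

# Test case: D^alpha t = t^(1-alpha) / Gamma(2-alpha), with u(0) = 0.
alpha, tau = 0.5, 1e-3
t = np.arange(0, 2001) * tau
approx = frac_derivative(t, tau, alpha)
exact = t[1:] ** (1 - alpha) / gamma(2 - alpha)
rel = np.abs(approx[1:] - exact) / exact
print("relative error at t = 2:", rel[-1])
```

For alpha = 1 the weights collapse to the ordinary backward difference [1, -1, 0, ...], which is a quick sanity check on the recurrence.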

  11. Hadron mass corrections in semi-inclusive deep inelastic scattering

    International Nuclear Information System (INIS)

    Accardi, A.; Hobbs, T.; Melnitchouk, W.

    2009-01-01

    We derive mass corrections for semi-inclusive deep inelastic scattering of leptons from nucleons using a collinear factorization framework which incorporates the initial-state mass of the target nucleon and the final-state mass of the produced hadron h. The hadron mass correction is made by introducing a generalized, finite-Q^2 scaling variable ζ_h for the hadron fragmentation function, which approaches the usual energy fraction z_h = E_h/ν in the Bjorken limit. We systematically examine the kinematic dependencies of the mass corrections to semi-inclusive cross sections, and find that these are even larger than for inclusive structure functions. The hadron mass corrections compete with the experimental uncertainties at kinematics typical of current facilities, at low Q^2 and intermediate x_B > 0.3, and will be important to efforts at extracting parton distributions from semi-inclusive processes at intermediate energies.

  12. Correct Use of Three-Point Seatbelt by Pregnant Occupants

    Directory of Open Access Journals (Sweden)

    B. Serpil Acar

    2017-12-01

    Automobile collisions are the largest cause of accidental death and placental abruption in pregnancy. Lives can be saved by correct use of the three-point seatbelt during pregnancy, and human interaction is essential for correct use of seatbelts. The objective of this study is to investigate pregnant women's use of the correct shoulder section together with the correct lap section, as advised by obstetricians and highway experts, and to identify the most common seatbelt misuse during pregnancy. An international web survey was conducted in five languages for this study. 1931 pregnant women reported their use of seatbelts and how they position the shoulder and lap sections of their seatbelts. Special attention was paid to distinguishing between 'partly correct' and 'correct' seatbelt positioning. The questionnaire responses are used to determine the magnitude of every combination of correct and incorrect shoulder and lap section positioning during pregnancy. Results show that seatbelt usage in pregnancy is generally high in the world. However, correct use of the entire seatbelt is very low, at only 4.3% of all respondents. 40.8% of the respondents use the shoulder portion of the belt correctly, whilst 13.2% use the lap section correctly. The most common misuse is 'across abdomen' or 'not using the seatbelt at all', and both pose danger to pregnant women and their fetuses. Correct use of three-point seatbelts is a challenge during pregnancy. We recommend that the media, medical community, and automotive industry provide targeted information about correct seatbelt use during pregnancy, and that accident databases include 'correct seatbelt use' information in crash statistics.

  13. Maximal near-field radiative heat transfer between two plates

    OpenAIRE

    Nefzaoui, Elyes; Ezzahri, Younès; Drevillon, Jérémie; Joulain, Karl

    2013-01-01

    Near-field radiative transfer is a promising way to significantly and simultaneously enhance both thermo-photovoltaic (TPV) devices power densities and efficiencies. A parametric study of Drude and Lorentz models performances in maximizing near-field radiative heat transfer between two semi-infinite planes separated by nanometric distances at room temperature is presented in this paper. Optimal parameters of these models that provide optical properties maximizing the r...

  14. Endogenous Generalized Weights under DEA Control

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    Non-parametric efficiency analysis, such as Data Envelopment Analysis (DEA), has so far relied on endogenous local or exogenous general weights, based on revealed preferences or market prices. However, as DEA is gaining popularity in regulation and normative budgeting, the strategic interest of the evaluated industry calls for attention. We offer endogenous general prices based on a reformulation of DEA in which the units collectively propose the set of weights that maximize their efficiency. The sector-wide efficiency is then a result of compromising the scores of more specialized smaller units...

  15. Geometry Correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network

    Science.gov (United States)

    Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao

    2018-03-01

    Aiming at the disadvantages of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed that introduces an adaptive genetic algorithm (AGA) and an RBF neural network. Combined with the geometry correction principle for UAV remote sensing images, the AGA-RBF algorithm and its solving steps are presented in order to realize geometry correction for UAV remote sensing. Correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network with AGA and the LMS algorithm, respectively. Finally, experiments show that the AGA-RBF algorithm has the advantages of high correction accuracy, fast running speed and strong generalization ability.
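
    The RBF-mapping step at the heart of such methods can be sketched as follows: learn a smooth map from distorted image coordinates to ground coordinates from ground control points. The AGA/LMS training described in the abstract is replaced here by a direct linear solve on synthetic data, so this is purely illustrative.

```python
import numpy as np

def rbf(r, eps=1.0):
    return np.exp(-(eps * r) ** 2)          # Gaussian basis function

def fit_rbf(centers, values, eps=1.0):
    # Interpolation weights: solve K w = values, one weight column per output.
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return np.linalg.solve(rbf(d, eps), values)

def eval_rbf(centers, weights, x, eps=1.0):
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return rbf(d, eps) @ weights

# Ground control points on a grid, observed under a smooth synthetic warp.
gx, gy = np.meshgrid(np.linspace(0.0, 10.0, 7), np.linspace(0.0, 10.0, 7))
ground = np.column_stack([gx.ravel(), gy.ravel()])       # true coordinates
image = ground + 0.3 * np.sin(ground[:, ::-1] / 2.0)     # distorted observations

# Fit the image -> ground mapping and apply it back to the control points.
w = fit_rbf(image, ground)
corrected = eval_rbf(image, w, image)
print("max control-point residual:", np.max(np.abs(corrected - ground)))
```

Since this is exact interpolation, the residual at the control points is essentially machine precision; real pipelines evaluate accuracy on held-out check points instead.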

  16. Corrected entropy of Friedmann-Robertson-Walker universe in tunneling method

    International Nuclear Information System (INIS)

    Zhu, Tao; Ren, Ji-Rong; Li, Ming-Fan

    2009-01-01

    In this paper, we study the thermodynamic quantities of the Friedmann-Robertson-Walker (FRW) universe by using the tunneling formalism beyond the semiclassical approximation developed by Banerjee and Majhi [25]. We first calculate the corrected Hawking-like temperature on the apparent horizon by considering both scalar-particle and fermion tunneling. With this corrected Hawking-like temperature, the explicit expressions for the corrected entropy of the apparent horizon are computed for various gravity theories, including Einstein gravity, Gauss-Bonnet gravity, Lovelock gravity, f(R) gravity and scalar-tensor gravity. Our results show that the corrected entropy formulas for the different gravity theories can be written as a general expression (4.39) of the same form. It is also shown that this expression is valid for black holes. This might imply that the expression for the corrected entropy derived from the tunneling method is independent of the gravity theory, the spacetime, and the dimension of the spacetime. Moreover, it is concluded that the FRW universe satisfies the basic thermodynamical property that the corrected entropy on the apparent horizon is a state function

  17. Uncountably many maximizing measures for a dense subset of continuous functions

    Science.gov (United States)

    Shinoda, Mao

    2018-05-01

    Ergodic optimization aims to single out dynamically invariant Borel probability measures which maximize the integral of a given 'performance' function. For a continuous self-map of a compact metric space and a dense set of continuous functions, we show the existence of uncountably many ergodic maximizing measures. We also show that, for a topologically mixing subshift of finite type and a dense set of continuous functions, there exist uncountably many ergodic maximizing measures with full support and positive entropy.

  18. Hepatitis C and the correctional population.

    Science.gov (United States)

    Reindollar, R W

    1999-12-27

    The hepatitis C epidemic has extended well into the correctional population, where individuals predominantly originate from high-risk environments and have high-risk behaviors. Epidemiologic data estimate that 30% to 40% of the 1.8 million inmates in the United States are infected with the hepatitis C virus (HCV), the majority of whom were infected before incarceration. As in the general population, injection drug use accounts for the majority of HCV infections in this group: one- to two-thirds of inmates have a history of injection drug use before incarceration and continue to use while in prison. Although correctional facilities also represent a high-risk environment for HCV infection because of a continued high incidence of drug use and high-risk sexual activities, available data indicate a low HCV seroconversion rate of 1.1 per 100 person-years in prison. Moreover, a high annual turnover rate means that many inmates return to their previous high-risk environments and behaviors, which are conducive either to acquiring or to spreading HCV. Despite a very high prevalence of HCV infection within the US correctional system, identification and treatment of at-risk individuals is inconsistent at best. Variable access to correctional health-care resources, limited funding, high inmate turnover rates, and deficient follow-up care after release represent a few of the factors that confound HCV control and prevention in this group. Future efforts must focus on establishing an accurate knowledge base and implementing education, policies, and procedures for the prevention and treatment of hepatitis C in correctional populations.

  19. Open quantum systems and error correction

    Science.gov (United States)

    Shabani Barzegar, Alireza

    Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracies in control forces, so engineering methods to combat errors in quantum devices is in high demand. In this thesis, I focus on realistic formulations of quantum error correction methods, i.e., formulations that incorporate experimental challenges. The thesis is presented in two parts, on open quantum systems and on quantum error correction. Chapters 2 and 3 cover open quantum system theory: it is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely-positive formulation of quantum maps; most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter: after introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. Quantum error correction is treated in chapters 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs) which does not require accurate initialization (published in [Shabani and Lidar, 2005b]). In chapter 5, we present a semidefinite-program optimization approach to quantum error correction that yields codes and recovery procedures robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC

  20. Effect of Maximal Versus Supra-Maximal Exhausting Race on Lipid Peroxidation, Antioxidant Activity and Muscle-Damage Biomarkers in Long-Distance and Middle-Distance Runners.

    Science.gov (United States)

    Mohamed, Said; Lamya, Ncir; Hamda, Mansour

    2016-03-01

    Exhaustive physical exercise increases lipid peroxidation and causes significant muscle damage; the human body tries to mitigate these adverse effects by mobilizing its antioxidant defenses. This study investigates the effect of a maximal versus a supra-maximal race sustained until exhaustion on lipid peroxidation, antioxidant activity and muscle-damage biomarkers in trained (i.e. long-distance and middle-distance runners) and sedentary subjects. The study was carried out on 8 middle-distance runners (MDR), 9 long-distance runners (LDR), and 8 sedentary subjects (SS). Each subject underwent two exhaustive running tests: an incremental test (VAMEVAL) and a constant supra-maximal-intensity test (limited-time test). Blood samples were collected at rest and immediately after each test. A significant increase in malondialdehyde (MDA) concentrations was observed in SS and MDR after the VAMEVAL test and in LDR after the limited-time test. A significant difference was also observed between LDR and the other two groups after the VAMEVAL test, and between LDR and MDR after the limited-time test. Significant modifications were likewise noted in myoglobin, CK, LDH, IL-6, TNF-α, and TAS, depending on the race type and the sporting speciality. Maximal and supra-maximal races induce a significant increase in lipid peroxidation and cause non-negligible inflammation and muscle damage. These effects were related to the type of physical exercise and the sporting speciality.

  1. Maximal Abelian sets of roots

    CERN Document Server

    Lawther, R

    2018-01-01

    In this work the author lets \\Phi be an irreducible root system, with Coxeter group W. He considers subsets of \\Phi which are abelian, meaning that no two roots in the set have sum in \\Phi \\cup \\{ 0 \\}. He classifies all maximal abelian sets (i.e., abelian sets properly contained in no other) up to the action of W: for each W-orbit of maximal abelian sets we provide an explicit representative X, identify the (setwise) stabilizer W_X of X in W, and decompose X into W_X-orbits. Abelian sets of roots are closely related to abelian unipotent subgroups of simple algebraic groups, and thus to abelian p-subgroups of finite groups of Lie type over fields of characteristic p. Parts of the work presented here have been used to confirm the p-rank of E_8(p^n), and (somewhat unexpectedly) to obtain for the first time the 2-ranks of the Monster and Baby Monster sporadic groups, together with the double cover of the latter. Root systems of classical type are dealt with quickly here; the vast majority of the present work con...

  2. Maximizing Function through Intelligent Robot Actuator Control

    Data.gov (United States)

    National Aeronautics and Space Administration — Maximizing Function through Intelligent Robot Actuator Control Successful missions to Mars and beyond will only be possible with the support of high-performance...

  3. Meson exchange current corrections to magnetic moments in quantum hadro-dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Morse, T M; Price, C E; Shepard, J R [Colorado Univ., Boulder (USA). Dept. of Physics

    1990-11-15

    We have calculated pion exchange current corrections to the magnetic moments of closed shell ±1 particle nuclei near A=16 and A=40 within the framework of quantum hadro-dynamics (QHD). We find that the correction is significant and that, in general, the agreement of the QHD isovector moments with experiment is worsened. Comparisons to previous non-relativistic calculations are also made. (orig.).

  4. A generalized complexity measure based on Rényi entropy

    Science.gov (United States)

    Sánchez-Moreno, Pablo; Angulo, Juan Carlos; Dehesa, Jesus S.

    2014-08-01

    The intrinsic statistical complexities of finite many-particle systems (i.e., those defined in terms of the single-particle density) quantify the degree of structure or patterns, far beyond the entropy measures. They are intuitively constructed to be minima at the opposite extremes of perfect order and maximal randomness. Starting from the pioneering LMC measure, which satisfies these requirements, some extensions of LMC-Rényi type have been published in the literature. The latter measures were shown to describe a variety of physical aspects of the internal disorder in atomic and molecular systems (e.g., quantum phase transitions, atomic shell filling) which are not grasped by their mother LMC quantity. However, they are not minimal for maximal randomness in general. In this communication, we propose a generalized LMC-Rényi complexity which overcomes this problem. Some applications which illustrate this fact are given.
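As context for the family of measures discussed above, the Rényi entropy and the two-parameter LMC-Rényi complexity are commonly written as follows (a sketch from the general literature; the specific generalization proposed in this paper modifies this form so that minimality at maximal randomness holds):

```latex
R_\alpha[\rho] = \frac{1}{1-\alpha}\,\ln \int \rho^{\alpha}(\vec{r})\, d\vec{r},
\qquad
C_{\alpha,\beta}[\rho] = e^{\,R_\alpha[\rho] - R_\beta[\rho]}, \quad 0 < \alpha < \beta .
```

In the limit α → 1 with β = 2 this reduces to the original LMC measure C = H·D, with H = e^{S[ρ]} the exponential Shannon entropy and D = ∫ρ² the disequilibrium.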

  5. Efficient Color-Dressed Calculation of Virtual Corrections

    CERN Document Server

    Giele, Walter; Winter, Jan

    2010-01-01

    With the advent of generalized unitarity and parametric integration techniques, the construction of a generic Next-to-Leading Order Monte Carlo becomes feasible. Such a generator will entail the treatment of QCD color in the amplitudes. We extend the concept of color dressing to one-loop amplitudes, resulting in the formulation of an explicit algorithmic solution for the calculation of arbitrary scattering processes at Next-to-Leading order. The resulting algorithm is of exponential complexity, that is the numerical evaluation time of the virtual corrections grows by a constant multiplicative factor as the number of external partons is increased. To study the properties of the method, we calculate the virtual corrections to $n$-gluon scattering.

  6. Associations of maximal strength and muscular endurance test scores with cardiorespiratory fitness and body composition.

    Science.gov (United States)

    Vaara, Jani P; Kyröläinen, Heikki; Niemi, Jaakko; Ohrankämmen, Olli; Häkkinen, Arja; Kocay, Sheila; Häkkinen, Keijo

    2012-08-01

    The purpose of the present study was to assess the relationships of maximal strength and muscular endurance test scores with the previously widely studied measures of body composition and maximal aerobic capacity. 846 young men (25.5 ± 5.0 yrs) participated in the study. Maximal strength was measured using isometric bench press, leg extension and grip strength. Muscular endurance tests consisted of push-ups, sit-ups and repeated squats. An indirect graded cycle ergometer test was used to estimate maximal aerobic capacity (VO2max). Body composition was determined with bioelectrical impedance. Moreover, waist circumference (WC) and height were measured and body mass index (BMI) calculated. Maximal bench press was positively correlated with push-ups (r = 0.61) and with other strength measures (r = 0.34), and maximal strength correlated positively (r = 0.36-0.44) with the endurance test scores. Test scores were related to maximal aerobic capacity and body fat content, while fat free mass was associated with maximal strength test scores and thus is a major determinant of maximal strength. A contributive role of maximal strength in muscular endurance tests could be identified for the upper, but not the lower extremities. These findings suggest that the push-up test is indicative not only of body fat content and maximal aerobic capacity but also of upper-body maximal strength, whereas the repeated squat test is mainly indicative of body fat content and maximal aerobic capacity, but not of lower-extremity maximal strength.

  7. Corrections in the gold foil activation method for determination of neutron beam density

    DEFF Research Database (Denmark)

    Als-Nielsen, Jens Aage

    1967-01-01

    A finite foil thickness and deviation in the cross section from the 1ν law imply corrections in the determination of neutron beam densities by means of foil activation. These corrections, which depend on the neutron velocity distribution, have been examined in general and are given in a specific...

  8. Cycle length maximization in PWRs using empirical core models

    International Nuclear Information System (INIS)

    Okafor, K.C.; Aldemir, T.

    1987-01-01

    The problem of maximizing cycle length in nuclear reactors through optimal fuel and poison management has been addressed by many investigators. An often-used neutronic modeling technique is to find correlations between the state and control variables to describe the response of the core to changes in the control variables. In this study, a set of linear correlations, generated by two-dimensional diffusion-depletion calculations, is used to find the enrichment distribution that maximizes cycle length for the initial core of a pressurized water reactor (PWR). These correlations (a) incorporate the effect of composition changes in all the control zones on a given fuel assembly and (b) are valid for a given range of control variables. The advantage of using such correlations is that the cycle length maximization problem can be reduced to a linear programming problem
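The reduction described above — linear correlations turning the maximization into a linear program — can be sketched with a toy stand-in. The coefficients, bounds, and constraints below are purely illustrative (not from the paper), and the discretized search replaces a proper LP solver:

```python
from itertools import product

# Toy empirical core model: cycle length and power peaking are taken as
# linear functions of three zone enrichments (illustrative coefficients).
c = [40.0, 25.0, 10.0]      # extra days of cycle length per w/o enrichment, per zone
peak = [0.30, 0.15, 0.05]   # peaking-factor sensitivity per zone

best = None
for x in product([2.0, 2.5, 3.0, 3.5], repeat=3):        # candidate enrichments (w/o)
    if sum(x) > 9.0:                                     # fissile inventory limit
        continue
    if 1.0 + sum(p * xi for p, xi in zip(peak, x)) > 2.8:  # peaking constraint
        continue
    length = 100.0 + sum(ci * xi for ci, xi in zip(c, x))  # linear correlation
    if best is None or length > best[0]:
        best = (length, x)

print(best)
```

With these toy numbers the search returns a cycle length of 347.5 days at enrichments (3.5, 3.5, 2.0): the optimizer loads the zones with the largest cycle-length sensitivity first, exactly the behavior a real LP over the empirical correlations would exhibit.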

  9. Conformal blocks from Wilson lines with loop corrections

    Science.gov (United States)

    Hikida, Yasuaki; Uetoko, Takahiro

    2018-04-01

    We compute the conformal blocks of the Virasoro minimal model or its W_N extension with large central charge from Wilson line networks in a Chern-Simons theory including loop corrections. In our previous work, we offered a prescription to regularize divergences from loops attached to Wilson lines. In this paper, we generalize our method with the prescription by dealing with more general operators for N = 3 and apply it to the identity W_3 block. We further compute general light-light blocks and heavy-light correlators for N = 2 with the Wilson line method and compare the results with known ones obtained using a different prescription. We briefly discuss general W_3 blocks.

  10. Ecological optimization for generalized irreversible Carnot refrigerators

    International Nuclear Information System (INIS)

    Chen Lingen; Zhu Xiaoqin; Sun Fengrui; Wu Chih

    2005-01-01

    The optimal ecological performance of a Newton's law generalized irreversible Carnot refrigerator with the losses of heat resistance, heat leakage and internal irreversibility is derived by taking an ecological optimization criterion as the objective, which consists of maximizing a function representing the best compromise between the exergy output rate and exergy loss rate (entropy production rate) of the refrigerator. Numerical examples are given to show the effects of heat leakage and internal irreversibility on the optimal performance of generalized irreversible refrigerators

  11. Strategy to maximize maintenance operation

    OpenAIRE

    Espinoza, Michael

    2005-01-01

    This project presents a strategic analysis to maximize maintenance operations in Alcan Kitimat Works in British Columbia. The project studies the role of maintenance in improving its overall maintenance performance. It provides strategic alternatives and specific recommendations addressing Kitimat Works key strategic issues and problems. A comprehensive industry and competitive analysis identifies the industry structure and its competitive forces. In the mature aluminium industry, the bargain...

  12. Maximizing the Range of a Projectile.

    Science.gov (United States)

    Brown, Ronald A.

    1992-01-01

    Discusses solutions to the problem of maximizing the range of a projectile. Presents three references that solve the problem with and without the use of calculus. Offers a fourth solution suitable for introductory physics courses that relies more on trigonometry and the geometry of the problem. (MDH)
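The calculus solution referenced above fits in two lines (level ground, launch speed v₀, no air resistance):

```latex
R(\theta) = \frac{v_0^2 \sin 2\theta}{g},
\qquad
\frac{dR}{d\theta} = \frac{2 v_0^2 \cos 2\theta}{g} = 0
\;\Longrightarrow\; 2\theta = 90^\circ,\quad \theta = 45^\circ .
```

The trigonometric route avoids the derivative entirely: since R ∝ sin 2θ and the sine attains its maximum of 1 at 90°, the range is maximal at θ = 45°.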

  13. Maximization of eigenvalues using topology optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2000-01-01

    to localized modes in low density areas. The topology optimization problem is formulated using the SIMP method. Special attention is paid to a numerical method for removing localized eigenmodes in low density areas. The method is applied to numerical examples of maximizing the first eigenfrequency, One example...

  14. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    Science.gov (United States)

    Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood estimation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
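The deconvolution step can be sketched in one dimension. The MLEM/Richardson-Lucy iteration below deblurs a point source smeared by a known motion kernel; it is an illustrative stand-in for the paper's 3D implementation, with the signal, kernel, and iteration count chosen arbitrarily:

```python
import numpy as np

def mlem_deconvolve(blurred, kernel, iterations=50):
    """MLEM (Richardson-Lucy) deblurring with a known, shift-invariant kernel."""
    estimate = np.full_like(blurred, blurred.mean())   # flat positive start
    kernel_flipped = kernel[::-1]
    for _ in range(iterations):
        predicted = np.convolve(estimate, kernel, mode="same")
        ratio = blurred / np.maximum(predicted, 1e-12)  # avoid divide-by-zero
        estimate *= np.convolve(ratio, kernel_flipped, mode="same")
    return estimate

truth = np.zeros(64); truth[30] = 1.0        # point source
kernel = np.ones(5) / 5.0                    # 5-sample box blur (known motion)
blurred = np.convolve(truth, kernel, mode="same")
restored = mlem_deconvolve(blurred, kernel)
print(blurred.max(), restored.max())         # the restored peak is sharper
```

The multiplicative update preserves non-negativity, which is why MLEM-type schemes are preferred over naive inverse filtering for count-limited PET data.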

  15. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    Energy Technology Data Exchange (ETDEWEB)

    Faber, T L; Raghunath, N; Tudorascu, D; Votaw, J R [Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322 (United States)], E-mail: tfaber@emory.edu

    2009-02-07

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood estimation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.

  16. Attenuated Increase in Maximal Force of Rat Medial Gastrocnemius Muscle after Concurrent Peak Power and Endurance Training

    Directory of Open Access Journals (Sweden)

    Regula Furrer

    2013-01-01

    Improvement of muscle peak power and oxidative capacity are generally presumed to be mutually exclusive. However, this may not hold when fibre type-specific recruitment is used. Since rat medial gastrocnemius muscle (GM) is composed of high and low oxidative compartments which are recruited task specifically, we hypothesised that the adaptive responses to peak power training were unaffected by additional endurance training. Thirty rats were subjected to either no training (control), peak power training (PT), or both peak power and endurance training (PET), which was performed on a treadmill 5 days per week for 6 weeks. Maximal running velocity increased 13.5% throughout the training and was similar in both training groups. Only after PT, GM maximal force was 10% higher than that of the control group. In the low oxidative compartment, mRNA levels of myostatin and MuRF-1 were higher after PT as compared to those of control and PET groups, respectively. Phospho-S6 ribosomal protein levels remained unchanged, suggesting that the elevated myostatin levels after PT did not inhibit mTOR signalling. In conclusion, even with task-specific recruitment of the compartmentalized rat GM, additional endurance training interfered with the adaptive response to peak power training and attenuated the increase in maximal force after power training.

  17. Implementation of Cascade Gamma and Positron Range Corrections for I-124 Small Animal PET

    Science.gov (United States)

    Harzmann, S.; Braun, F.; Zakhnini, A.; Weber, W. A.; Pietrzyk, U.; Mix, M.

    2014-02-01

    Small animal Positron Emission Tomography (PET) should provide accurate quantification of regional radiotracer concentrations and high spatial resolution. This is challenging for non-pure positron emitters with high positron endpoint energies, such as I-124: On the one hand the cascade gammas emitted from this isotope can produce coincidence events with the 511 keV annihilation photons leading to quantification errors. On the other hand the long range of the high energy positron degrades spatial resolution. This paper presents the implementation of a comprehensive correction technique for both of these effects. The established corrections include a modified sinogram-based tail-fitting approach to correct for scatter, random and cascade gamma coincidences and a compensation for resolution degradation effects during the image reconstruction. Resolution losses were compensated for by an iterative algorithm which incorporates a convolution kernel derived from line source measurements for the microPET Focus 120 system. The entire processing chain for these corrections was implemented, whereas previous work has only addressed parts of this process. Monte Carlo simulations with GATE and measurements of mice with the microPET Focus 120 show that the proposed method reduces absolute quantification errors on average to 2.6% compared to 15.6% for the ordinary Maximum Likelihood Expectation Maximization algorithm. Furthermore resolution was improved in the order of 11-29% depending on the number of convolution iterations. In summary, a comprehensive, fast and robust algorithm for the correction of small animal PET studies with I-124 was developed which improves quantitative accuracy and spatial resolution.

  18. Maximal frustration as an immunological principle.

    Science.gov (United States)

    de Abreu, F Vistulo; Mostardinha, P

    2009-03-06

    A fundamental problem in immunology is that of understanding how the immune system selects promptly which cells to kill without harming the body. This problem poses an apparent paradox. Strong reactivity against pathogens seems incompatible with perfect tolerance towards self. We propose a different view on cellular reactivity to overcome this paradox: effector functions should be seen as the outcome of cellular decisions which can be in conflict with other cells' decisions. We argue that if cellular systems are frustrated, then extensive cross-reactivity among the elements in the system can decrease the reactivity of the system as a whole and induce perfect tolerance. Using numerical and mathematical analyses, we discuss two simple models that perform optimal pathogenic detection with no autoimmunity if cells are maximally frustrated. This study strongly suggests that a principle of maximal frustration could be used to build artificial immune systems. It would be interesting to test this principle in the real adaptive immune system.

  19. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    Directory of Open Access Journals (Sweden)

    Ahmed Elsaadany

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications are proposed to correct the range and drift of artillery projectiles, such as the course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules, one devoted to range correction (drag ring brake) and the second devoted to drift correction (canard based-correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects on deflection of the projectile aerodynamic parameters. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. Deploying the drag brake in an early stage of the trajectory results in a large range correction. The correction occasion time can be predefined depending on the required range correction. On the other hand, the canard based-correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion.

  20. Accuracy improvement capability of advanced projectile based on course correction fuze concept.

    Science.gov (United States)

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications are proposed to correct the range and drift of artillery projectile like course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules, one is devoted to range correction (drag ring brake) and the second is devoted to drift correction (canard based-correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects on deflection of the projectile aerodynamic parameters. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. The deploying of the drag brake in early stage of trajectory results in large range correction. The correction occasion time can be predefined depending on required correction of range. On the other hand, the canard based-correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as canards reciprocate at the roll motion.

  1. Applications of maximally concentrating optics for solar energy collection

    Science.gov (United States)

    O'Gallagher, J.; Winston, R.

    1985-11-01

    A new family of optical concentrators based on a general nonimaging design principle for maximizing the geometric concentration, C, for radiation within a given acceptance half angle ±θ_a has been developed. The maximum limit exceeds by factors of 2 to 10 that attainable by systems using focusing optics. The wide acceptance angles permitted by these techniques give solar concentrators several unique advantages, including the elimination of the diurnal tracking requirement at intermediate concentrations (up to ~10×), collection of circumsolar and some diffuse radiation, and relaxed tolerances. Because of these advantages, these types of concentrators have applications in solar energy wherever concentration is desired, e.g. for a wide variety of both thermal and photovoltaic uses. The basic principles of nonimaging optical design are reviewed. Selected configurations for thermal collector applications are discussed and the use of nonimaging elements as secondary concentrators is illustrated in the context of higher concentration applications.
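The thermodynamic limit that this family of concentrators approaches is the sine law of concentration, a standard nonimaging-optics result (n is the refractive index of the medium in which the absorber is immersed):

```latex
C_{\mathrm{2D}}^{\max} = \frac{n}{\sin\theta_a},
\qquad
C_{\mathrm{3D}}^{\max} = \frac{n^2}{\sin^2\theta_a}.
```

For example, with n = 1 and an acceptance half angle θ_a ≈ 18°, the 3D limit is about 10.5×, of the same order as the non-tracking intermediate concentrations mentioned above; focusing optics fall short of these limits by the quoted factors of 2 to 10.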

  2. Social group utility maximization

    CERN Document Server

    Gong, Xiaowen; Yang, Lei; Zhang, Junshan

    2014-01-01

    This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy.This brief also investigates SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b

  3. Generalized Encoding CRDSA: Maximizing Throughput in Enhanced Random Access Schemes for Satellite

    Directory of Open Access Journals (Sweden)

    Manlio Bacco

    2014-12-01

    This work starts from an analysis of the literature on Random Access protocols with contention resolution, such as Contention Resolution Diversity Slotted Aloha (CRDSA), and introduces a possible enhancement, named Generalized Encoding Contention Resolution Diversity Slotted Aloha (GE-CRDSA). GE-CRDSA aims at improving the aggregated throughput when the system load is below 50%, exploiting the opportunity of transmitting an optimal combination of information and parity packets frame by frame. This paper shows the improvement in terms of throughput, by performing traffic estimation and an adaptive choice of information and parity rates, when a satellite network undergoes a variable traffic load profile.
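The baseline that CRDSA-style schemes are measured against is plain Slotted ALOHA, whose throughput is easy to reproduce by simulation. The toy Monte Carlo below (not the paper's simulator) recovers the textbook S(G) = G·e^{-G} curve, which peaks at 1/e ≈ 0.368; contention resolution is what pushes usable load well beyond this:

```python
import math
import random

def slotted_aloha_throughput(load, slots=200_000, seed=1):
    """Fraction of slots with exactly one transmission, arrivals ~ Poisson(load)."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(slots):
        # Sample the number of arrivals in this slot by Poisson inversion.
        arrivals, p = 0, math.exp(-load)
        u, cumulative = rng.random(), p
        while u > cumulative:
            arrivals += 1
            p *= load / arrivals
            cumulative += p
        if arrivals == 1:   # a slot succeeds only with exactly one transmission
            successes += 1
    return successes / slots

# Theory: S(G) = G * exp(-G), peaking at 1/e ~ 0.368 for G = 1.
print(round(slotted_aloha_throughput(1.0), 3))
```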

  4. Dynamical generation of maximally entangled states in two identical cavities

    International Nuclear Information System (INIS)

    Alexanian, Moorad

    2011-01-01

    The generation of entanglement between two identical coupled cavities, each containing a single three-level atom, is studied when the cavities exchange two coherent photons and are in the N=2,4 manifolds, where N represents the maximum number of photons possible in either cavity. The atom-photon state of each cavity is described by a qutrit for N=2 and a five-dimensional qudit for N=4. However, the conservation of the total value of N for the interacting two-cavity system limits the total number of states to only 4 states for N=2 and 8 states for N=4, rather than the usual 9 for two qutrits and 25 for two five-dimensional qudits. In the N=2 manifold, two-qutrit states dynamically generate four maximally entangled Bell states from initially unentangled states. In the N=4 manifold, two-qudit states dynamically generate maximally entangled states involving three or four states. The generation of these maximally entangled states occurs rather rapidly for large hopping strengths. The cavities function as a storage of periodically generated maximally entangled states.

  5. Gap processing for adaptive maximal poisson-disk sampling

    KAUST Repository

    Yan, Dongming; Wonka, Peter

    2013-01-01

    In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed.We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state-of-the-art in surface remeshing. © 2013 ACM.
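A minimal point of comparison for the gap-processing machinery described above is classical dart throwing, which produces a Poisson-disk set but cannot certify maximality. The sketch below (fixed radius, unit square; all parameters illustrative) shows the naive baseline that the paper's gap detection and regular-triangulation structures improve on:

```python
import random

def poisson_disk(radius=0.1, attempts=20_000, seed=7):
    """Naive dart throwing: accept a candidate iff it is >= radius from all points."""
    rng = random.Random(seed)
    points = []
    for _ in range(attempts):
        candidate = (rng.random(), rng.random())
        if all((candidate[0] - p[0]) ** 2 + (candidate[1] - p[1]) ** 2 >= radius ** 2
               for p in points):
            points.append(candidate)
    return points

samples = poisson_disk()
print(len(samples))
```

Dart throwing only approaches maximality asymptotically (late acceptances become vanishingly rare); detecting and filling the remaining gaps explicitly is precisely what guarantees a maximal set.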

  6. Gap processing for adaptive maximal poisson-disk sampling

    KAUST Repository

    Yan, Dongming

    2013-10-17

    In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed.We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state-of-the-art in surface remeshing. © 2013 ACM.

  7. Anatomy of maximal stop mixing in the MSSM

    International Nuclear Information System (INIS)

    Bruemmer, Felix; Kraml, Sabine; Kulkarni, Suchita

    2012-05-01

    A Standard Model-like Higgs near 125 GeV in the MSSM requires multi-TeV stop masses, or a near-maximal contribution to its mass from stop mixing. We investigate the maximal mixing scenario, and in particular its prospects for being realized in potentially realistic GUT models. We work out constraints on the possible GUT-scale soft terms, which we compare with what can be obtained from some well-known mechanisms of SUSY breaking mediation. Finally, we analyze two promising scenarios in detail, namely gaugino mediation and gravity mediation with non-universal Higgs masses.
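The scenario can be summarized by the schematic one-loop expression for the lightest Higgs mass that is standard in the MSSM literature (M_S denotes the average stop mass scale; the sketch omits subleading terms):

```latex
m_h^2 \simeq m_Z^2 \cos^2 2\beta
+ \frac{3\, m_t^4}{4\pi^2 v^2}
\left[ \ln\frac{M_S^2}{m_t^2}
+ \frac{X_t^2}{M_S^2}\left(1 - \frac{X_t^2}{12\, M_S^2}\right) \right],
\qquad X_t = A_t - \mu\cot\beta .
```

The stop-mixing contribution is maximized at |X_t| = √6 M_S, which is what "maximal mixing" refers to: it lifts m_h to near 125 GeV without pushing M_S into the multi-TeV range.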

  8. Anatomy of maximal stop mixing in the MSSM

    Energy Technology Data Exchange (ETDEWEB)

    Bruemmer, Felix [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Kraml, Sabine; Kulkarni, Suchita [CNRS/IN2P3, INPG, Grenoble (France). Laboratoire de Physique Subatomique et de Cosmologie

    2012-05-15

    A Standard Model-like Higgs near 125 GeV in the MSSM requires multi-TeV stop masses, or a near-maximal contribution to its mass from stop mixing. We investigate the maximal mixing scenario, and in particular its prospects for being realized in potentially realistic GUT models. We work out constraints on the possible GUT-scale soft terms, which we compare with what can be obtained from some well-known mechanisms of SUSY breaking mediation. Finally, we analyze two promising scenarios in detail, namely gaugino mediation and gravity mediation with non-universal Higgs masses.

  9. Nonlinear Impairment Compensation Using Expectation Maximization for PDM 16-QAM Systems

    DEFF Research Database (Denmark)

    Zibar, Darko; Winther, Ole; Franceschi, Niccolo

    2012-01-01

    We show experimentally that by using non-linear signal processing based algorithm, expectation maximization, nonlinear system tolerance can be increased by 2 dB. Expectation maximization is also effective in combating I/Q modulator nonlinearities and laser linewidth....
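The estimator family behind this result is expectation maximization for a Gaussian mixture: in the paper it is fit to the received 16-QAM constellation clusters, while the sketch below uses toy 1D data with two components (all data and initial values are illustrative):

```python
import numpy as np

# Toy data: two Gaussian clusters, standing in for two constellation points.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-1.0, 0.2, 500), rng.normal(1.0, 0.2, 500)])

mu = np.array([-0.5, 0.5])            # initial means
sigma = np.array([1.0, 1.0])          # initial standard deviations
weight = np.array([0.5, 0.5])         # initial mixing weights
for _ in range(50):
    # E-step: posterior responsibility of each component for each sample.
    dens = weight * np.exp(-(data[:, None] - mu) ** 2 / (2 * sigma ** 2)) \
           / (np.sqrt(2 * np.pi) * sigma)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    n_k = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / n_k)
    weight = n_k / len(data)

print(np.round(np.sort(mu), 2))       # means recovered near -1 and +1
```

Because the cluster centroids are re-estimated from the data themselves, the same machinery absorbs deterministic distortions such as I/Q modulator nonlinearity, which shifts the constellation points away from their nominal positions.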

  10. Quantum corrections to Drell-Yan production of Z bosons

    Energy Technology Data Exchange (ETDEWEB)

    Shcherbakova, Elena S.

    2011-08-15

    In this thesis, we present higher-order corrections to inclusive Z-boson hadroproduction via the Drell-Yan mechanism, h_1 + h_2 → Z + X, at large transverse momentum (q_T). Specifically, we include the QED, QCD and electroweak corrections of orders O(α_s α), O(α_s² α) and O(α_s α²). We work in the framework of the Standard Model and adopt the MS scheme of renormalization and factorization. The cross section of Z-boson production has been precisely measured at various hadron-hadron colliders, including the Tevatron and the LHC. Our calculations will help to calibrate and monitor the luminosity and to estimate backgrounds of the hadron-hadron interactions more reliably. Besides the total cross section, we study the distributions in the transverse momentum and the rapidity (y) of the Z boson, appropriate for Tevatron and LHC experimental conditions. Investigating the relative sizes of the various types of corrections by means of the factor K = σ_tot/σ_Born, we find that the QCD corrections of order α_s² α are largest in general and that the electroweak corrections of order α_s α² play an important role at large values of q_T, while the QED corrections at the same order are small, of order 2% or below. We also compare our results with the existing literature. We correct a few misprints in the original calculation of the QCD corrections, and find the published electroweak correction to be incomplete. Our results for the QED corrections are new. (orig.)

  11. Taxi trips distribution modeling based on Entropy-Maximizing theory: A case study in Harbin city-China

    Science.gov (United States)

    Tang, Jinjun; Zhang, Shen; Chen, Xinqiang; Liu, Fang; Zou, Yajie

    2018-03-01

    Understanding the origin-destination (OD) distribution of taxi trips is very important for improving the effects of transportation planning and enhancing the quality of taxi services. This study proposes a new method based on Entropy-Maximizing theory to model the OD distribution in Harbin city using large-scale taxi GPS trajectories. Firstly, a K-means clustering method is utilized to partition raw pick-up and drop-off locations into different zones, and trips are assumed to start from and end at zone centers. A generalized cost function is further defined by considering travel distance, time and fee between each OD pair. GPS data collected from more than 1000 taxis at an interval of 30 s during one month are divided into two parts: data from the first twenty days is treated as the training dataset and the last ten days as the testing dataset. The training dataset is used to calibrate the model while the testing dataset is used to validate it. Furthermore, three indicators, mean absolute error (MAE), root mean square error (RMSE) and mean percentage absolute error (MPAE), are applied to evaluate the training and testing performance of the Entropy-Maximizing model versus the Gravity model. The results demonstrate that the Entropy-Maximizing model is superior to the Gravity model. Findings of the study validate the feasibility of deriving OD distributions from taxi GPS data in urban systems.
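The three goodness-of-fit indicators named above have standard definitions, sketched below on made-up OD flows (the numbers are illustrative, not the Harbin data):

```python
def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root mean square error."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def mpae(obs, pred):
    """Mean percentage absolute error, in percent."""
    return 100.0 * sum(abs(o - p) / o for o, p in zip(obs, pred)) / len(obs)

observed = [120, 80, 60, 40]     # illustrative observed OD-pair flows
predicted = [110, 85, 55, 50]    # illustrative model predictions
print(mae(observed, predicted),
      round(rmse(observed, predicted), 2),
      round(mpae(observed, predicted), 2))
```

RMSE penalizes large per-pair errors more heavily than MAE, while MPAE weights errors relative to the observed flow, so the three together give a more rounded comparison of the two models than any single score.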

  12. A New Look at the Impact of Maximizing on Unhappiness: Two Competing Mediating Effects

    Directory of Open Access Journals (Sweden)

    Jiaxi Peng

    2018-02-01

    The current study aims to explore how the decision-making style of maximizing affects subjective well-being (SWB), focusing on confirmation of the mediating role of regret and the suppressing role of achievement motivation. A total of 402 Chinese undergraduate students participated in this study, in which they responded to the maximization, regret, and achievement motivation scales and SWB measures. Results suggested that maximizing significantly predicted SWB. Moreover, regret and achievement motivation (hope for success dimension) could completely mediate and suppress this effect. That is, two competing indirect pathways exist between maximizing and SWB. One pathway is through regret: maximizing typically leads one to regret, which negatively predicts SWB. Alternatively, maximizing can lead to high levels of hope for success, which are positively correlated with SWB. The findings offer a more complex way of thinking about the relationship between maximizing and SWB.

  13. Corrective Action Decision Document for Corrective Action Unit 568. Area 3 Plutonium Dispersion Sites, Nevada National Security Site, Nevada Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, Patrick [Nevada Field Office, Las Vegas, NV (United States). National Nuclear Security Administration

    2015-08-01

    The purpose of this Corrective Action Decision Document is to identify and provide the rationale for the recommendation of corrective action alternatives (CAAs) for the 14 CASs within CAU 568. Corrective action investigation (CAI) activities were performed from April 2014 through May 2015, as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 568: Area 3 Plutonium Dispersion Sites, Nevada National Security Site, Nevada; and in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices. The purpose of the CAI was to fulfill data needs as defined during the DQO process. The CAU 568 dataset of investigation results was evaluated based on a data quality assessment. This assessment demonstrated that the dataset is complete and acceptable for use in fulfilling the DQO data needs. Based on the evaluation of analytical data from the CAI, review of future and current operations at the 14 CASs, and the detailed and comparative analysis of the potential CAAs, the following corrective actions are recommended for CAU 568: • No further action is the preferred corrective action for CASs 03-23-17, 03-23-22, and 03-23-26. • Closure in place is the preferred corrective action for CAS 03-23-19; 03-45-01; the SE DCBs at CASs 03-23-20, 03-23-23, 03-23-31, 03-23-32, 03-23-33, and 03-23-34; and the Pascal-BHCA at CAS 03-23-31. • Clean closure is the preferred corrective action for CASs 03-08-04, 03-23-30, and 03-26-04; and the four well head covers at CASs 03-23-20, 03-23-23, 03-23-31, and 03-23-33.

  14. Analysis of dynamical corrections to baryon magnetic moments

    International Nuclear Information System (INIS)

    Ha, Phuoc; Durand, Loyal

    2003-01-01

    We present and analyze QCD corrections to the baryon magnetic moments in terms of the one-, two-, and three-body operators which appear in the effective field theory developed in our recent papers. The main corrections are extended Thomas-type corrections associated with the confining interactions in the baryon. We investigate the contributions of low-lying angular excitations to the baryon magnetic moments quantitatively and show that they are completely negligible. When the QCD corrections are combined with the nonquark model contributions of the meson loops, we obtain a model which describes the baryon magnetic moments within a mean deviation of 0.04 μ_N. The nontrivial interplay of the two types of corrections to the quark-model magnetic moments is analyzed in detail, and explains why the quark model is so successful. In the course of these calculations, we parametrize the general spin structure of the j = (1/2)^+ baryon wave functions in a form which clearly displays the symmetry properties and the internal angular momentum content of the wave functions, and allows us to use spin-trace methods to calculate the many spin matrix elements which appear in the expressions for the baryon magnetic moments. This representation may be useful elsewhere.

  15. Weighted divergence correction scheme and its fast implementation

    Science.gov (United States)

    Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun

    2017-05-01

    Forcing experimental volumetric velocity fields to satisfy mass conservation has been proved beneficial for improving the quality of measured data. A number of correction methods, including the divergence correction scheme (DCS), have been proposed to remove divergence errors from measured velocity fields. For tomographic particle image velocimetry (TPIV) data, the measurement uncertainty for the velocity component along the light-thickness direction is typically much larger than for the other two components. Such biased measurement errors weaken the performance of traditional correction methods. The paper proposes a variant of the existing DCS that adds weighting coefficients to the three velocity components, named the weighted DCS (WDCS). The generalized cross validation (GCV) method is employed to choose suitable weighting coefficients. A fast algorithm for DCS or WDCS is developed, making the correction process inexpensive to implement. WDCS has strong advantages when correcting velocity components with biased noise levels. Numerical tests validate the accuracy and efficiency of the fast algorithm, the effectiveness of the GCV method, and the advantages of WDCS. Lastly, DCS and WDCS are employed to process experimental velocity fields from a TPIV measurement of a turbulent boundary layer, showing that WDCS achieves better performance than DCS in improving some flow statistics.
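The divergence error that such schemes remove can be measured directly on a gridded field. A minimal sketch (not the paper's DCS/WDCS solver, just the discrete divergence diagnostic those schemes drive toward zero, evaluated on synthetic linear fields as assumed test data):

```python
import numpy as np

def divergence(u, v, w, dx):
    """Discrete divergence of a 3-D velocity field on a uniform grid."""
    return (np.gradient(u, dx, axis=0)
            + np.gradient(v, dx, axis=1)
            + np.gradient(w, dx, axis=2))

n, dx = 16, 0.1
x, y, z = np.meshgrid(*(np.arange(n) * dx,) * 3, indexing="ij")

d_clean = divergence(y, z, x, dx)  # solenoidal field (y, z, x): div = 0
d_noisy = divergence(x, y, z, dx)  # field (x, y, z): div = 3 everywhere
```

A correction scheme would subtract a (weighted) gradient field so that this diagnostic vanishes; WDCS additionally down-weights the noisy light-thickness component.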

  16. Generalized concatenated quantum codes

    International Nuclear Information System (INIS)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematic way of constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  17. Outer measures and weak type estimates of Hardy-Littlewood maximal operators

    Directory of Open Access Journals (Sweden)

    Terasawa Yutaka

    2006-01-01

    Full Text Available We will introduce the times modified centered and uncentered Hardy-Littlewood maximal operators on nonhomogeneous spaces for . We will prove that the times modified centered Hardy-Littlewood maximal operator is weak type bounded with constant when if the Radon measure of the space has "continuity" in some sense. In the proof, we will use the outer measure associated with the Radon measure. We will also prove other results of Hardy-Littlewood maximal operators on homogeneous spaces and on the real line by using outer measures.
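A finite, one-dimensional discrete analogue of the uncentered maximal operator gives some intuition for these averages (a brute-force sketch; this is not the modified operators on nonhomogeneous spaces studied in the paper):

```python
import numpy as np

def uncentered_maximal(f):
    """Discrete uncentered Hardy-Littlewood maximal function: at each index,
    the largest average of |f| over any window containing that index."""
    f = np.abs(np.asarray(f, dtype=float))
    n = len(f)
    prefix = np.concatenate([[0.0], np.cumsum(f)])
    m = np.zeros(n)
    for i in range(n):
        for a in range(i + 1):          # window start
            for b in range(i, n):       # window end
                avg = (prefix[b + 1] - prefix[a]) / (b - a + 1)
                m[i] = max(m[i], avg)
    return m

mf = uncentered_maximal([0, 0, 1, 0, 0])  # a discrete "point mass"
```

For the point mass the maximal function decays like 1/(distance + 1), the discrete trace of the weak type (1,1) behavior discussed above.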

  18. Reserve design to maximize species persistence

    Science.gov (United States)

    Robert G. Haight; Laurel E. Travis

    2008-01-01

    We develop a reserve design strategy to maximize the probability of species persistence predicted by a stochastic, individual-based, metapopulation model. Because the population model does not fit exact optimization procedures, our strategy involves deriving promising solutions from theory, obtaining promising solutions from a simulation optimization heuristic, and...

  19. Family of probability distributions derived from maximal entropy principle with scale invariant restrictions.

    Science.gov (United States)

    Sonnino, Giorgio; Steinbrecher, György; Cardinali, Alessandro; Sonnino, Alberto; Tlidi, Mustapha

    2013-01-01

    Using statistical thermodynamics, we derive a general expression for the stationary probability distribution of thermodynamic systems driven out of equilibrium by several thermodynamic forces. The local equilibrium is defined by imposing the minimum entropy production and the maximum entropy principle under the scale invariance restrictions. The obtained probability distribution presents a singularity that has an immediate physical interpretation in terms of intermittency models. The derived reference probability distribution function is interpreted as the time and ensemble average of the real physical one. A generic family of stochastic processes describing noise-driven intermittency, where the stationary density distribution coincides exactly with the one resulting from entropy maximization, is presented.
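The general mechanism of entropy maximization under a constraint can be sketched numerically. Assuming a discrete support and a fixed mean (a far simpler constraint set than the paper's scale-invariant thermodynamic restrictions), the maximizer is an exponential-family distribution whose multiplier can be found by bisection:

```python
import numpy as np

def maxent_with_mean(values, target_mean, tol=1e-10):
    """Maximum-entropy distribution on `values` subject to a fixed mean.
    The maximizer is the exponential family p_i ∝ exp(-lam * x_i); lam is
    located by bisection, since the mean is monotone decreasing in lam."""
    x = np.asarray(values, dtype=float)

    def dist(lam):
        w = np.exp(-lam * (x - x.mean()))  # centering for numerical stability
        return w / w.sum()

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dist(mid) @ x > target_mean:
            lo = mid  # mean too high: need a larger multiplier
        else:
            hi = mid
    return dist(0.5 * (lo + hi))

p = maxent_with_mean(np.arange(6), target_mean=1.0)
```

When the target mean equals the unconstrained mean, the multiplier is zero and the uniform distribution is recovered.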

  20. Postactivation potentiation biases maximal isometric strength assessment.

    Science.gov (United States)

    Lima, Leonardo Coelho Rabello; Oliveira, Felipe Bruno Dias; Oliveira, Thiago Pires; Assumpção, Claudio de Oliveira; Greco, Camila Coelho; Cardozo, Adalgiso Croscato; Denadai, Benedito Sérgio

    2014-01-01

    Postactivation potentiation (PAP) is known to enhance force production. Maximal isometric strength assessment protocols usually consist of two or more maximal voluntary isometric contractions (MVCs). The objective of this study was to determine if PAP would influence isometric strength assessment. Healthy male volunteers (n = 23) performed two five-second MVCs separated by a 180-second interval. Changes in isometric peak torque (IPT), time to achieve it (tPTI), contractile impulse (CI), root mean square of the electromyographic signal during PTI (RMS), and rate of torque development (RTD), in different intervals, were measured. Significant increases in IPT (240.6 ± 55.7 N·m versus 248.9 ± 55.1 N·m), RTD (746 ± 152 N·m·s⁻¹ versus 727 ± 158 N·m·s⁻¹), and RMS (59.1 ± 12.2% RMSMAX versus 54.8 ± 9.4% RMSMAX) were found on the second MVC. tPTI decreased significantly on the second MVC (2373 ± 1200 ms versus 2784 ± 1226 ms). We conclude that a first MVC leads to PAP that elicits significant enhancements in strength-related variables of a second MVC performed 180 seconds later. If disregarded, this phenomenon might bias maximal isometric strength assessment, overestimating some of these variables.

  1. Determining spherical lens correction for astronaut training underwater.

    Science.gov (United States)

    Porter, Jason; Gibson, C Robert; Strauss, Samuel

    2011-09-01

    To develop a model that will accurately predict the distance spherical lens correction needed to be worn by National Aeronautics and Space Administration astronauts while training underwater. The replica space suit's helmet contains curved visors that induce refractive power when submersed in water. Anterior surface powers and thicknesses were measured for the helmet's protective and inside visors. The impact of each visor on the helmet's refractive power in water was analyzed using thick lens calculations and Zemax optical design software. Using geometrical optics approximations, a model was developed to determine the optimal distance spherical power needed to be worn underwater based on the helmet's total induced spherical power underwater and the astronaut's manifest spectacle plane correction in air. The validity of the model was tested using data from both eyes of 10 astronauts who trained underwater. The helmet's visors induced a total power of -2.737 D when placed underwater. The required underwater spherical correction (FW) was linearly related to the spectacle plane spherical correction in air (FAir): FW = FAir + 2.356 D. The mean magnitude of the difference between the actual correction worn underwater and the calculated underwater correction was 0.20 ± 0.11 D. The actual and calculated values were highly correlated (r = 0.971) with 70% of eyes having a difference in magnitude of astronauts. The model accurately predicts the actual values worn underwater and can be applied (more generally) to determine a suitable spectacle lens correction to be worn behind other types of masks when submerged underwater.
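The reported linear relation F_W = F_Air + 2.356 D is straightforward to apply; a minimal sketch (the function name is ours):

```python
def underwater_correction(spectacle_power_air_d):
    """Distance spherical power (diopters) to wear under water, from the
    fitted linear model F_W = F_Air + 2.356 D reported above (the helmet
    visors themselves contribute -2.737 D when submerged)."""
    return spectacle_power_air_d + 2.356

fw = underwater_correction(-1.50)  # e.g. a -1.50 D spectacle wearer
```

Note the additive offset (2.356 D) is the fitted constant and differs in magnitude from the raw visor power (-2.737 D) because of the geometrical-optics effects analyzed in the study.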

  2. Generalized Uncertainty Principle and Black Hole Entropy of Higher-Dimensional de Sitter Spacetime

    International Nuclear Information System (INIS)

    Zhao Haixia; Hu Shuangqi; Zhao Ren; Li Huaifan

    2007-01-01

    Recently, much attention has been devoted to resolving the quantum corrections to the Bekenstein-Hawking black hole entropy. In particular, many researchers have taken an interest in the coefficient of the logarithmic term of the black hole entropy correction. In this paper, we calculate the correction to the black hole entropy by utilizing the generalized uncertainty principle and obtain the correction term it induces. Because our calculation assumes that the Bekenstein-Hawking area theorem remains valid after considering the generalized uncertainty principle, we derive that the coefficient of the logarithmic correction term is positive. This result differs from previously known results. Our method is valid not only for four-dimensional spacetimes but also for higher-dimensional spacetimes. Throughout, the physical idea is clear and the calculation is simple. It offers a new way of studying the entropy correction of complicated spacetimes.

  3. A new approach for beam hardening correction based on the local spectrum distributions

    International Nuclear Information System (INIS)

    Rasoulpour, Naser; Kamali-Asl, Alireza; Hemmati, Hamidreza

    2015-01-01

    The energy dependence of material absorption and the polychromatic nature of x-ray beams in Computed Tomography (CT) cause a phenomenon called “beam hardening”. The purpose of this study is to provide a novel approach for Beam Hardening (BH) correction. This approach is based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) at various depths of a phantom. The proposed method includes two steps. Firstly, the hardened spectra at various depths of the phantom (the LSDs) are estimated based on the Expectation Maximization (EM) algorithm for an arbitrary thickness interval of known materials in the phantom. The performance of the LSD estimation technique is evaluated by applying random Gaussian noise to transmission data. Then, the linear attenuation coefficients with respect to the mean energy of the LSDs are obtained. Secondly, a correction function based on the calculated attenuation coefficients is derived in order to correct polychromatic raw data. Since a correction function has been used for the conversion of the polychromatic data to monochromatic data, the effect of BH in the proposed reconstruction is reduced in comparison with polychromatic reconstruction. The proposed approach has been assessed in phantoms which involve less than two materials, but the correction function has been extended for use in phantoms constructed of more than two materials. The relative mean energy difference in the LSD estimations based on the noise-free transmission data was less than 1.5%. It also shows an acceptable value when random Gaussian noise is applied to the transmission data. The amount of cupping artifact in the proposed reconstruction method has been effectively reduced, and the proposed reconstruction profile is more uniform than the polychromatic reconstruction profile. - Highlights: • A novel Beam Hardening (BH) correction approach was described. • A new concept named Local Spectrum Distributions (LSDs) was used to BH

  4. A new approach for beam hardening correction based on the local spectrum distributions

    Energy Technology Data Exchange (ETDEWEB)

    Rasoulpour, Naser; Kamali-Asl, Alireza, E-mail: a_kamali@sbu.ac.ir; Hemmati, Hamidreza

    2015-09-11

    The energy dependence of material absorption and the polychromatic nature of x-ray beams in Computed Tomography (CT) cause a phenomenon called “beam hardening”. The purpose of this study is to provide a novel approach for Beam Hardening (BH) correction. This approach is based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) at various depths of a phantom. The proposed method includes two steps. Firstly, the hardened spectra at various depths of the phantom (the LSDs) are estimated based on the Expectation Maximization (EM) algorithm for an arbitrary thickness interval of known materials in the phantom. The performance of the LSD estimation technique is evaluated by applying random Gaussian noise to transmission data. Then, the linear attenuation coefficients with respect to the mean energy of the LSDs are obtained. Secondly, a correction function based on the calculated attenuation coefficients is derived in order to correct polychromatic raw data. Since a correction function has been used for the conversion of the polychromatic data to monochromatic data, the effect of BH in the proposed reconstruction is reduced in comparison with polychromatic reconstruction. The proposed approach has been assessed in phantoms which involve less than two materials, but the correction function has been extended for use in phantoms constructed of more than two materials. The relative mean energy difference in the LSD estimations based on the noise-free transmission data was less than 1.5%. It also shows an acceptable value when random Gaussian noise is applied to the transmission data. The amount of cupping artifact in the proposed reconstruction method has been effectively reduced, and the proposed reconstruction profile is more uniform than the polychromatic reconstruction profile. - Highlights: • A novel Beam Hardening (BH) correction approach was described. • A new concept named Local Spectrum Distributions (LSDs) was used to BH
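The nonlinearity that beam hardening introduces, and the classical single-material linearization that correction methods like this one generalize, can be illustrated with an assumed two-energy spectrum (all numbers below are illustrative, not from the paper):

```python
import numpy as np

# Assumed two-energy spectrum: weights and attenuation coefficients
weights = np.array([0.5, 0.5])
mu = np.array([0.4, 0.2])        # 1/cm at the two energies (illustrative)
mu_mono = 0.3                    # target monochromatic coefficient, 1/cm

t = np.linspace(0.0, 10.0, 200)  # material thickness, cm
intensity = (weights * np.exp(-np.outer(t, mu))).sum(axis=1)
p_poly = -np.log(intensity)      # measured polychromatic projection data
p_mono = mu_mono * t             # ideal monochromatic projection data

# Linearization: fit a polynomial mapping polychromatic projections onto
# monochromatic ones, then apply it as the correction function
coeffs = np.polyfit(p_poly, p_mono, 5)
p_corr = np.polyval(coeffs, p_poly)

err_before = np.max(np.abs(p_poly - p_mono))
err_after = np.max(np.abs(p_corr - p_mono))
```

The hardened projection bends below the linear monochromatic line as thickness grows (the origin of the cupping artifact); the fitted correction function largely removes this bend.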

  5. A Maximum-Likelihood Method to Correct for Allelic Dropout in Microsatellite Data with No Replicate Genotypes

    Science.gov (United States)

    Wang, Chaolong; Schroeder, Kari B.; Rosenberg, Noah A.

    2012-01-01

    Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy–Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. 
Because the data sets
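The expectation-maximization idea can be sketched on a deliberately simplified one-locus model (a single dropout rate, Hardy-Weinberg genotypes, no inbreeding or locus/sample effects; much simpler than the paper's model):

```python
def em_dropout(n_aa, n_ab, n_bb, iters=500):
    """EM on a toy one-locus dropout model: true genotypes are Hardy-Weinberg
    with allele-A frequency p; a true heterozygote loses one allele with
    probability beta and is then recorded as AA or BB with equal chance."""
    n = n_aa + n_ab + n_bb
    p, beta = 0.5, 0.1  # starting guesses
    for _ in range(iters):
        het = p * (1 - p)  # beta*het of true-AB mass is mis-read as AA (and as BB)
        # E-step: expected hidden heterozygotes among observed homozygotes
        h_a = n_aa * beta * het / (p * p + beta * het)
        h_b = n_bb * beta * het / ((1 - p) ** 2 + beta * het)
        true_ab = n_ab + h_a + h_b
        # M-step: re-estimate allele frequency and dropout rate
        p = (2 * (n_aa - h_a) + true_ab) / (2 * n)
        beta = (h_a + h_b) / true_ab
    return p, beta

# Noise-free expected genotype counts for p = 0.6, beta = 0.3
p0, b0, N = 0.6, 0.3, 10000
het0 = p0 * (1 - p0)
p_hat, b_hat = em_dropout(N * (p0 ** 2 + b0 * het0),
                          N * 2 * het0 * (1 - b0),
                          N * ((1 - p0) ** 2 + b0 * het0))
```

On these idealized expected counts the EM iteration recovers both the allele frequency and the dropout rate from a single set of nonreplicated genotype counts, the core point of the method.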

  6. Salinity effect on the maximal growth temperature of some bacteria isolated from marine enviroments.

    Science.gov (United States)

    Stanley, S O; Morita, R Y

    1968-01-01

    Salinity of the growth medium was found to have a marked effect on the maximal growth temperature of four bacteria isolated from marine sources. Vibrio marinus MP-1 had a maximal growth temperature of 21.2 C at a salinity of 35‰ and a maximal growth temperature of 10.5 C at a salinity of 7‰, the lowest salinity at which it would grow. This effect was shown to be due to the presence of various cations in the medium. The order of effectiveness of cations in restoring the normal maximal growth temperature, when added to dilute seawater, was Na⁺ > Li⁺ > Mg²⁺ > K⁺ > Rb⁺ > NH₄⁺. The anions tested, with the exception of SO₄²⁻, had no marked effect on the maximal growth temperature response. In a completely defined medium, the highest maximal growth temperature was 20.0 C at 0.40 m NaCl. A decrease in the maximal growth temperature was observed at both low and high concentrations of NaCl.

  7. Higher dimensional maximally symmetric stationary manifold with pure gauge condition and codimension one flat submanifold

    International Nuclear Information System (INIS)

    Wiliardy, Abednego; Gunara, Bobby Eka

    2016-01-01

    An n-dimensional flat manifold N is embedded into an (n+1)-dimensional stationary manifold M. The metric of M is derived from a general form of stationary manifold. By making several assumptions, namely 1) that the ambient manifold M is a maximally symmetric space satisfying a pure gauge condition, and 2) that the submanifold is flat, we find the solution that satisfies the Ricci scalar of N. Moreover, we determine whether the solution is compatible with the Ricci and Riemann tensors of the manifold N, depending on the dimension. (paper)

  8. Testing and inference in nonlinear cointegrating vector error correction models

    DEFF Research Database (Denmark)

    Kristensen, D.; Rahbek, A.

    2013-01-01

    We analyze estimators and tests for a general class of vector error correction models that allows for asymmetric and nonlinear error correction. For a given number of cointegration relationships, general hypothesis testing is considered, where testing for linearity is of particular interest. Under the null of linearity, parameters of nonlinear components vanish, leading to a nonstandard testing problem. We apply so-called sup-tests to resolve this issue, which requires development of new (uniform) functional central limit theory and results for convergence of stochastic integrals. We provide a full asymptotic theory for estimators and test statistics. The derived asymptotic results prove to be nonstandard compared to results found elsewhere in the literature due to the impact of the estimated cointegration relations. This complicates implementation of tests, motivating the introduction of bootstrap…

  9. A THEORY OF MAXIMIZING SENSORY INFORMATION

    NARCIS (Netherlands)

    Hateren, J.H. van

    1992-01-01

    A theory is developed on the assumption that early sensory processing aims at maximizing the information rate in the channels connecting the sensory system to more central parts of the brain, where it is assumed that these channels are noisy and have a limited dynamic range. Given a stimulus power

  10. Ehrenfest's Lottery--Time and Entropy Maximization

    Science.gov (United States)

    Ashbaugh, Henry S.

    2010-01-01

    Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
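The lottery's relaxation toward the entropy-maximizing uniform distribution can be shown with its expected-value dynamics (a sketch; marble counts are treated as continuous expectations rather than simulated stochastically, and the urn/marble numbers are arbitrary):

```python
import math

def step(counts):
    """Expected urn counts after one lottery draw: a uniformly random marble
    is moved to a uniformly random urn."""
    m, k = sum(counts), len(counts)
    return [x - x / m + 1.0 / k for x in counts]

def entropy(counts):
    """Shannon entropy of the urn-occupancy distribution."""
    m = sum(counts)
    return -sum(x / m * math.log(x / m) for x in counts if x > 0)

counts = [12.0, 0.0, 0.0, 0.0]  # all marbles start in one urn
history = [entropy(counts)]
for _ in range(200):
    counts = step(counts)
    history.append(entropy(counts))
```

The marble total is conserved at every step while the entropy rises monotonically to its maximum, log(number of urns), exactly the equilibrium-as-entropy-maximum picture the thought experiment teaches.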

  11. A Model of College Tuition Maximization

    Science.gov (United States)

    Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.

    2009-01-01

    This paper develops a series of models for optimal tuition pricing for private colleges and universities. The university is assumed to be a profit-maximizing, price-discriminating monopolist. Students' enrollment decisions are stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…

  12. Relativistic corrections for the conventional, classical Nyquist theorem

    International Nuclear Information System (INIS)

    Theimer, O.; Dirk, E.H.

    1983-01-01

    New expressions for the Nyquist theorem are derived under the condition in which the random thermal speed of electrons, in a system of charged particles, can approach the speed of light. Both the case in which the electrons have no drift velocity relative to the ions or neutral particles and the case in which drift occurs are investigated. In both instances, the new expressions for the Nyquist theorem are found to contain relativistic correction terms; however, for electron temperatures T ≈ 10⁹ K and drift velocity magnitudes w ≈ 0.5c, where c is the speed of light, the effects of these correction terms are generally small. The derivation of these relativistic corrections is carried out by means of procedures developed in an earlier work. A relativistic distribution function, which incorporates a constant drift velocity with a random thermal velocity for a given particle species, is developed.

  13. On Line-Elements and Radii: A Correction

    Directory of Open Access Journals (Sweden)

    Crothers S. J.

    2007-04-01

    Full Text Available Using a manifold with boundary various line-elements have been proposed as solutions to Einstein’s gravitational field. It is from such line-elements that black holes, expansion of the Universe, and big bang cosmology have been alleged. However, it has been proved that black holes, expansion of the Universe, and big bang cosmology are not consistent with General Relativity. In a previous paper disproving the black hole theory, the writer made an error which, although minor and having no effect on the conclusion that black holes are inconsistent with General Relativity, is corrected herein for the record.

  14. Complete one-loop electroweak corrections to ZZZ production at the ILC

    International Nuclear Information System (INIS)

    Su Jijuan; Ma Wengan; Zhang Renyou; Wang Shaoming; Guo Lei

    2008-01-01

    We study the complete O(α_ew) electroweak (EW) corrections to the production of three Z^0 bosons in the framework of the standard model (SM) at the ILC. The leading-order and the EW next-to-leading-order corrected cross sections are presented, and their dependence on the colliding energy √(s) and Higgs-boson mass m_H is analyzed. We also investigate the LO and one-loop EW corrected distributions of the transverse momentum of the final Z^0 boson and the invariant mass of the Z^0 Z^0 pair. Our numerical results show that the EW one-loop correction generally suppresses the tree-level cross section, and the relative correction with m_H = 120 GeV (150 GeV) varies between -15.8% (-13.9%) and -7.5% (-6.2%) when √(s) goes up from 350 GeV to 1 TeV.

  15. Wall correction model for wind tunnels with open test section

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Shen, Wen Zhong; Mikkelsen, Robert Flemming

    2006-01-01

    In the paper we present a correction model for wall interference on rotors of wind turbines or propellers in wind tunnels. The model, which is based on a one-dimensional momentum approach, is validated against results from CFD computations using a generalized actuator disc principle. In the model … good agreement with the CFD computations, demonstrating that one-dimensional momentum theory is a reliable way of predicting corrections for wall interference in wind tunnels with closed as well as open cross sections…

  16. Output power maximization of low-power wind energy conversion systems revisited: Possible control solutions

    Energy Technology Data Exchange (ETDEWEB)

    Vlad, Ciprian; Munteanu, Iulian; Bratcu, Antoneta Iuliana; Ceanga, Emil [' ' Dunarea de Jos' ' University of Galati, 47, Domneasca, 800008-Galati (Romania)

    2010-02-15

    This paper discusses the problem of output power maximization for low-power wind energy conversion systems operated at partial load. These systems are generally based on multi-polar permanent-magnet synchronous generators, which exhibit significant efficiency variations over the operating range. Unlike in high-power systems, whose mechanical-to-electrical conversion efficiency is high and practically does not shift the global optimum, the global conversion efficiency of low-power systems is affected by the generator behavior, and optimizing the electrical power is no longer equivalent to optimizing the mechanical power. The system efficiency has been analyzed by using both the locus of maxima of the mechanical power versus rotational speed characteristics and the locus of maxima of the delivered electrical power versus rotational speed characteristics. The experimental investigation was carried out by using a torque-controlled generator taken from a real-world wind turbine, coupled to a physically simulated wind turbine rotor. The experimental results show that the steady-state performance of the conversion system is strongly determined by the generator behavior. Several control solutions aiming at maximizing the energy efficiency are envisaged and thoroughly compared through experimental results. (author)
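The central point, that generator efficiency shifts the optimum away from the mechanical-power peak, can be illustrated with assumed power and efficiency curves and a simple perturb-and-observe hill climb on the electrical power (all curves and numbers below are hypothetical, not from the paper):

```python
def p_mech(omega):
    """Assumed mechanical power curve, peaking at omega = 10 rad/s."""
    return max(0.0, 1000.0 - 8.0 * (omega - 10.0) ** 2)

def efficiency(omega):
    """Assumed generator efficiency, falling off with speed."""
    return 0.9 - 0.02 * omega

def p_elec(omega):
    """Electrical output: mechanical power times generator efficiency."""
    return p_mech(omega) * efficiency(omega)

# Perturb-and-observe: step the speed set-point uphill on electrical power
omega, delta, direction = 6.0, 0.05, 1.0
prev = p_elec(omega)
for _ in range(2000):
    omega += direction * delta
    cur = p_elec(omega)
    if cur < prev:
        direction = -direction  # power fell: reverse the perturbation
    prev = cur
```

With these curves the hill climb settles near 8.3 rad/s, well below the 10 rad/s mechanical-power peak: tracking mechanical power alone would leave electrical output on the table, which is the abstract's argument.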

  17. Output power maximization of low-power wind energy conversion systems revisited: Possible control solutions

    International Nuclear Information System (INIS)

    Vlad, Ciprian; Munteanu, Iulian; Bratcu, Antoneta Iuliana; Ceanga, Emil

    2010-01-01

    This paper discusses the problem of output power maximization for low-power wind energy conversion systems operated at partial load. These systems are generally based on multi-polar permanent-magnet synchronous generators, which exhibit significant efficiency variations over the operating range. Unlike in high-power systems, whose mechanical-to-electrical conversion efficiency is high and practically does not shift the global optimum, the global conversion efficiency of low-power systems is affected by the generator behavior, and optimizing the electrical power is no longer equivalent to optimizing the mechanical power. The system efficiency has been analyzed by using both the locus of maxima of the mechanical power versus rotational speed characteristics and the locus of maxima of the delivered electrical power versus rotational speed characteristics. The experimental investigation was carried out by using a torque-controlled generator taken from a real-world wind turbine, coupled to a physically simulated wind turbine rotor. The experimental results show that the steady-state performance of the conversion system is strongly determined by the generator behavior. Several control solutions aiming at maximizing the energy efficiency are envisaged and thoroughly compared through experimental results.

  18. Evaluation of the clinical maxim: "If it ain't broke, don't fix it".

    Science.gov (United States)

    Howell-Duffy, Chris; Hrynchak, Patricia K; Irving, Elizabeth L; Mouat, Graham S V; Elliott, David B

    2012-01-01

    A significant number of patients return to optometric practice dissatisfied with their spectacles. An important question is whether any of these cases are preventable. There are several different clinical maxims that are used to modify the subjective refraction when determining the refractive prescription. These maxims aim to improve patient comfort and adaptation and thereby reduce patient dissatisfaction with new spectacles. They are not based on research evidence, but rather on expert opinion gained from clinical experience. The aim of this study was to retrospectively analyze a large number of case records of dissatisfied patients to assess the possible usefulness of the prescribing maxim "if it ain't broke, don't fix it." Three hundred eighteen non-tolerance cases from a university-based Canadian optometric clinic were categorized by a focus group of optometrists. Three prescribing categories were defined and comprised cases in which application of the proposed maxim may have prevented the recheck eye examination; a more limited application of the maxim for one working distance may have been appropriate; and finally scenarios in which the maxim did not work in that the practitioner was judged to have initially followed the maxim, yet patient dissatisfaction was still reported. The remaining unallocated records comprised prescribing situations outside the scope of this study. Approximately 32% of non-tolerance cases were judged to have been preventable by use of the proposed maxim. Furthermore, an additional 10% reduction in recheck cases may have been possible by a more liberal interpretation of the maxim. Conversely, 4% of cases were deemed to comprise scenarios in which the maxim was followed yet the patient returned later to report problems with their spectacles. The prescribing maxim "if it ain't broke, don't fix it" appears to have a role in reducing recheck eye examinations and improving patient satisfaction with new spectacles.

  19. A fractional optimal control problem for maximizing advertising efficiency

    OpenAIRE

    Igor Bykadorov; Andrea Ellero; Stefania Funari; Elena Moretti

    2007-01-01

    We propose an optimal control problem to model the dynamics of the communication activity of a firm with the aim of maximizing its efficiency. We assume that the advertising effort undertaken by the firm contributes to increase the firm's goodwill and that the goodwill affects the firm's sales. The aim is to find the advertising policies in order to maximize the firm's efficiency index which is computed as the ratio between "outputs" and "inputs" properly weighted; the outputs are represented...

  20. Flux compactifications and generalized geometries

    International Nuclear Information System (INIS)

    Grana, Mariana

    2006-01-01

    Following the lectures given at the CERN Winter School 2006, we present a pedagogical overview of flux compactifications and generalized geometries, concentrating on closed string fluxes in type II theories. We start by reviewing the supersymmetric flux configurations with maximally symmetric four-dimensional spaces. We then discuss the no-go theorems (and their evasion) for compactifications with fluxes. We analyse the resulting four-dimensional effective theories for Calabi-Yau and Calabi-Yau orientifold compactifications, concentrating on the flux-induced superpotentials. We discuss the generic mechanism of moduli stabilization and illustrate it with two examples: the conifold in IIB and a T^6/(Z_3 × Z_3) torus in IIA. We finish by studying the effective action and flux vacua for generalized geometries in the context of generalized complex geometry.