WorldWideScience

Sample records for minimum theorem based

  1. A singularity theorem based on spatial averages

    Journal article, July 2007, pp. 31–47. Only fragments of the abstract survive extraction: the author presents a result which confirms, at least partially, a singularity theorem based on spatial averages, with a detailed analysis of how the model fits in and a discussion of the statement that the spatial average ... Financial support under grant FIS2004-01626, among others, is acknowledged.

  2. A Coordinate-Based Proof of the Scallop Theorem

    Ishimoto, Kenta; Yamada, Michio

    2012-01-01

    We reconsider fluid dynamics for a self-propulsive swimmer in Stokes flow. With an exact definition of deformation of a swimmer, a coordinate-based proof is first given to Purcell's scallop theorem including the body rotation.

  3. A proof of the theorem regarding the distribution of lift over the span for minimum induced drag

    Durand, W F

    1931-01-01

    The proof of the theorem that the elliptical distribution of lift over the span is that which will give rise to the minimum induced drag has been given in a variety of ways, generally speaking too difficult to be readily followed by the graduate of the average good technical school of the present day. In the form of proof this report makes an effort to bring the matter more readily within the grasp of this class of readers.
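
    As a compact statement of the result being proved (standard lifting-line notation, not taken from the report): for a wing of span b carrying an elliptical circulation distribution,

    $$\Gamma(y) = \Gamma_0\sqrt{1-\left(\frac{2y}{b}\right)^2}, \qquad C_{D,i} = \frac{C_L^2}{\pi\,AR},$$

    and among all span loadings producing the same total lift, the elliptical one minimizes the induced drag.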

  4. Fourier diffraction theorem for diffusion-based thermal tomography

    Baddour, Natalie

    2006-01-01

    There has been much recent interest in thermal imaging as a method of non-destructive testing and for non-invasive medical imaging. The basic idea of applying heat or cold to an area and observing the resulting temperature change with an infrared camera has led to the development of rapid and relatively inexpensive inspection systems. However, the main drawback to date has been that such an approach provides mainly qualitative results. In order to advance the quantitative results that are possible via thermal imaging, there is interest in applying techniques and algorithms from conventional tomography. Many tomography algorithms are based on the Fourier diffraction theorem, which is inapplicable to thermal imaging without suitable modification to account for the attenuative nature of thermal waves. In this paper, the Fourier diffraction theorem for thermal tomography is derived and discussed. The intent is for this thermal-diffusion based Fourier diffraction theorem to form the basis of tomographic reconstruction algorithms for quantitative thermal imaging
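
    For orientation, the attenuative character referred to above follows from the heat diffusion equation (standard derivation; symbols ours): a time-harmonic field T ∝ e^{i(kx − ωt)} in a medium of thermal diffusivity α satisfies

    $$\frac{\partial T}{\partial t} = \alpha\nabla^2 T \;\Rightarrow\; k^2 = \frac{i\omega}{\alpha}, \qquad k = (1+i)\sqrt{\frac{\omega}{2\alpha}},$$

    so a thermal wave decays on the same length scale as its oscillation, which is what the modified Fourier diffraction theorem must account for.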

  5. Geometry of the Adiabatic Theorem

    Lobo, Augusto Cesar; Ribeiro, Rafael Antunes; Ribeiro, Clyffe de Assis; Dieguez, Pedro Ruas

    2012-01-01

    We present a simple and pedagogical derivation of the quantum adiabatic theorem for two-level systems (a single qubit) based on geometrical structures of quantum mechanics developed by Anandan and Aharonov, among others. We have chosen to use only the minimum geometric structure needed for the understanding of the adiabatic theorem for this case.…
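
    The central geometric fact in this setting (a standard result, stated in our notation): a spin-1/2 eigenstate transported adiabatically around a closed loop C on the Bloch sphere acquires, up to a sign convention, the geometric (Berry) phase

    $$\gamma(C) = -\tfrac{1}{2}\,\Omega(C),$$

    where Ω(C) is the solid angle the loop subtends at the centre of the sphere.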

  6. Modern thermodynamics. Based on the extended Carnot theorem

    Wang, Jitao [Fudan Univ., Shanghai (China). Microelectronics Dept.]

    2011-07-01

    "Modern Thermodynamics - Based on the Extended Carnot Theorem" provides comprehensive definitions and mathematical expressions of both classical and modern thermodynamics. The goal is to develop the fundamental theory on an extended Carnot theorem without incorporating any extraneous assumptions. In particular, it offers a fundamental thermodynamic and calculational methodology for the synthesis of low-pressure diamonds. It also discusses many "abnormal phenomena", such as spiral reactions, cyclic reactions, chemical oscillations, low-pressure carat-size diamond growth, biological systems, and more. The book is intended for chemists and physicists working in thermodynamics, chemical thermodynamics, phase diagrams, biochemistry and complex systems, as well as graduate students in these fields. Jitao Wang is a professor emeritus at Fudan University, Shanghai, China. (orig.)

  7. Modern Thermodynamics Based on the Extended Carnot Theorem

    Wang, Jitao

    2012-01-01

    "Modern Thermodynamics- Based on the Extended Carnot Theorem" provides comprehensive definitions and mathematical expressions of both classical and modern thermodynamics. The goal is to develop the fundamental theory on an extended Carnot theorem without incorporating any extraneous assumptions. In particular, it offers a fundamental thermodynamic and calculational methodology for the synthesis of low-pressure diamonds. It also discusses many "abnormal phenomena", such as spiral reactions, cyclic reactions, chemical oscillations, low-pressure carat-size diamond growth, biological systems, and more. The book is intended for chemists and physicists working in thermodynamics, chemical thermodynamics, phase diagrams, biochemistry and complex systems, as well as graduate students in these fields. Jitao Wang is a professor emeritus at Fudan University, Shanghai, China.

  8. Learning in neural networks based on a generalized fluctuation theorem

    Hayakawa, Takashi; Aoyagi, Toshio

    2015-11-01

    Information maximization has been investigated as a possible mechanism of learning governing the self-organization that occurs within the neural systems of animals. Within the general context of models of neural systems bidirectionally interacting with environments, however, the role of information maximization remains to be elucidated. For bidirectionally interacting physical systems, universal laws describing the fluctuation they exhibit and the information they possess have recently been discovered. These laws are termed fluctuation theorems. In the present study, we formulate a theory of learning in neural networks bidirectionally interacting with environments based on the principle of information maximization. Our formulation begins with the introduction of a generalized fluctuation theorem, employing an interpretation appropriate for the present application, which differs from the original thermodynamic interpretation. We analytically and numerically demonstrate that the learning mechanism presented in our theory allows neural networks to efficiently explore their environments and optimally encode information about them.

  9. Projection-slice theorem based 2D-3D registration

    van der Bom, M. J.; Pluim, J. P. W.; Homan, R.; Timmer, J.; Bartels, L. W.

    2007-03-01

    In X-ray guided procedures, the surgeon or interventionalist is dependent on his or her knowledge of the patient's specific anatomy and the projection images acquired during the procedure by a rotational X-ray source. Unfortunately, these X-ray projections fail to give information on the patient's anatomy in the dimension along the projection axis. It would be very profitable to provide the surgeon or interventionalist with a 3D insight of the patient's anatomy that is directly linked to the X-ray images acquired during the procedure. In this paper we present a new robust 2D-3D registration method based on the Projection-Slice Theorem. This theorem gives us a relation between the pre-operative 3D data set and the interventional projection images. Registration is performed by minimizing a translation invariant similarity measure that is applied to the Fourier transforms of the images. The method was tested by performing multiple exhaustive searches on phantom data of the Circle of Willis and on a post-mortem human skull. Validation was performed visually by comparing the test projections to the ones that corresponded to the minimal value of the similarity measure. The Projection-Slice Theorem Based method was shown to be very effective and robust, and provides capture ranges up to 62 degrees. Experiments have shown that the method is capable of retrieving similar results when translations are applied to the projection images.
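
    A minimal numerical check of the projection-slice relation the method builds on (our Python illustration, not the authors' registration code): the 1D Fourier transform of a projection of a 2D data set equals the central slice of its 2D Fourier transform.

    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))       # stand-in for 2D image data

    projection = img.sum(axis=0)     # project along y onto the x-axis
    lhs = np.fft.fft(projection)     # 1D FFT of the projection
    rhs = np.fft.fft2(img)[0, :]     # ky = 0 (central) slice of the 2D FFT

    print(np.allclose(lhs, rhs))     # True: the relation holds exactly here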

  10. Minimum Entropy Generation Theorem Investigation and Optimization of Metal Hydride Alloy Hydrogen Storage

    Chi-Chang Wang

    2014-05-01

    The main purpose of this paper is to carry out numerical simulation of hydrogen storage by exothermic reaction in a metal hydride (LaNi5 alloy) container. In addition to accelerating the reaction of the internal metal hydride with a water-cooled internal control tube, the principle of entropy generation from the second law of thermodynamics is applied. COMSOL Multiphysics 4.3a is used for finite-element simulation of a two-dimensional axisymmetric model. On the premise that the internal control tube parameters, the radius ri and the flow rate U, meet the metal hydride saturation time, the effect of the two parameters on the tank's reaction process, entropy distribution, and accumulated entropy is observed. The internal tube parameter values of minimum entropy are then sought, in order to identify the reaction process and results that give the tank's optimum energy conservation.

  11. Jet identification based on probability calculations using Bayes' theorem

    Jacobsson, C.; Joensson, L.; Lindgren, G.; Nyberg-Werther, M.

    1994-11-01

    The problem of identifying jets at LEP and HERA has been studied. Identification using jet energies and fragmentation properties was treated separately in order to investigate the degree of quark-gluon separation that can be achieved by either of these approaches. In the case of the fragmentation-based identification, a neural network was used, and a test of the dependence on the jet production process and the fragmentation model was done. Instead of working with the separation variables directly, these have been used to calculate probabilities of having a specific type of jet, according to Bayes' theorem. This offers a direct interpretation of the performance of the jet identification and provides a simple means of combining the results of the energy- and fragmentation-based identifications. (orig.)
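
    The Bayes step described above, in miniature (our sketch; the likelihoods and prior are placeholders, not values from the study): given per-jet likelihoods for the separation variables, the posterior probability of a quark jet follows directly from Bayes' theorem, and posteriors from the energy- and fragmentation-based identifications can be combined the same way.

    # p(x|quark), p(x|gluon): likelihoods of the observed separation variables
    def quark_posterior(p_x_quark, p_x_gluon, prior_quark=0.5):
        prior_gluon = 1.0 - prior_quark
        num = p_x_quark * prior_quark
        return num / (num + p_x_gluon * prior_gluon)

    # Example: likelihood ratio 3:1 in favour of "quark", prior quark fraction 0.4
    print(quark_posterior(0.3, 0.1, prior_quark=0.4))  # ~0.667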

  12. The quantitative Morse theorem

    Loi, Ta Le; Phien, Phan

    2013-01-01

    In this paper, we give a proof of the quantitative Morse theorem stated by Y. Yomdin in [Y1]. The proof is based on the quantitative Sard theorem, the quantitative inverse function theorem and the quantitative Morse lemma.

  13. Bell's theorem based on a generalized EPR criterion of reality

    Eberhard, P.H.; Rosselet, P.

    1995-01-01

    First, the demonstration of Bell's theorem, i.e., of the nonlocal character of quantum theory, is spelled out using the EPR criterion of reality as premises and a gedanken experiment involving two particles. Then, the EPR criterion is extended to include quantities predicted almost with certainty, and Bell's theorem is demonstrated on these new premises. The same experiment is used but in conditions that become possible in real life, without the requirements of ideal efficiencies and zero background. Very high efficiencies and low background are needed, but these requirements may be met in the future

  14. Bell's theorem based on a generalized EPR criterion of reality

    Eberhard, P.H.; Rosselet, P.

    1993-04-01

    First, the demonstration of Bell's theorem, i.e. of the non-local character of quantum theory, is spelled out using the EPR criterion of reality as premises and a gedanken experiment involving two particles. Then, the EPR criterion is extended to include quantities predicted almost with certainty, and Bell's theorem is demonstrated on these new premises. The same experiment is used but in conditions that become possible in real life, without the requirements of ideal efficiencies and zero background. Very high efficiencies and low background are needed, but these requirements may be met in the future. (author) 1 fig., 11 refs

  15. Proofs of the Kochen–Specker theorem based on a system of three qubits

    Waegell, Mordecai; Aravind, P K

    2012-01-01

    A number of new proofs of the Kochen–Specker theorem are given based on the observables of the three-qubit Pauli group. Each proof is presented in the form of a diagram from which it is obvious by inspection. Each of our observable-based proofs leads to a system of projectors and bases that generally yields a large number of ‘parity proofs’ of the Kochen–Specker theorem. Some examples of such proofs are given and some of their applications are discussed. (paper)

  16. A Gleason-Type Theorem for Any Dimension Based on a Gambling Formulation of Quantum Mechanics

    Benavoli, Alessio; Facchini, Alessandro; Zaffalon, Marco

    2017-07-01

    Based on a gambling formulation of quantum mechanics, we derive a Gleason-type theorem that holds for any dimension n of a quantum system, and in particular for n=2. The theorem states that the only logically consistent probability assignments are exactly the ones that are definable as the trace of the product of a projector and a density matrix operator. In addition, we detail the reason why dispersion-free probabilities are actually not valid, or rational, probabilities for quantum mechanics, and hence should be excluded from consideration.
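
    In symbols, the probability assignments the theorem singles out are the Born-rule ones (standard notation, ours): for a projector P and a density operator ρ,

    $$p(P) = \operatorname{Tr}(\rho P), \qquad \rho \ge 0, \quad \operatorname{Tr}\rho = 1, \quad P = P^\dagger = P^2,$$

    and dispersion-free assignments (all probabilities 0 or 1) are excluded.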

  17. Algorithm/Architecture Co-design of the Generalized Sampling Theorem Based De-Interlacer.

    Beric, A.; Haan, de G.; Sethuraman, R.; Meerbergen, van J.

    2005-01-01

    De-interlacing is a major determinant of image quality in a modern display processing chain. The de-interlacing method based on the generalized sampling theorem (GST)applied to motion estimation and motion compensation provides the best de-interlacing results. With HDTV interlaced input material

  18. Noether's Theorem and its Inverse of Birkhoffian System in Event Space Based on Herglotz Variational Problem

    Tian, X.; Zhang, Y.

    2018-03-01

    Herglotz variational principle, in which the functional is defined by a differential equation, generalizes the classical ones defining the functional by an integral. The principle gives a variational principle description of nonconservative systems even when the Lagrangian is independent of time. This paper focuses on studying the Noether's theorem and its inverse of a Birkhoffian system in event space based on the Herglotz variational problem. Firstly, according to the Herglotz variational principle of a Birkhoffian system, the principle of a Birkhoffian system in event space is established. Secondly, its parametric equations and two basic formulae for the variation of Pfaff-Herglotz action of a Birkhoffian system in event space are obtained. Furthermore, the definition and criteria of Noether symmetry of the Birkhoffian system in event space based on the Herglotz variational problem are given. Then, according to the relationship between the Noether symmetry and conserved quantity, the Noether's theorem is derived. Under classical conditions, Noether's theorem of a Birkhoffian system in event space based on the Herglotz variational problem reduces to the classical ones. In addition, Noether's inverse theorem of the Birkhoffian system in event space based on the Herglotz variational problem is also obtained. In the end of the paper, an example is given to illustrate the application of the results.
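
    As background, the Herglotz variational problem defines the action functional implicitly through a differential equation rather than an integral (standard formulation; notation ours):

    $$\dot z(t) = L\big(t, q(t), \dot q(t), z(t)\big), \qquad z(t_0) = z_0,$$

    with the terminal value z(t_1) extremized over curves q(t); when L does not depend on z this reduces to the classical integral action.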

  19. Reinforcement Learning Based on the Bayesian Theorem for Electricity Markets Decision Support

    Sousa, Tiago; Pinto, Tiago; Praca, Isabel

    2014-01-01

    This paper presents the applicability of a reinforcement learning algorithm based on the application of the Bayesian theorem of probability. The proposed reinforcement learning algorithm is an advantageous and indispensable tool for ALBidS (Adaptive Learning strategic Bidding System), a multi...

  20. Deterministic and efficient quantum cryptography based on Bell's theorem

    Chen Zengbing; Pan Jianwei; Zhang Qiang; Bao Xiaohui; Schmiedmayer, Joerg

    2006-01-01

    We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows a higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under the current technology

  1. Deterministic and efficient quantum cryptography based on Bell's theorem

    Chen, Z.-B.; Zhang, Q.; Bao, X.-H.; Schmiedmayer, J.; Pan, J.-W.

    2005-01-01

    Full text: We propose a novel double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish a key bit with the help of classical communications. Eavesdropping can be detected by checking the violation of local realism for the detected events. We also show that our protocol allows a robust implementation under current technology. (author)

  2. Birkhoff’s theorem in Lovelock gravity for general base manifolds

    Ray, Sourya

    2015-10-01

    We extend the Birkhoff’s theorem in Lovelock gravity for arbitrary base manifolds using an elementary method. In particular, it is shown that any solution of the form of a warped product of a two-dimensional transverse space and an arbitrary base manifold must be static. Moreover, the field equations restrict the base manifold such that all the non-trivial intrinsic Lovelock tensors of the base manifold are constants, which can be chosen arbitrarily, and the metric in the transverse space is determined by a single function of a spacelike coordinate which satisfies an algebraic equation involving the constants characterizing the base manifold along with the coupling constants.

  3. Poncelet's theorem

    Flatto, Leopold

    2009-01-01

    Poncelet's theorem is a famous result in algebraic geometry, dating to the early part of the nineteenth century. It concerns closed polygons inscribed in one conic and circumscribed about another. The theorem is of great depth in that it relates to a large and diverse body of mathematics. There are several proofs of the theorem, none of which is elementary. A particularly attractive feature of the theorem, which is easily understood but difficult to prove, is that it serves as a prism through which one can learn and appreciate a lot of beautiful mathematics. This book stresses the modern appro

  4. Developing a new solar radiation estimation model based on Buckingham theorem

    Ekici, Can; Teke, Ismail

    2018-06-01

    While the value of solar radiation can be expressed physically in the days without clouds, this expression becomes difficult in cloudy and complicated weather conditions. In addition, solar radiation measurements are often not taken in developing countries. In such cases, solar radiation estimation models are used. Solar radiation prediction models estimate solar radiation using other measured meteorological parameters those are available in the stations. In this study, a solar radiation estimation model was obtained using Buckingham theorem. This theory has been shown to be useful in predicting solar radiation. In this study, Buckingham theorem is used to express the solar radiation by derivation of dimensionless pi parameters. This derived model is compared with temperature based models in the literature. MPE, RMSE, MBE and NSE error analysis methods are used in this comparison. Allen, Hargreaves, Chen and Bristow-Campbell models in the literature are used for comparison. North Dakota's meteorological data were used to compare the models. Error analysis were applied through the comparisons between the models in the literature and the model that is derived in the study. These comparisons were made using data obtained from North Dakota's agricultural climate network. In these applications, the model obtained within the scope of the study gives better results. Especially, in terms of short-term performance, it has been found that the obtained model gives satisfactory results. It has been seen that this model gives better accuracy in comparison with other models. It is possible in RMSE analysis results. Buckingham theorem was found useful in estimating solar radiation. In terms of long term performances and percentage errors, the model has given good results.
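
    The error measures named above have standard definitions; for reference, a compact implementation (ours; variable names are illustrative):

    import numpy as np

    def error_stats(predicted, observed):
        p, o = np.asarray(predicted, float), np.asarray(observed, float)
        rmse = np.sqrt(np.mean((p - o) ** 2))      # root mean square error
        mbe = np.mean(p - o)                       # mean bias error
        mpe = np.mean((p - o) / o) * 100.0         # mean percentage error
        nse = 1.0 - np.sum((p - o) ** 2) / np.sum((o - o.mean()) ** 2)
        return rmse, mbe, mpe, nse                 # nse: Nash-Sutcliffe efficiency

    print(error_stats([4.8, 5.1, 6.2], [5.0, 5.0, 6.0]))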

  5. Analysis of swarm behaviors based on an inversion of the fluctuation theorem.

    Hamann, Heiko; Schmickl, Thomas; Crailsheim, Karl

    2014-01-01

    A grand challenge in the field of artificial life is to find a general theory of emergent self-organizing systems. In swarm systems most of the observed complexity is based on motion of simple entities. Similarly, statistical mechanics focuses on collective properties induced by the motion of many interacting particles. In this article we apply methods from statistical mechanics to swarm systems. We try to explain the emergent behavior of a simulated swarm by applying methods based on the fluctuation theorem. Empirical results indicate that swarms are able to produce negative entropy within an isolated subsystem due to frozen accidents. Individuals of a swarm are able to locally detect fluctuations of the global entropy measure and store them, if they are negative entropy productions. By accumulating these stored fluctuations over time the swarm as a whole is producing negative entropy and the system ends up in an ordered state. We claim that this indicates the existence of an inverted fluctuation theorem for emergent self-organizing dissipative systems. This approach bears the potential of general applicability.
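
    For reference, a standard detailed fluctuation theorem (general form, in units with k_B = 1; not the authors' generalized version) reads

    $$\frac{P(\Delta S = +A)}{P(\Delta S = -A)} = e^{A},$$

    so negative-entropy fluctuations are exponentially rare but not forbidden; the article's claim is that a swarm can detect and accumulate exactly such fluctuations.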

  6. Parity proofs of the Kochen–Specker theorem based on the Lie algebra E8

    Waegell, Mordecai; Aravind, P K

    2015-01-01

    The 240 root vectors of the Lie algebra E8 lead to a system of 120 rays in a real eight-dimensional Hilbert space that contains a large number of parity proofs of the Kochen–Specker (KS) theorem. After introducing the rays in a triacontagonal representation due to Coxeter, we present their KS diagram in the form of a ‘basis table’ showing all 2025 bases (i.e., sets of eight mutually orthogonal rays) formed by the rays. Only a few of the bases are actually listed, but simple rules are given, based on the symmetries of E8, for obtaining all the other bases from the ones shown. The basis table is an object of great interest because all the parity proofs of E8 can be exhibited as subsets of it. We show how the triacontagonal representation of E8 facilitates the identification of substructures that are more easily searched for their parity proofs. We have found hundreds of different types of parity proofs, ranging from 9 bases (or contexts) at the low end to 35 bases at the high end, and involving projectors of various ranks and multiplicities. After giving an overview of the proofs we found, we present a few concrete examples of the proofs that illustrate both their generic features as well as some of their more unusual properties. In particular, we present a proof involving 34 rays and 9 bases that appears to provide the most compact proof of the KS theorem found to date in eight-dimensions. (paper)

  7. Frege's theorem

    Heck, Richard G

    2011-01-01

    Frege's Theorem collects eleven essays by Richard G Heck, Jr, one of the world's leading authorities on Frege's philosophy. The Theorem is the central contribution of Gottlob Frege's formal work on arithmetic. It tells us that the axioms of arithmetic can be derived, purely logically, from a single principle: the number of these things is the same as the number of those things just in case these can be matched up one-to-one with those. But that principle seems so utterlyfundamental to thought about number that it might almost count as a definition of number. If so, Frege's Theorem shows that a

  8. A Bidirectional Generalized Synchronization Theorem-Based Chaotic Pseudo-random Number Generator

    Han Shuangshuang

    2013-07-01

    Based on a bidirectional generalized synchronization theorem for discrete chaos systems, this paper introduces a new 5-dimensional bidirectional generalized chaos synchronization system (BGCSDS), whose prototype is a novel chaotic system introduced in [12]. Numerical simulation showed that two pairs of variables of the BGCSDS achieve generalized chaos synchronization via a transform H. A chaos-based pseudo-random number generator (CPNG) was designed using the new BGCSDS. The FIPS-140-2 tests issued by the National Institute of Standards and Technology (NIST) were used to verify the randomness of the 1000 binary number sequences generated via the CPNG and the RC4 algorithm, respectively. The results showed that all the tested sequences passed the FIPS-140-2 tests. The confidence interval analysis showed that the statistical properties of the randomness of the sequences generated via the CPNG and the RC4 algorithm do not have significant differences.

  9. Construction of Quasi-Cyclic LDPC Codes Based on Fundamental Theorem of Arithmetic

    Hai Zhu

    2018-01-01

    Quasi-cyclic (QC) LDPC codes play an important role in 5G communications and have been chosen as the standard codes for the 5G enhanced mobile broadband (eMBB) data channel. In this paper, we study the construction of QC LDPC codes based on an arbitrary given expansion factor (or lifting degree). First, we analyze the cycle structure of QC LDPC codes and give the necessary and sufficient condition for the existence of short cycles. Based on the fundamental theorem of arithmetic in number theory, we divide the integer factorization into three cases and present three classes of QC LDPC codes accordingly. Furthermore, a general construction method of QC LDPC codes with girth of at least 6 is proposed. Numerical results show that the constructed QC LDPC codes perform well over the AWGN channel when decoded with iterative algorithms.
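
    A small sketch of the kind of cycle test implied above (our Python, not the paper's construction): for an exponent matrix P of circulant shifts with lifting degree N, girth at least 6 means no 2x2 submatrix satisfies the standard 4-cycle condition.

    from itertools import combinations

    def has_girth_at_least_6(P, N):
        m, n = len(P), len(P[0])
        for i, j in combinations(range(m), 2):      # pairs of rows
            for k, l in combinations(range(n), 2):  # pairs of columns
                if (P[i][k] - P[i][l] + P[j][l] - P[j][k]) % N == 0:
                    return False                    # a length-4 cycle exists
        return True

    P = [[0, 0, 0], [0, 1, 2]]                      # toy exponent matrix
    print(has_girth_at_least_6(P, N=5))             # True for this choice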

  10. Development and application of 3-D fractal reservoir model based on collage theorem

    Kim, I.K.; Kim, K.S.; Sung, W.M. [Hanyang Univ., Seoul (Korea, Republic of)

    1995-04-30

    Reservoir characterization is the essential process to accurately evaluate the reservoir and has been conducted by geostatistical methods, the SRA algorithm, etc. The distribution of heterogeneous properties characterized by these methods shows randomly distributed phenomena and does not present the anomalous shape of property variation at discontinued space as compared with the shapes observed in nature. This study proposed a new algorithm based on the fractal concept of the collage theorem, which can virtually present not only the geometric shape of irregular and anomalous pore structures or coastlines, but also property variation for discontinuously observed data. On the basis of the fractal concept, a three-dimensional fractal reservoir model was developed to more accurately characterize the heterogeneous reservoir. We performed analysis of pre-predictable, hypothetically observed permeability data using the fractal reservoir model. From the results, we can recognize that the permeability distributions in the areal view or the cross-sectional view were consistent with the observed data. (author). 8 refs., 1 tab., 6 figs.

  11. The spectral theorem for quaternionic unbounded normal operators based on the S-spectrum

    Alpay, Daniel, E-mail: dany@math.bgu.ac.il; Kimsey, David P., E-mail: dpkimsey@gmail.com [Department of Mathematics, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel)]; Colombo, Fabrizio, E-mail: fabrizio.colombo@polimi.it [Politecnico di Milano, Dipartimento di Matematica, Via E. Bonardi, 9, 20133 Milano (Italy)]

    2016-02-15

    In this paper we prove the spectral theorem for quaternionic unbounded normal operators using the notion of S-spectrum. The proof technique consists of first establishing a spectral theorem for quaternionic bounded normal operators and then using a transformation which maps a quaternionic unbounded normal operator to a quaternionic bounded normal operator. With this paper we complete the foundation of spectral analysis of quaternionic operators. The S-spectrum has been introduced to define the quaternionic functional calculus but it turns out to be the correct object also for the spectral theorem for quaternionic normal operators. The lack of a suitable notion of spectrum was a major obstruction to fully understand the spectral theorem for quaternionic normal operators. A prime motivation for studying the spectral theorem for quaternionic unbounded normal operators is given by the subclass of unbounded anti-self adjoint quaternionic operators which play a crucial role in the quaternionic quantum mechanics.
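
    For orientation, the S-spectrum referred to above is commonly defined as follows (standard definition from the quaternionic functional calculus literature; notation ours): for a right linear quaternionic operator T,

    $$\sigma_S(T) = \{\, s \in \mathbb{H} : T^2 - 2\,\mathrm{Re}(s)\,T + |s|^2 I \ \text{is not invertible} \,\}.$$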

  12. Generation Method of Multipiecewise Linear Chaotic Systems Based on the Heteroclinic Shil’nikov Theorem and Switching Control

    Chunyan Han

    2015-01-01

    Based on the heteroclinic Shil’nikov theorem and switching control, a kind of multipiecewise linear chaotic system is constructed in this paper. Firstly, two fundamental linear systems are constructed via linearization of a chaotic system at its two equilibrium points. Secondly, a two-piecewise linear chaotic system which satisfies the Shil’nikov theorem is generated by constructing a heteroclinic loop between the equilibrium points of the two fundamental systems by switching control. Finally, another multipiecewise linear chaotic system that also satisfies the Shil’nikov theorem is obtained via alternate translation of the two fundamental linear systems and heteroclinic loop construction of adjacent equilibria for the multipiecewise linear system. Some basic dynamical characteristics, including divergence, Lyapunov exponents, and bifurcation diagrams of the constructed systems, are analyzed. Meanwhile, computer simulation and circuit design are used for the proposed chaotic systems, and they are demonstrated to be effective for the method of chaos anticontrol.

  13. Heart rate-based lactate minimum test: a reproducible method.

    Strupler, M.; Muller, G.; Perret, C.

    2009-01-01

    OBJECTIVE: To find the individual intensity for aerobic endurance training, the lactate minimum test (LMT) seems to be a promising method. LMTs described in the literature consist of speed or work rate-based protocols, but for training prescription in daily practice mostly heart rate is used. The

  14. Nursing Minimum Data Set Based on EHR Archetypes Approach.

    Spigolon, Dandara N; Moro, Cláudia M C

    2012-01-01

    The establishment of a Nursing Minimum Data Set (NMDS) can facilitate the use of health information systems. Adopting these sets and representing them with archetypes is a way of developing and supporting health systems. The objective of this paper is to describe the definition of a minimum data set for nursing in endometriosis, represented with archetypes. The study was divided into two steps: defining the Nursing Minimum Data Set for endometriosis, and developing archetypes related to the NMDS. The nursing data set for endometriosis was represented in the form of an archetype, using the whole perception of the evaluation item, organs and senses. This form of representation is an important tool for semantic interoperability and knowledge representation in health information systems.

  15. Pythagoras theorem

    Debattista, Josephine

    2000-01-01

    Pythagoras (c. 580 BC) was a Greek mathematician who became famous for formulating Pythagoras' Theorem, though its principles were known earlier. The ancient Egyptians wanted to lay out square (90°) corners for their fields. To solve this problem, about 2000 BC, they discovered the 'magic' of the 3-4-5 triangle.
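
    The arithmetic behind the 'magic':

    $$3^2 + 4^2 = 9 + 16 = 25 = 5^2,$$

    so a triangle with sides 3, 4 and 5 has a right angle opposite the side of length 5.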

  16. A cubic map chaos criterion theorem with applications in generalized synchronization based pseudorandom number generator and image encryption

    Yang, Xiuping, E-mail: yangxiuping-1990@163.com; Min, Lequan, E-mail: minlequan@sina.com; Wang, Xue, E-mail: wangxue-20130818@163.com [Schools of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083 (China)

    2015-05-15

    This paper sets up a chaos criterion theorem on a kind of cubic polynomial discrete maps. Using this theorem, Zhou-Song's chaos criterion theorem on quadratic polynomial discrete maps and a generalized synchronization (GS) theorem construct an eight-dimensional chaotic GS system. Numerical simulations have been carried out to verify the effectiveness of the theoretical results. The chaotic GS system is used to design a chaos-based pseudorandom number generator (CPRNG). The FIPS 140-2 test suite and the Generalized FIPS 140-2 test suite are used, respectively, to test the randomness of 1000 key streams of 20 000 bits each generated by the CPRNG. The results show that 99.9%/98.5% of the key streams pass the FIPS 140-2/Generalized FIPS 140-2 tests. Numerical simulations show that different keystreams agree, on average, in 50.001% of their codes. The key space of the CPRNG is larger than 2^1345. As an application of the CPRNG, this study gives an image encryption example. Experimental results show that the linear coefficients between the plaintext and the ciphertext and the decrypted ciphertexts via the 100 key streams with perturbed keys are less than 0.00428. The result suggests that the texts decrypted via keystreams generated from perturbed keys of the CPRNG are almost completely independent of the original image text, and brute-force attacks are needed to break the cryptographic system.

  17. A cubic map chaos criterion theorem with applications in generalized synchronization based pseudorandom number generator and image encryption.

    Yang, Xiuping; Min, Lequan; Wang, Xue

    2015-05-01

    This paper sets up a chaos criterion theorem on a kind of cubic polynomial discrete maps. Using this theorem, Zhou-Song's chaos criterion theorem on quadratic polynomial discrete maps and a generalized synchronization (GS) theorem construct an eight-dimensional chaotic GS system. Numerical simulations have been carried out to verify the effectiveness of the theoretical results. The chaotic GS system is used to design a chaos-based pseudorandom number generator (CPRNG). The FIPS 140-2 test suite and the Generalized FIPS 140-2 test suite are used, respectively, to test the randomness of 1000 key streams of 20 000 bits each generated by the CPRNG. The results show that 99.9%/98.5% of the key streams pass the FIPS 140-2/Generalized FIPS 140-2 tests. Numerical simulations show that different keystreams agree, on average, in 50.001% of their codes. The key space of the CPRNG is larger than 2^1345. As an application of the CPRNG, this study gives an image encryption example. Experimental results show that the linear coefficients between the plaintext and the ciphertext and the decrypted ciphertexts via the 100 key streams with perturbed keys are less than 0.00428. The result suggests that the texts decrypted via keystreams generated from perturbed keys of the CPRNG are almost completely independent of the original image text, and brute-force attacks are needed to break the cryptographic system.
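
    A toy illustration of a cubic-map pseudorandom bit generator (ours, not the paper's design, which uses the eight-dimensional chaotic GS system): the Chebyshev cubic map x -> 4x^3 - 3x is chaotic on [-1, 1], and thresholding its orbit yields a bit stream.

    def cubic_map_bits(x0=0.123456789, nbits=64):
        x, bits = x0, []
        for _ in range(nbits):
            x = 4.0 * x**3 - 3.0 * x          # cubic Chebyshev map, chaotic on [-1, 1]
            bits.append(1 if x > 0.0 else 0)  # threshold to extract one bit
        return bits

    print(''.join(map(str, cubic_map_bits())))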

  18. Topological interpretation of Luttinger theorem

    Seki, Kazuhiro; Yunoki, Seiji

    2017-01-01

    Based solely on the analytical properties of the single-particle Green's function of fermions at finite temperatures, we show that the generalized Luttinger theorem inherently possesses topological aspects. The topological interpretation of the generalized Luttinger theorem can be introduced because i) the Luttinger volume is represented as the winding number of the single-particle Green's function and thus ii) the deviation of the theorem, expressed with a ratio between the interacting and n...

  19. Fuzzy stochastic generalized reliability studies on embankment systems based on first-order approximation theorem

    Wang Yajun

    2008-12-01

    In order to address the complex uncertainties caused by interfacing between the fuzziness and randomness of the safety problem for embankment engineering projects, and to evaluate the safety of embankment engineering projects more scientifically and reasonably, this study presents fuzzy logic modeling of the stochastic finite element method (SFEM) based on the harmonious finite element (HFE) technique, using a first-order approximation theorem. Fuzzy mathematical models of safety repertories were introduced into the SFEM to analyze the stability of embankments and foundations, in order to describe the fuzzy failure procedure for the random safety performance function. The fuzzy models were developed with membership functions with half depressed gamma distribution, half depressed normal distribution, and half depressed echelon distribution. The fuzzy stochastic mathematical algorithm was used to comprehensively study the local failure mechanism of the main embankment section near Jingnan in the Yangtze River in terms of numerical analysis for the probability integration of reliability on the random field affected by three fuzzy factors. The result shows that the middle region of the embankment is the principal zone of concentrated failure due to local fractures. There is also some local shear failure on the embankment crust. This study provides a referential method for solving complex multi-uncertainty problems in engineering safety analysis.

  20. Myocardial imaging with 201Tl: an analysis of clinical usefulness based on Bayes' theorem

    Hamilton, G.W.; Trobaugh, G.B.; Ritchie, J.L.; Gould, K.L.; DeRouen, T.A.; Williams, D.L.

    1978-01-01

    Rest-exercise thallium-201 (201Tl) myocardial imaging and rest-exercise electrocardiography were performed in 137 patients with suspected coronary artery disease (CAD). The final diagnosis of coronary disease was made by arteriography. Sensitivity and specificity for the ECG and thallium studies alone or combined were then determined. Based on these data, the posttest probability of CAD with a normal or abnormal test was calculated using Bayes' theorem for disease prevalences ranging from 1% to 99%. The difference between the probability of disease with a normal test and the probability of disease with an abnormal test was also calculated for each prevalence range. The results demonstrate that 201Tl imaging discriminates between disease absence or presence better than does the ECG. However, both the ECG and thallium studies provide rather poor discrimination between disease and no disease when the disease prevalence is low (less than 0.20) or high (greater than 0.70). Because of this characteristic, it is unlikely that screening tests for CAD will prove useful unless the disease prevalence in the group under study is in the moderate (0.20 to 0.70) range.
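
    The calculation described above, in miniature (our sketch; the sensitivity, specificity and prevalence values are placeholders, not the study's):

    def post_test_probability(sens, spec, prevalence, positive=True):
        if positive:   # P(disease | abnormal test)
            num = sens * prevalence
            den = num + (1.0 - spec) * (1.0 - prevalence)
        else:          # P(disease | normal test)
            num = (1.0 - sens) * prevalence
            den = num + spec * (1.0 - prevalence)
        return num / den

    print(post_test_probability(0.80, 0.90, 0.20, positive=True))   # ~0.67
    print(post_test_probability(0.80, 0.90, 0.20, positive=False))  # ~0.05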

  1. Passivity Based Stabilization of Non-minimum Phase Nonlinear Systems

    Travieso-Torres, J.C.; Duarte-Mermoud, M.A.; Zagalak, Petr

    2009-01-01

    Vol. 45, No. 3 (2009), pp. 417-426. ISSN 0023-5954. R&D Projects: GA ČR(CZ) GA102/07/1596. Institutional research plan: CEZ:AV0Z10750506. Keywords: nonlinear systems; stabilisation; passivity; state feedback. Subject RIV: BC - Control Systems Theory. Impact factor: 0.445, year: 2009. http://library.utia.cas.cz/separaty/2009/AS/zagalak-passivity based stabilization of non-minimum phase nonlinear systems.pdf

  2. Parity proofs of the Kochen–Specker theorem based on 60 complex rays in four dimensions

    Waegell, Mordecai; Aravind, P K

    2011-01-01

    It is pointed out that the 60 complex rays in four dimensions associated with a system of two qubits yield over 10^9 critical parity proofs of the Kochen–Specker theorem. The geometrical properties of the rays are described, an overview of the parity proofs contained in them is given and examples of some of the proofs are exhibited. (paper)

  3. Acceleration theorems

    Palmer, R.

    1994-06-01

    Electromagnetic fields can be separated into near and far components. Near fields are extensions of static fields. They do not radiate, and they fall off more rapidly from a source than far fields. Near fields can accelerate particles, but the ratio of acceleration to source fields at a distance R is always less than R/λ or 1, whichever is smaller. Far fields can be represented as sums of plane-parallel, transversely polarized waves that travel at the velocity of light. A single such wave in a vacuum cannot give continuous acceleration, and it is shown that no sums of such waves can give net first-order acceleration. This theorem is proven in three different ways, each method showing a different aspect of the situation.

  4. Some basic theorems on the cross-sums of certain class of numbers (M-1) when the operations are done with different bases M of the arithmetic

    Ozoemena, P.C.; Onwumechili, C.A.

    1988-11-01

    Some new theorems have been propounded for the numbers (M-1), as they relate to other numerals through the basic arithmetical operations, at different bases M. For some reason, we give the proof of the theorems for the case M=10 using mathematical induction, and by Peano's fifth axiom make our generalizations. Comments are made in respect of the numbers (M-1) (in this case, 9). Apart from our theorems facilitating mathematical operations, evidence has also been given, from different sources, of the interesting properties of this class of numbers, represented in our own case by the numeral 9. The theorems neither violate the divisibility rule for 9 nor are they a consequence of it. From symmetry, a suggestion is made in respect of the possible origin of the numeration in base 10, and the case of a ten-dimensional Universe is reconsidered. (author). 18 refs, 1 fig., 4 tabs
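
    A quick computational check of the familiar special case M = 10 (our sketch): the repeated cross-sum of any multiple of 9 reduces to 9.

    def digit_sum(n, base=10):
        s = 0
        while n:
            s, n = s + n % base, n // base
        return s

    def repeated_digit_sum(n, base=10):
        while n >= base:
            n = digit_sum(n, base)
        return n

    # the digital root of 9k is 9 for every k >= 1; the analogue holds for (M-1) in base M
    print(all(repeated_digit_sum(9 * k) == 9 for k in range(1, 1000)))  # True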

  5. The Non-Signalling theorem in generalizations of Bell's theorem

    Walleczek, J.; Grössing, G.

    2014-04-01

    Does "epistemic non-signalling" ensure the peaceful coexistence of special relativity and quantum nonlocality? The possibility of an affirmative answer is of great importance to deterministic approaches to quantum mechanics given recent developments towards generalizations of Bell's theorem. By generalizations of Bell's theorem we here mean efforts that seek to demonstrate the impossibility of any deterministic theories to obey the predictions of Bell's theorem, including not only local hidden-variables theories (LHVTs) but, critically, of nonlocal hidden-variables theories (NHVTs) also, such as de Broglie-Bohm theory. Naturally, in light of the well-established experimental findings from quantum physics, whether or not a deterministic approach to quantum mechanics, including an emergent quantum mechanics, is logically possible, depends on compatibility with the predictions of Bell's theorem. With respect to deterministic NHVTs, recent attempts to generalize Bell's theorem have claimed the impossibility of any such approaches to quantum mechanics. The present work offers arguments showing why such efforts towards generalization may fall short of their stated goal. In particular, we challenge the validity of the use of the non-signalling theorem as a conclusive argument in favor of the existence of free randomness, and therefore reject the use of the non-signalling theorem as an argument against the logical possibility of deterministic approaches. We here offer two distinct counter-arguments in support of the possibility of deterministic NHVTs: one argument exposes the circularity of the reasoning which is employed in recent claims, and a second argument is based on the inconclusive metaphysical status of the non-signalling theorem itself. We proceed by presenting an entirely informal treatment of key physical and metaphysical assumptions, and of their interrelationship, in attempts seeking to generalize Bell's theorem on the basis of an ontic, foundational

  6. A tokamak with nearly uniform coil stress based on virial theorem

    Tsutsui, H.

    2002-01-01

    A novel tokamak concept has been devised with a new type of toroidal field (TF) coils and a central solenoid (CS) whose stress is much reduced, approaching a theoretical limit determined by the virial theorem. Recently, we developed a tokamak with force-balanced coils (FBCs), which are multi-pole helical hybrid coils combining TF coils and a CS coil. The combination reduces the net electromagnetic force in the direction of the major radius. In this work, we have extended the FBC concept using the virial theorem. High-field coils should accordingly have the same averaged principal stresses in all directions, whereas the conventional FBC reduces stress in the toroidal direction only. Using a shell model, we have obtained the poloidal rotation number of helical coils which satisfies the uniform stress condition, and named this coil the virial-limited coil (VLC). A VLC with circular cross section of aspect ratio A=2 reduces the maximum stress to 60% of that of TF coils. In order to prove the advantage of the VLC concept, we have designed a small VLC tokamak, Todoroki-II. The plasma discharge in Todoroki-II will be presented. (author)

  7. Minimum pressure for sustained combustion in AN-based emulsions

    Goldthorp, S.; Turcotte, R.; Badeen, C.M. [Natural Resources Canada, Ottawa, ON (Canada). Canadian Explosives Research Laboratory; Chan, S.K. [Orica Canada Inc., Brownsburg-Chatham, PQ (Canada)

    2008-04-15

    AN-based emulsions have been involved in a relatively high number of accidental explosions related to pumping operations during their manufacture, transfer and handling. The minimum burning pressure (MBP) of emulsions is used to estimate safe operating pressures for pumping and mixing equipment. This study examined testing protocols conducted to measure MBP values. Factors contributing to uncertainties in MBP data were examined, and a measurement methodology designed to incorporate the uncertainties was presented. MBP measurements obtained for 5 different AN-based emulsions in high pressure vessels were also provided, and the impact of various ingredients on MBP values was discussed. Bench-scale experiments and time current pulse tests were conducted to examine thermal ignition behaviour. The emulsions exhibited MBP values that ranged from 580 to 6510 kPa. Results of the study suggested that ingredients play a significant role on MBP values. A relatively high energy flux was required to induce stable combustion fronts in the emulsions. Large air voids containing flammable atmospheres were able to provide sufficient energy to ignite the emulsions. It was concluded that a knowledge of the MBP of emulsions is needed to ensure that corresponding pumping operations are conducted at pressures below the MBP. 11 refs., 2 tabs., 8 figs.

  8. The Levinson theorem

    Ma Zhongqi

    2006-01-01

    The Levinson theorem is a fundamental theorem in quantum scattering theory, which shows the relation between the number of bound states and the phase shift at zero momentum for the Schroedinger equation. The Levinson theorem was established and developed mainly with the Jost function, with the Green function and with the Sturm-Liouville theorem. In this review, we compare three methods of proof, study the conditions of the potential for the Levinson theorem and generalize it to the Dirac equation. The method with the Sturm-Liouville theorem is explained in some detail. References to development and application of the Levinson theorem are introduced. (topical review)

  9. The Levy sections theorem revisited

    Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Silva, Sergio Da

    2007-01-01

    This paper revisits the Levy sections theorem. We extend the scope of the theorem to time series and apply it to historical daily returns of selected dollar exchange rates. The elevated kurtosis usually observed in such series is then explained by their volatility patterns. And the duration of exchange rate pegs explains the extra elevated kurtosis in the exchange rates of emerging markets. In the end, our extension of the theorem provides an approach that is simpler than the more common explicit modelling of fat tails and dependence. Our main purpose is to build up a technique based on the sections that allows one to artificially remove the fat tails and dependence present in a data set. By analysing data through the lenses of the Levy sections theorem one can find common patterns in otherwise very different data sets

  10. The Levy sections theorem revisited

    Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Da Silva, Sergio

    2007-06-01

    This paper revisits the Levy sections theorem. We extend the scope of the theorem to time series and apply it to historical daily returns of selected dollar exchange rates. The elevated kurtosis usually observed in such series is then explained by their volatility patterns. And the duration of exchange rate pegs explains the extra elevated kurtosis in the exchange rates of emerging markets. In the end, our extension of the theorem provides an approach that is simpler than the more common explicit modelling of fat tails and dependence. Our main purpose is to build up a technique based on the sections that allows one to artificially remove the fat tails and dependence present in a data set. By analysing data through the lenses of the Levy sections theorem one can find common patterns in otherwise very different data sets.

  11. Generalized Dandelin’s Theorem

    Kheyfets, A. L.

    2017-11-01

    The paper gives a geometric proof of the theorem which states that a plane section of a second-order surface of rotation (quadric of rotation, QR) forms a conic: an ellipse, a hyperbola or a parabola. The theorem supplements the well-known Dandelin's theorem, which gives the geometric proof only for a circular cone, and extends the proof to all QR, namely an ellipsoid, a hyperboloid, a paraboloid and a cylinder. That is why the considered theorem is known as the generalized Dandelin's theorem (GDT). The GDT proof is based on a relatively unknown generalized directrix definition (GDD) of conics. The work outlines the GDD proof for all types of conics as their necessary and sufficient condition. Based on the GDD, the author proves the GDT for all QR in the case of an arbitrary position of the cutting plane. The graphical stereometric structures necessary for the proof are given. The implementation of the structures by 3D computer methods is considered, and examples of constructions made in the AutoCAD package are shown. The theorem is intended for the theoretical training course of elite student groups in architectural and construction specialties.

  12. Extended Gersgorin Theorem-Based Parameter Feasible Domain to Prevent Harmonic Resonance in Power Grid

    Tao Lin

    2017-10-01

    Harmonic resonance may cause abnormal operation and even damage of power facilities, further threatening the normal and safe operation of power systems. For renewable energy generation, controlled loads and parallel reactive power compensating equipment, operating statuses can vary frequently. Therefore, the parameters of the equivalent fundamental and harmonic admittance/impedance of these components are uncertain, which changes the elements and eigenvalues of the harmonic network admittance matrix. Consequently, harmonic resonance in the power grid is becoming increasingly complex. Hence, intense research into the prevention and suppression of harmonic resonance, particularly into the parameter feasible domain (PFD) which keeps the system away from harmonic resonance, is needed. For rapid online evaluation of the PFD, a novel method without time-consuming pointwise precise eigenvalue computations is proposed. By analyzing the singularity of the harmonic network admittance matrix, the explicit sufficient condition that the matrix elements should meet to prevent harmonic resonance is derived from the extended Gersgorin theorem. Further, via the non-uniqueness of the similarity transformation matrix (STM), a strategy to determine the appropriate STM is proposed to minimize the conservatism of the obtained PFD. Finally, the availability of the method, and its advantages in computational efficiency and conservatism, are demonstrated on four benchmarks of different scales.
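
    To illustrate the Gersgorin machinery invoked above (our sketch; the matrix is illustrative, not a real network admittance matrix): every eigenvalue of a matrix lies in a disc centred at a diagonal entry, with radius equal to the corresponding off-diagonal absolute row sum.

    import numpy as np

    A = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.2],
                  [0.5, 0.2, 5.0]])

    centres = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(centres)   # off-diagonal row sums
    for c, r in zip(centres, radii):
        print(f"disc: centre {c}, radius {r}")
    print("eigenvalues:", np.linalg.eigvals(A))       # each lies in some disc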

  13. Fermat's Last Theorem: A Theorem at Last!

    Yogananda, C S

    1996-01-01

    General Article, Resonance – Journal of Science Education, Volume 1, Issue 1, January 1996, pp. 71-79. Permanent link: https://www.ias.ac.in/article/fulltext/reso/001/01/0071-0079

  14. Gap and density theorems

    Levinson, N

    1940-01-01

    A typical gap theorem of the type discussed in the book deals with a set of exponential functions { \\{e^{{{i\\lambda}_n} x}\\} } on an interval of the real line and explores the conditions under which this set generates the entire L_2 space on this interval. A typical gap theorem deals with functions f on the real line such that many Fourier coefficients of f vanish. The main goal of this book is to investigate relations between density and gap theorems and to study various cases where these theorems hold. The author also shows that density- and gap-type theorems are related to various propertie

  15. Bit-Blasting ACL2 Theorems

    Sol Swords

    2011-10-01

    Interactive theorem proving requires a lot of human guidance. Proving a property involves (1) figuring out why it holds, then (2) coaxing the theorem prover into believing it. Both steps can take a long time. We explain how to use GL, a framework for proving finite ACL2 theorems with BDD- or SAT-based reasoning. This approach makes it unnecessary to deeply understand why a property is true, and automates the process of admitting it as a theorem. We use GL at Centaur Technology to verify execution units for x86 integer, MMX, SSE, and floating-point arithmetic.

  16. A Minimum Path Algorithm Among 3D-Polyhedral Objects

    Yeltekin, Aysin

    1989-03-01

    In this work we introduce a minimum path theorem for the 3D case. We also develop an algorithm based on the theorem we prove. The algorithm is implemented in a software package we developed using the C language. The theorem we introduce states: "Given an initial point I, a final point F, and a set S of a finite number of static obstacles, an optimal path P from I to F such that P ∩ S = ∅ is composed of straight line segments which are perpendicular to the edge segments of the objects." We prove the theorem and, depending on it, develop the following algorithm to find the minimum path among 3D polyhedral objects. The algorithm generates a point Qi on edge ei such that at Qi one can find the line which is perpendicular to the edge and the IF line. The algorithm iteratively provides a new set of initial points from Qi and explores all possible paths. Then the algorithm chooses the minimum path among the possible ones. The flowchart of the program as well as the examination of its numerical properties are included.

  17. Dynamic Newton-Puiseux Theorem

    Mannaa, Bassel; Coquand, Thierry

    2013-01-01

    A constructive version of Newton-Puiseux theorem for computing the Puiseux expansions of algebraic curves is presented. The proof is based on a classical proof by Abhyankar. Algebraic numbers are evaluated dynamically; hence the base field need not be algebraically closed and a factorization...

  18. Green's Theorem for Sign Data

    Houston, Louis M.

    2012-01-01

    Sign data are the signs of signal added to noise. It is well known that a constant signal can be recovered from sign data. In this paper, we show that an integral over variant signal can be recovered from an integral over sign data based on the variant signal. We refer to this as a generalized sign data average. We use this result to derive a Green's theorem for sign data. Green's theorem is important to various seismic processing methods, including seismic migration. Results in this paper ge...
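
    A numerical illustration of the constant-signal case mentioned above (our sketch; the noise level sigma is assumed known): the mean of the sign data estimates erf(s / (sigma * sqrt(2))), which can be inverted to recover the signal s.

    import numpy as np
    from scipy.special import erfinv

    rng = np.random.default_rng(1)
    s, sigma = 0.3, 1.0
    signs = np.sign(s + sigma * rng.standard_normal(200_000))  # sign data

    s_hat = sigma * np.sqrt(2.0) * erfinv(signs.mean())        # invert the mean
    print(s_hat)  # close to the true signal 0.3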

  19. Nonperturbative Adler-Bardeen theorem

    Mastropietro, Vieri

    2007-01-01

    The Adler-Bardeen theorem has been proven only as a statement valid at all orders in perturbation theory, without any control on the convergence of the series. In this paper we prove a nonperturbative version of the Adler-Bardeen theorem in d=2 by using recently developed technical tools in the theory of Grassmann integration. The proof is based on the assumption that the boson propagator decays fast enough for large momenta. If the boson propagator does not decay, as for Thirring contact interactions, the anomaly in the WI (Ward Identities) is renormalized by higher order contributions

  20. The Pomeranchuk theorem and its modifications

    Fischer, J.; Saly, R.

    1980-01-01

    A review of the various modifications and improvements of the Pomeranchuk theorem, as well as of related statements, is given. The present status of the Pomeranchuk relation based on dispersion relations is discussed. Numerous problems related to the Pomeranchuk theorem, and some answers to them, are collected in a clear table.

  1. Coalgebraic Lindström Theorems

    Kurz, A.; Venema, Y.

    2010-01-01

    We study modal Lindström theorems from a coalgebraic perspective. We provide three different Lindström theorems for coalgebraic logic, one of which is a direct generalisation of de Rijke's result for Kripke models. Both the other two results are based on the properties of bisimulation invariance,

  2. The Patchwork Divergence Theorem

    Dray, Tevian; Hellaby, Charles

    1994-01-01

    The divergence theorem in its usual form applies only to suitably smooth vector fields. For vector fields which are merely piecewise smooth, as is natural at a boundary between regions with different physical properties, one must patch together the divergence theorem applied separately in each region. We give an elegant derivation of the resulting "patchwork divergence theorem" which is independent of the metric signature in either region, and which is thus valid if the signature changes. (PA...

  3. Adaptive fuzzy control of a class of nonaffine nonlinear system with input saturation based on passivity theorem.

    Molavi, Ali; Jalali, Aliakbar; Ghasemi Naraghi, Mahdi

    2017-07-01

    In this paper, based on the passivity theorem, an adaptive fuzzy controller is designed for a class of unknown nonaffine nonlinear systems with arbitrary relative degree and saturation input nonlinearity to track a desired trajectory. The system equations are in normal form and the unforced dynamics may be unstable. As a relative degree greater than one is a structural obstacle to the system passivation approach, the backstepping method is used in this paper to circumvent this obstacle and passivate the system step by step. Because of the uncertainty and disturbance in the system, exact passivation and reference tracking cannot be achieved, so approximate passivation (passivation with respect to a set) is obtained to hold the tracking error in a neighborhood around zero. Furthermore, to overcome the non-smoothness of the saturation input nonlinearity, a parametric smooth nonlinear function with arbitrary approximation error is used to approximate the input saturation. Finally, simulation results for theoretical and practical examples are given to validate the proposed controller.

  4. A general comparison theorem for backward stochastic differential equations

    Cohen, Samuel N.; Elliott, Robert J.; Pearce, Charles E. M.

    2010-01-01

    A useful result when dealing with backward stochastic differential equations is the comparison theorem of Peng (1992). When the equations are not based on Brownian motion, the comparison theorem no longer holds in general. In this paper we present a condition for a comparison theorem to hold for backward stochastic differential equations based on arbitrary martingales. This theorem applies to both vector and scalar situations. Applications to the theory of nonlinear expectations ...

  5. Double minimum creep of single crystal Ni-base superalloys

    WU, X.; Wollgramm, P.; Somsen, C.; Dlouhý, Antonín; Kostka, A.; Eggeler, G.

    2016-01-01

    Vol. 112, June 2016, pp. 242-260. ISSN 1359-6454. R&D Projects: GA ČR(CZ) GA14-22834S. Institutional support: RVO:68081723. Keywords: Single crystal Ni-base superalloys; Primary creep; Transmission electron microscopy; Dislocations; Stacking faults. Subject RIV: JG - Metallurgy. Impact factor: 5.301 (2016).

  6. Linear electrical circuits. Definitions - General theorems; Circuits electriques lineaires. Definitions - Theoremes generaux

    Escane, J.M. [Ecole Superieure d' Electricite, 91 - Gif-sur-Yvette (France)

    2005-04-01

    The first part of this article defines the different elements of an electrical network and the models used to represent them. Each model relates the current and the voltage as functions of time. Models involving time functions are simple but their use is not always easy. The Laplace transformation leads to a more convenient form in which the variable is no longer directly the time; it also leads to the notion of transfer function, which is the object of the second part. The third part defines the fundamental operating rules of linear networks, commonly named 'general theorems': the linearity principle and superposition theorem, the duality principle, Thevenin's theorem, Norton's theorem, Millman's theorem, and the triangle-star and star-triangle transformations. These theorems allow the study of complex power networks and simplify the calculations. They rest on hypotheses, the first being that all networks considered in this article are linear. (J.S.)
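
    As a small illustration of Thevenin's theorem from the list above, the equivalent of a resistive voltage divider seen from its tap can be computed from the open-circuit voltage and the short-circuit current (component values are hypothetical):

        Vs, R1, R2 = 12.0, 1000.0, 2000.0   # source and divider resistors (ohms)

        Vth = Vs * R2 / (R1 + R2)           # open-circuit voltage at the tap
        Isc = Vs / R1                       # short-circuit current at the tap
        Rth = Vth / Isc                     # equals R1*R2/(R1+R2), i.e. R1 || R2

        print(f"Vth = {Vth:.2f} V, Rth = {Rth:.1f} ohm")   # 8.00 V, 666.7 ohm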

  7. Search strategy for theorem proving in artificial systems. I

    Lovitskii, V A; Barenboim, M S

    1981-01-01

    A strategy is contrived, employing the language of finite-order predicate calculus, for finding proofs of theorems. A theorem is formulated, based on 2 known theorems on purity and absorption, and used to determine 5 properties of a set of propositions. 3 references.

  8. On the Minimum Cable Tensions for the Cable-Based Parallel Robots

    Peng Liu

    2014-01-01

    This paper investigates the minimum cable tension distributions in the workspace for cable-based parallel robots in order to learn more about their stability. First, the kinematic model of a cable-based parallel robot is derived based on the wrench matrix. Then, a noniterative polynomial-based optimization algorithm with a proper optimal objective function is presented based on convex optimization theory, by which the minimum cable tension at any pose is determined. Additionally, three performance indices are proposed to show the distributions of the minimum cable tensions in a specified region of the workspace; importantly, these three indices can be used to evaluate the stability of cable-based parallel robots. Furthermore, a new workspace, the Specified Minimum Cable Tension Workspace (SMCTW), is introduced, within which all the minimum tensions exceed a specified value, therefore meeting the specified stability requirement. Finally, a camera robot driven in parallel by four cables for aerial panoramic photographing is selected to illustrate the distributions of the minimum cable tensions in the workspace and the relationship between the three performance indices and the stability.
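
    The underlying convex problem can be sketched as a linear program: find the smallest cable tensions consistent with the static equilibrium A t = w and the tension bounds. The sketch below (an invented wrench matrix and a plain sum-of-tensions objective, not the paper's noniterative polynomial-based algorithm) uses scipy:

        import numpy as np
        from scipy.optimize import linprog

        A = np.array([[ 0.6, -0.6,  0.6, -0.6],   # example wrench matrix (3x4)
                      [ 0.8,  0.8, -0.8, -0.8],
                      [ 1.0,  1.0,  1.0,  1.0]])
        w = np.array([0.0, 0.0, 40.0])            # external wrench (gravity)
        tmin, tmax = 5.0, 200.0                   # cable tension limits (N)

        res = linprog(c=np.ones(4), A_eq=A, b_eq=w, bounds=[(tmin, tmax)] * 4)
        print("feasible:", res.success, "tensions:", res.x)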

  9. SB certification handout material requirements, test methods, responsibilities, and minimum classification levels for mixture-based specification for flexible base.

    2012-10-01

    A handout with tables representing the material requirements, test methods, responsibilities, and minimum classification levels for the mixture-based specification for flexible base, with details on the aggregate and test methods employed, along with agency and co...

  10. The relativistic virial theorem

    Lucha, W.; Schoeberl, F.F.

    1989-11-01

    The relativistic generalization of the quantum-mechanical virial theorem is derived and used to clarify the connection between the nonrelativistic and (semi-)relativistic treatment of bound states. 12 refs. (Authors)

  11. Wigner's Symmetry Representation Theorem

    Wigner's Symmetry Representation Theorem: At the Heart of Quantum Field Theory! Aritra Kr Mukhopadhyay. General Article, Resonance – Journal of Science Education, Volume 19, Issue 10, October 2014, pp. 900-916.

  12. Parameters Tuning of Model Free Adaptive Control Based on Minimum Entropy

    Chao Ji; Jing Wang; Liulin Cao; Qibing Jin

    2014-01-01

    Dynamic linearization based model-free adaptive control (MFAC) algorithms have been widely used in practical systems, in which some parameters should be tuned before the algorithm is successfully applied to process industries. Considering the random noise existing in real processes, a parameter tuning method based on minimum entropy optimization is proposed, and the feature of entropy is used to accurately describe the system uncertainty. For the cases of Gaussian and non-Gaussian stochastic noise, an entropy recursive optimization algorithm is derived based on an approximate or identified model. Extensive simulation results show the effectiveness of minimum entropy optimization for the partial-form dynamic linearization based MFAC. The parameters tuned by the minimum entropy optimization index show stronger stability and more robustness than those tuned by traditional indices, such as the integral of the squared error (ISE) or the integral of time-weighted absolute error (ITAE), when system stochastic noise exists.
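
    For reference, the traditional indices mentioned above are plain integrals of the tracking error; a minimal numpy version with an illustrative decaying error:

        import numpy as np

        def ise(t, e):                       # integral of squared error
            return np.trapz(e ** 2, t)

        def itae(t, e):                      # integral of time-weighted |error|
            return np.trapz(t * np.abs(e), t)

        t = np.linspace(0.0, 5.0, 501)
        e = np.exp(-t)                       # example tracking error
        print(f"ISE = {ise(t, e):.3f}, ITAE = {itae(t, e):.3f}")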

  13. Nonextensive Pythagoras' Theorem

    Dukkipati, Ambedkar

    2006-01-01

    Kullback-Leibler relative entropy, in cases involving distributions resulting from relative-entropy minimization, has a celebrated property reminiscent of squared Euclidean distance: it satisfies an analogue of the Pythagoras' theorem. Hence, this property is referred to as the Pythagoras' theorem of relative-entropy minimization, or the triangle equality, and plays a fundamental role in geometrical approaches to statistical estimation theory such as information geometry. An equivalent of Pythagoras' theo...

  14. Some approximation theorems


    Abstract. The general theme of this note is illustrated by the following theorem: Theorem 1. Suppose K is a compact set in the complex plane and 0 belongs to the boundary ∂K. Let A(K) denote the space of all functions f on K such that f is holomorphic in a neighborhood of K and f(0) = 0. Also for any given positive integer ...

  15. Quark confinement: Dual superconductor picture based on a non-Abelian Stokes theorem and reformulations of Yang-Mills theory

    Kondo, Kei-Ichi; Kato, Seikou; Shibata, Akihiro; Shinohara, Toru

    2015-05-01

    The purpose of this paper is to review the recent progress in understanding quark confinement. The emphasis of this review is placed on how to obtain a manifestly gauge-independent picture for quark confinement supporting the dual superconductivity in the Yang-Mills theory, which should be compared with the Abelian projection proposed by 't Hooft. The basic tools are novel reformulations of the Yang-Mills theory based on change of variables extending the decomposition of the SU(N) Yang-Mills field due to Cho, Duan-Ge and Faddeev-Niemi, together with the combined use of extended versions of the Diakonov-Petrov version of the non-Abelian Stokes theorem for the SU(N) Wilson loop operator. Moreover, we give the lattice gauge theoretical versions of the reformulation of the Yang-Mills theory, which enable us to perform numerical simulations on the lattice. In fact, we present some numerical evidence supporting the dual superconductivity picture of quark confinement. The numerical simulations include the derivation of the linear static interquark potential, i.e., a non-vanishing string tension, in which "Abelian" dominance and magnetic monopole dominance are established; confirmation of the dual Meissner effect by measuring the chromoelectric flux tube between a quark-antiquark pair, the induced magnetic-monopole current, and the type of dual superconductivity; etc. In addition, we give a direct connection between topological configurations of the Yang-Mills field, such as instantons and merons, and magnetic monopoles. We show especially that magnetic monopoles in the Yang-Mills theory can be constructed in a manifestly gauge-invariant way starting from the gauge-invariant Wilson loop operator, and thereby the contribution from the magnetic monopoles can be extracted from the Wilson loop in a gauge-invariant way through the non-Abelian Stokes theorem for the Wilson loop operator, which is a prerequisite for exhibiting magnetic monopole dominance for quark ...

  16. Studies on Bell's theorem

    Guney, Veli Ugur

    In this work we look for novel classes of Bell's inequalities and methods to produce them. We also find their quantum violations including, if possible, the maximum one. The Jordan bases method that we explain in Chapter 2 is about using a pair of certain type of orthonormal bases whose spans are subspaces related to measurement outcomes of incompatible quantities on the same physical system. Jordan vectors are the briefest way of expressing the relative orientation of any two subspaces. This feature helps us to reduce the dimensionality of the parameter space on which we do searches for optimization. The work is published in [24]. In Chapter 3, we attempt to find a connection between group theory and Bell's theorem. We devise a way of generating terms of a Bell's inequality that are related to elements of an algebraic group. The same group generates both the terms of the Bell's inequality and the observables that are used to calculate the quantum value of the Bell expression. Our results are published in [25][26]. In brief, Bell's theorem is the main tool of a research program that was started by Einstein, Podolsky, Rosen [19] and Bohr [8] in the early days of quantum mechanics in their discussions about the core nature of physical systems. These debates were about a novel type of physical states called superposition states, which are introduced by quantum mechanics and manifested in the apparent inevitable randomness in measurement outcomes of identically prepared systems. Bell's huge contribution was to find a means of quantifying the problem and hence of opening the way to experimental verification by rephrasing the questions as limits on certain combinations of correlations between measurement results of spatially separate systems [7]. Thanks to Bell, the fundamental questions related to the nature of quantum mechanical systems became quantifiable [6]. According to Bell's theorem, some correlations between quantum entangled systems that involve incompatible

  17. Singularity theorems from weakened energy conditions

    Fewster, Christopher J; Galloway, Gregory J

    2011-01-01

    We establish analogues of the Hawking and Penrose singularity theorems based on (a) averaged energy conditions with exponential damping; (b) conditions on local stress-energy averages inspired by the quantum energy inequalities satisfied by a number of quantum field theories. As particular applications, we establish singularity theorems for the Einstein equations coupled to a classical scalar field, which violates the strong energy condition, and the nonminimally coupled scalar field, which also violates the null energy condition.

  18. Complex proofs of real theorems

    Lax, Peter D

    2011-01-01

    Complex Proofs of Real Theorems is an extended meditation on Hadamard's famous dictum, "The shortest and best way between two truths of the real domain often passes through the imaginary one." Directed at an audience acquainted with analysis at the first year graduate level, it aims at illustrating how complex variables can be used to provide quick and efficient proofs of a wide variety of important results in such areas of analysis as approximation theory, operator theory, harmonic analysis, and complex dynamics. Topics discussed include weighted approximation on the line, Müntz's theorem, Toeplitz operators, Beurling's theorem on the invariant spaces of the shift operator, prediction theory, the Riesz convexity theorem, the Paley-Wiener theorem, the Titchmarsh convolution theorem, the Gleason-Kahane-Żelazko theorem, and the Fatou-Julia-Baker theorem. The discussion begins with the world's shortest proof of the fundamental theorem of algebra and concludes with Newman's almost effortless proof of the prime ...

  19. Decoding and finding the minimum distance with Gröbner bases: history and new insights

    Bulygin, S.; Pellikaan, G.R.; Woungang, I.; Misra, S.; Misra, S.C.

    2010-01-01

    In this chapter, we discuss decoding techniques and finding the minimum distance of linear codes with the use of Gröbner bases. First, we give a historical overview of decoding cyclic codes via solving systems of polynomial equations over finite fields. In particular, we mention papers of Cooper, ...

  20. Definable Davies' theorem

    Törnquist, Asger Dag; Weiss, W.

    2009-01-01

    We prove the following descriptive set-theoretic analogue of a theorem of R. O. Davies: every σ function f: ℝ × ℝ → ℝ can be represented as a sum of rectangular Σ functions if and only if all reals are constructible.

  1. Converse Barrier Certificate Theorem

    Wisniewski, Rafael; Sloth, Christoffer

    2013-01-01

    This paper presents a converse barrier certificate theorem for a generic dynamical system. We show that a barrier certificate exists for any safe dynamical system defined on a compact manifold. Other authors have developed a related result by assuming that the dynamical system has no singular points in the considered subset of the state space. In this paper, we redefine the standard notion of safety to comply with generic dynamical systems with multiple singularities. Afterwards, we prove the converse barrier certificate theorem and illustrate the differences between our result and previous work.

  2. INTEGRATING CASE-BASED REASONING, KNOWLEDGE-BASED APPROACH AND TSP ALGORITHM FOR MINIMUM TOUR FINDING

    Hossein Erfani

    2009-07-01

    Imagine you have traveled to an unfamiliar city. Before you start your daily tour around the city, you need to know a good route. In network theory (NT), this is the traveling salesman problem (TSP). A dynamic programming algorithm is often used for solving this problem. However, when the road network of the city is very complicated and dense, which is usually the case, it takes too long for the algorithm to find the shortest path. Furthermore, in reality, things are not as simple as those stated in NT. For instance, the cost of travel for the same part of the city at different times may not be the same. In this project, we have integrated the TSP algorithm with an AI knowledge-based approach and case-based reasoning in solving the problem. With this integration, knowledge about the geographical information and past cases is used to help the TSP algorithm find a solution. This approach dramatically reduces the computation time required for minimum tour finding.
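
    The dynamic programming algorithm alluded to above is typically Held-Karp, which solves TSP exactly in O(n^2 * 2^n) time: fine for small instances, but intractable for dense city networks, which is what motivates the knowledge-based shortcuts. A compact Python version on an illustrative instance:

        from itertools import combinations

        def held_karp(dist):
            """Exact TSP cost by Held-Karp DP; tour starts and ends at city 0."""
            n = len(dist)
            # C[(S, j)]: cheapest path from 0 visiting set S, ending at j
            C = {(1 << j, j): dist[0][j] for j in range(1, n)}
            for size in range(2, n):
                for S in combinations(range(1, n), size):
                    bits = sum(1 << j for j in S)
                    for j in S:
                        prev = bits & ~(1 << j)
                        C[(bits, j)] = min(C[(prev, k)] + dist[k][j]
                                           for k in S if k != j)
            full = (1 << n) - 2              # all cities except 0
            return min(C[(full, j)] + dist[j][0] for j in range(1, n))

        d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
        print(held_karp(d))   # 23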

  3. The Fluctuation Theorem and Dissipation Theorem for Poiseuille Flow

    Brookes, Sarah J; Reid, James C; Evans, Denis J; Searles, Debra J

    2011-01-01

    The fluctuation theorem and the dissipation theorem provide relationships describing nonequilibrium systems arbitrarily far from, or close to, equilibrium. They both rely on the definition of a central property, the dissipation function. In this manuscript we apply these theorems to examine a boundary-thermostatted system undergoing Poiseuille flow. The relationships are verified computationally and show that the dissipation theorem is potentially useful for the study of boundary-thermostatted systems consisting of complex molecules undergoing flow in the nonlinear regime.

  4. Gödel's Theorem

    Dalen, D. van

    The following pages form a new chapter for the book Logic and Structure. This chapter deals with the incompleteness theorem, and contains enough basic material for the treatment of the required notions of computability, representability and the like. This chapter will appear in the next ...

  5. Cantor's Little Theorem

    ... generalizing the method of proof of the well-known Cantor's ... Gödel's first incompleteness theorem is proved. ... that the number of elements in any finite set is a natural number. ... proof also has a Gödel number; of course, you have to fix ...

  6. The Pythagoras' Theorem

    Saikia, Manjil P.

    2013-01-01

    We give a brief historical overview of the famous Pythagoras' theorem and of Pythagoras. We present a simple proof of the result and discuss some extensions. We follow \cite{thales}, \cite{wiki} and \cite{wiki2} for the historical comments and sources.

  7. Converse Barrier Certificate Theorems

    Wisniewski, Rafael; Sloth, Christoffer

    2016-01-01

    This paper shows that a barrier certificate exists for any safe dynamical system. Specifically, we prove converse barrier certificate theorems for a class of structurally stable dynamical systems. Other authors have developed a related result by assuming that the dynamical system has neither...

  8. Generalized optical theorems

    Cahill, K.

    1975-11-01

    Local field theory is used to derive formulas that express certain boundary values of the N-point function as sums of products of scattering amplitudes. These formulas constitute a generalization of the optical theorem and facilitate the analysis of multiparticle scattering functions.

  9. The Non-Signalling theorem in generalizations of Bell's theorem

    Walleczek, J; Grössing, G

    2014-01-01

    Does 'epistemic non-signalling' ensure the peaceful coexistence of special relativity and quantum nonlocality? The possibility of an affirmative answer is of great importance to deterministic approaches to quantum mechanics given recent developments towards generalizations of Bell's theorem. By generalizations of Bell's theorem we here mean efforts that seek to demonstrate the impossibility of any deterministic theories to obey the predictions of Bell's theorem, including not only local hidden-variables theories (LHVTs) but, critically, of nonlocal hidden-variables theories (NHVTs) also, such as de Broglie-Bohm theory. Naturally, in light of the well-established experimental findings from quantum physics, whether or not a deterministic approach to quantum mechanics, including an emergent quantum mechanics, is logically possible, depends on compatibility with the predictions of Bell's theorem. With respect to deterministic NHVTs, recent attempts to generalize Bell's theorem have claimed the impossibility of any such approaches to quantum mechanics. The present work offers arguments showing why such efforts towards generalization may fall short of their stated goal. In particular, we challenge the validity of the use of the non-signalling theorem as a conclusive argument in favor of the existence of free randomness, and therefore reject the use of the non-signalling theorem as an argument against the logical possibility of deterministic approaches. We here offer two distinct counter-arguments in support of the possibility of deterministic NHVTs: one argument exposes the circularity of the reasoning which is employed in recent claims, and a second argument is based on the inconclusive metaphysical status of the non-signalling theorem itself. We proceed by presenting an entirely informal treatment of key physical and metaphysical assumptions, and of their interrelationship, in attempts seeking to generalize Bell's theorem on the

  10. Virial theorem and hypervirial theorem in a spherical geometry

    Li Yan; Chen Jingling; Zhang Fulin

    2011-01-01

    The virial theorem in one- and two-dimensional spherical geometry is presented in both classical and quantum mechanics. Choosing a special class of hypervirial operators, the quantum hypervirial relations in the spherical spaces are obtained. With the aid of the Hellmann-Feynman theorem, these relations can be used to formulate a perturbation theorem without wavefunctions, corresponding to the hypervirial-Hellmann-Feynman perturbation theorem of Euclidean geometry. The one-dimensional harmonic oscillator and the two-dimensional Coulomb system in spherical spaces are given as two examples to illustrate the perturbation method. (paper)

  11. Discovering the Theorem of Pythagoras

    Lattanzio, Robert (Editor)

    1988-01-01

    In this 'Project Mathematics!' series, sponsored by the California Institute of Technology, the Pythagorean theorem $a^2 + b^2 = c^2$ is discussed and the history behind this theorem is explained. Through live film footage and computer animation, applications in real life are presented and the significance of and uses for this theorem are put into practice.

  12. A new method of optimal capacitor switching based on minimum spanning tree theory in distribution systems

    Li, H. W.; Pan, Z. Y.; Ren, Y. B.; Wang, J.; Gan, Y. L.; Zheng, Z. Z.; Wang, W.

    2018-03-01

    According to the radial operation characteristics of distribution systems, this paper proposes a new method for optimal capacitor switching based on minimum spanning trees. First, taking the minimal active power loss as the objective function and disregarding the capacity constraints of the capacitors and the source, the paper uses the Prim minimum-spanning-tree algorithm to obtain the power supply ranges of the capacitors and the source. Then, with the capacity constraints of the capacitors considered, the capacitors are ranked by breadth-first search. In order of this ranking, from high to low, each capacitor's compensation capacity is calculated based on its power supply range. Finally, the IEEE 69-bus system is adopted to test the accuracy and practicality of the proposed algorithm.
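
    A minimal Python version of the Prim step (the graph and weights are illustrative; in the paper the weights come from the network model and the trees delimit the supply ranges):

        import heapq

        def prim_mst(adj, root=0):
            """Prim's algorithm on an adjacency list {u: [(v, weight), ...]}."""
            visited = {root}
            heap = [(w, root, v) for v, w in adj[root]]
            heapq.heapify(heap)
            mst = []
            while heap and len(visited) < len(adj):
                w, u, v = heapq.heappop(heap)
                if v in visited:
                    continue
                visited.add(v)
                mst.append((u, v, w))
                for nxt, nw in adj[v]:
                    if nxt not in visited:
                        heapq.heappush(heap, (nw, v, nxt))
            return mst

        adj = {0: [(1, 4), (2, 1)], 1: [(0, 4), (2, 2)], 2: [(0, 1), (1, 2)]}
        print(prim_mst(adj))   # [(0, 2, 1), (2, 1, 2)]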

  13. Eigenspace-Based Minimum Variance Adaptive Beamformer Combined with Delay Multiply and Sum: Experimental Study

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2017-01-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was introduced to address the shortcomings of DAS, providing higher image quality, but its resolution improvement falls short of eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer has been combined with DMAS algebra...
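
    For orientation, the baseline DAS beamformer simply realigns each element's signal by its focusing delay and averages; a toy numpy sketch with integer-sample delays (not the EIBMV/DMAS method studied in the paper):

        import numpy as np

        def das_beamform(channels, delays):
            """Delay-and-sum: undo each channel's delay, then average.
            channels: (n_elements, n_samples) array."""
            out = np.zeros(channels.shape[1])
            for sig, d in zip(channels, delays):
                out += np.roll(sig, -d)       # realign the wavefront
            return out / len(channels)

        # The same pulse arrives with per-element delays; DAS reinforces it.
        pulse = np.sin(2 * np.pi * np.arange(20) / 20)
        delays = np.array([0, 2, 4, 6])
        chans = np.stack([np.roll(np.pad(pulse, (0, 44)), d) for d in delays])
        print(np.allclose(das_beamform(chans, delays)[:20], pulse))   # True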

  14. A remark on the energy conditions for Hawking's area theorem

    Lesourd, Martin

    2018-06-01

    Hawking's area theorem is a fundamental result in black hole theory that is universally associated with the null energy condition. That this condition can be weakened is illustrated by the formulation of a strengthened version of the theorem based on an energy condition that allows for violations of the null energy condition. With the semi-classical context in mind, some brief remarks pertaining to the suitability of the area theorem and its energy condition are made.

  15. The Surprise Examination Paradox and the Second Incompleteness Theorem

    Kritchman, Shira; Raz, Ran

    2010-01-01

    We give a new proof of Gödel's second incompleteness theorem, based on Kolmogorov complexity, Chaitin's incompleteness theorem, and an argument that resembles the surprise examination paradox. We then go the other way around and suggest that the second incompleteness theorem gives a possible resolution of the surprise examination paradox. Roughly speaking, we argue that the flaw in the derivation of the paradox is that it contains a hidden assumption that one can prove the consistency of the...

  16. The equivalence theorem

    Veltman, H.

    1990-01-01

    The equivalence theorem states that, at an energy E much larger than the vector-boson mass M, the leading order of the amplitude with longitudinally polarized vector bosons on mass shell is given by the amplitude in which these vector bosons are replaced by the corresponding Higgs ghosts. We prove the equivalence theorem and show its validity in every order in perturbation theory. We first derive the renormalized Ward identities by using the diagrammatic method. Only the Feynman-'t Hooft gauge is discussed. The last step of the proof includes the power-counting method evaluated in the large-Higgs-boson-mass limit, needed to estimate the leading energy behavior of the amplitudes involved. We derive expressions for the amplitudes involving longitudinally polarized vector bosons for all orders in perturbation theory. The fermion mass has not been neglected and everything is evaluated in the region $m_f \sim M \ll E \ll m_{\mathrm{Higgs}}$.

  17. Fully Quantum Fluctuation Theorems

    Åberg, Johan

    2018-02-01

    Systems that are driven out of thermal equilibrium typically dissipate random quantities of energy on microscopic scales. Crooks fluctuation theorem relates the distribution of these random work costs to the corresponding distribution for the reverse process. By an analysis that explicitly incorporates the energy reservoir that donates the energy and the control system that implements the dynamic, we obtain a quantum generalization of Crooks theorem that not only includes the energy changes in the reservoir but also the full description of its evolution, including coherences. Moreover, this approach opens up the possibility for generalizations of the concept of fluctuation relations. Here, we introduce "conditional" fluctuation relations that are applicable to nonequilibrium systems, as well as approximate fluctuation relations that allow for the analysis of autonomous evolution generated by global time-independent Hamiltonians. We furthermore extend these notions to Markovian master equations, implicitly modeling the influence of the heat bath.

  18. An ILP based memetic algorithm for finding minimum positive influence dominating sets in social networks

    Lin, Geng; Guan, Jian; Feng, Huibin

    2018-06-01

    The positive influence dominating set problem is a variant of the minimum dominating set problem and has many applications in social networks. It is NP-hard and has received increasing attention. Various methods have been proposed to solve the positive influence dominating set problem; however, most existing work has focused on greedy algorithms, and the solution quality needs to be improved. In this paper, we formulate the minimum positive influence dominating set problem as an integer linear program (ILP) and propose an ILP-based memetic algorithm (ILPMA) for solving it. The ILPMA integrates a greedy randomized adaptive construction procedure, a crossover operator, a repair operator, and a tabu search procedure. The performance of ILPMA is validated on nine real-world social networks with up to 36,692 nodes. The results show that ILPMA significantly improves the solution quality and is robust.
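
    The ILP at the core of such an approach is compact. The sketch below (assuming the PuLP package is available) shows the plain minimum dominating set model; the positive-influence variant strengthens the right-hand side so that each vertex has at least half of its neighbors inside the set:

        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

        graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}

        prob = LpProblem("min_dominating_set", LpMinimize)
        x = {v: LpVariable(f"x_{v}", cat=LpBinary) for v in graph}
        prob += lpSum(x.values())                           # minimise set size
        for v, nbrs in graph.items():
            prob += x[v] + lpSum(x[u] for u in nbrs) >= 1   # v is dominated

        prob.solve()
        print([v for v in graph if value(x[v]) > 0.5])      # a set of size 2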

  19. Multivariable Chinese Remainder Theorem

    IAS Admin

    ... to sleep. The third thief wakes up and finds that the rest of the coins make 7 equal piles except for a coin, which he pockets. If the total number of coins they stole is not more than 200, what is the exact number? With a bit of trial and error, one can find that 157 is a possible number. The Chinese remainder theorem gives a systematic ...
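
    A systematic solution takes only a few lines of code. Since the thieves' congruences are only partially legible above, the sketch below solves Sunzi's classic instance instead (x ≡ 2 mod 3, x ≡ 3 mod 5, x ≡ 2 mod 7):

        def crt(remainders, moduli):
            """CRT for pairwise-coprime moduli via the extended Euclid algorithm."""
            def ext_gcd(a, b):
                if b == 0:
                    return a, 1, 0
                g, x, y = ext_gcd(b, a % b)
                return g, y, x - (a // b) * y

            x, m = 0, 1
            for r, n in zip(remainders, moduli):
                g, p, _ = ext_gcd(m, n)       # p inverts m modulo n
                assert g == 1, "moduli must be pairwise coprime"
                x += m * ((r - x) * p % n)
                m *= n
            return x % m

        print(crt([2, 3, 2], [3, 5, 7]))   # 23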

  20. Dosing strategy based on prevailing aminoglycoside minimum inhibitory concentration in India: Evidence and issues

    Balaji Veeraraghavan

    2017-01-01

    Aminoglycosides are important agents for treating drug-resistant infections. The current dosing regimens of aminoglycosides do not achieve sufficient serum concentrations for infecting bacterial pathogens interpreted as susceptible by laboratory testing. The minimum inhibitory concentration was determined for nearly 2000 isolates of Enterobacteriaceae and Pseudomonas aeruginosa by the broth microdilution method. Results were interpreted based on CLSI and EUCAST interpretative criteria, and the inconsistencies in the susceptibility profiles were noted. This study provides insights into the inconsistencies between laboratory interpretation and the corresponding clinical success rates, underscoring the need to revise clinical breakpoints for amikacin and to resolve underdosing leading to clinical failure.

  1. Not seeing the forest for the trees: size of the minimum spanning trees (MSTs) forest and branch significance in MST-based phylogenetic analysis.

    Teixeira, Andreia Sofia; Monteiro, Pedro T; Carriço, João A; Ramirez, Mário; Francisco, Alexandre P

    2015-01-01

    Trees, including minimum spanning trees (MSTs), are commonly used in phylogenetic studies. But, for the research community, it may be unclear that the presented tree is just a hypothesis, chosen from among many possible alternatives. In this scenario, it is important to quantify our confidence in both the trees and the branches/edges included in such trees. In this paper, we address this problem for MSTs by introducing a new edge betweenness metric for undirected and weighted graphs. This spanning edge betweenness metric is defined as the fraction of equivalent MSTs where a given edge is present. The metric provides a per edge statistic that is similar to that of the bootstrap approach frequently used in phylogenetics to support the grouping of taxa. We provide methods for the exact computation of this metric based on the well known Kirchhoff's matrix tree theorem. Moreover, we implement and make available a module for the PHYLOViZ software and evaluate the proposed metric concerning both effectiveness and computational performance. Analysis of trees generated using multilocus sequence typing data (MLST) and the goeBURST algorithm revealed that the space of possible MSTs in real data sets is extremely large. Selection of the edge to be represented using bootstrap could lead to unreliable results since alternative edges are present in the same fraction of equivalent MSTs. The choice of the MST to be presented, results from criteria implemented in the algorithm that must be based in biologically plausible models.
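
    For unweighted connected graphs, the proposed metric has a closed form that follows from Kirchhoff's matrix tree theorem: the fraction of spanning trees containing edge (u, v) equals the effective resistance between u and v. A hedged numpy sketch (not the PHYLOViZ module itself):

        import numpy as np

        def spanning_edge_betweenness(n, edges):
            L = np.zeros((n, n))              # graph Laplacian
            for u, v in edges:
                L[u, u] += 1; L[v, v] += 1
                L[u, v] -= 1; L[v, u] -= 1
            Lp = np.linalg.pinv(L)            # pseudoinverse of the Laplacian
            return {(u, v): Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges}

        # Triangle plus a pendant edge: the pendant edge lies in every spanning
        # tree (value 1.0); each triangle edge lies in 2 of the 3 trees (2/3).
        print(spanning_edge_betweenness(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))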

  2. A Comparative Study of Face Milling of D2 Steel Using Al2O3 Based Nanofluid Minimum Quantity Lubrication and Minimum Quantity Lubrication

    Muhammad Ahsan Ul Haq

    2018-03-01

    This study investigates the effects of the process parameters feed, depth of cut, and flow rate on the temperature during face milling of D2 tool steel under two different lubricant conditions: minimum quantity lubrication (MQL) and nanofluid minimum quantity lubrication (NFMQL). Distilled water with a flow rate of 200-400 ml/hr was used for MQL; a 2% by weight concentration of Al2O3 nanoparticles in distilled water as the base fluid was used for NFMQL at the same flow rates. Response surface methodology (RSM) with a central composite design (CCD) was used for the design of the experimental runs, modeling, and analysis. ANOVA was used for the adequacy check and validation of the models. The comparison shows that the NFMQL condition reduced the temperature more during machining.

  3. MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.

  4. A novel cell weighing method based on the minimum immobilization pressure for biological applications

    Zhao, Qili [Robotics and Mechatronics Research Laboratory, Department of Mechanical and Aerospace Engineering, Monash University, Clayton 3800 (Australia); Institute of Robotics and Automatic Information System, Nankai University, Tianjin 300071 (China); Shirinzadeh, Bijan [Robotics and Mechatronics Research Laboratory, Department of Mechanical and Aerospace Engineering, Monash University, Clayton 3800 (Australia); Cui, Maosheng [Biotechnology Lab of Animal Reproduction, Tianjin Animal Sciences, Tianjin 300112 (China); Sun, Mingzhu; Liu, Yaowei; Zhao, Xin, E-mail: zhaoxin@nankai.edu.cn [Institute of Robotics and Automatic Information System, Nankai University, Tianjin 300071 (China)

    2015-07-28

    A novel weighing method for cells with spherical and other regular shapes is proposed in this paper. In this method, the relationship between the cell mass and the minimum aspiration pressure required to immobilize the cell (referred to as the minimum immobilization pressure) is derived for the first time from static theory. Based on this relationship, a robotic cell weighing process is established using a traditional micro-injection system. Experimental results on porcine oocytes demonstrate that the proposed method is able to weigh cells at an average speed of 16.3 s/cell and with a success rate of more than 90%. The derived cell mass and density are in accordance with those reported in other published results. The experimental results also demonstrate that the method can quantitatively detect less than 1% variation of the porcine oocyte mass. It can be conducted with a pair of traditional micropipettes and a commercial pneumatic micro-injection system, and is expected to allow robotic operation on batches of cells. At present, the minimum resolution of the proposed method for measuring the cell mass is 1.25 × 10^{-15} kg. These advantages make it well suited for quantifying the amount of material injected into or removed from cells in biological applications such as nuclear enucleation and embryo microinjection.

  5. Protograph based LDPC codes with minimum distance linearly growing with block size

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable-node degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly in block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.

  6. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection problems conventionally mean 'minimizing the risk, given a certain level of returns' from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has its minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on linear mean absolute deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate portfolio selection with minimum transaction lots and conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses, which is advantageous for non-symmetric return distributions. Solutions of this model can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.

  7. A novel cell weighing method based on the minimum immobilization pressure for biological applications

    Zhao, Qili; Shirinzadeh, Bijan; Cui, Maosheng; Sun, Mingzhu; Liu, Yaowei; Zhao, Xin

    2015-01-01

    A novel weighing method for cells with spherical and other regular shapes is proposed in this paper. In this method, the relationship between the cell mass and the minimum aspiration pressure required to immobilize the cell (referred to as the minimum immobilization pressure) is derived for the first time from static theory. Based on this relationship, a robotic cell weighing process is established using a traditional micro-injection system. Experimental results on porcine oocytes demonstrate that the proposed method is able to weigh cells at an average speed of 16.3 s/cell and with a success rate of more than 90%. The derived cell mass and density are in accordance with those reported in other published results. The experimental results also demonstrate that the method can quantitatively detect less than 1% variation of the porcine oocyte mass. It can be conducted with a pair of traditional micropipettes and a commercial pneumatic micro-injection system, and is expected to allow robotic operation on batches of cells. At present, the minimum resolution of the proposed method for measuring the cell mass is 1.25 × 10^{-15} kg. These advantages make it well suited for quantifying the amount of material injected into or removed from cells in biological applications such as nuclear enucleation and embryo microinjection.

  8. Minimum emissions from biomass FBC. Improved energy generation based on biomass FBC with minimum emission. Final report

    Hallgren, A. [TPS Termiska Processer AB, Nykoeping (Sweden)

    2002-02-01

    The primary aim of the project is to improve the performance of biomass-fired FBC (fluidised bed combustion) through a concurrent detailed experimental and modelling approach. The expected results shall: establish, in experimental investigations, the thermochemical performance of a selection of fuels separately and in combination with suitable bed materials; stipulate recommendations, based on lab-scale via test-rig and pilot-scale to commercial-scale investigations, on how to repress agglomeration and defluidisation in fluidised bed combustion systems; indicate, based on the experimental findings, how to utilise primary measures to minimise the formation of nitrogen oxide compounds in the FB; and provide a logistic assessment, based on case studies, identifying optimum logistic strategies for the selected fuels in commercial heat and power production. The investigation programme comprises straw, meat and bone meal (MBM) and forest residues as biofuels; quartz sand, bone ash, magnesium oxide and mullite as bed materials; sodium and ammonia carbonate as NOx reduction additives; and dolomite, kaolinite and coal ash for suppression of bed defluidisation. All materials have undergone a very detailed characterisation programme generating basic data on their chemical and structural composition as well as their sintering propensities. Combustion residues such as bottom and fly ashes have run through the same characterisation programme. The knowledge obtained in the characterisation programme supports the experimental combustion campaigns, which are performed in 20, 90 and 350 kW FBC reactors. The information produced is validated in 3 MW and 25 MW commercial FBC reactors. NOx formation and destruction mechanisms and rates have been included in a 3-D CFD software code used for NOx formation modelling. Parameter assessments confirmed the theoretical achievement of a 20-30% reduction of NOx formation through implementation of the alkali injection concept as ...

  9. Minimum area requirements for an at-risk butterfly based on movement and demography.

    Brown, Leone M; Crone, Elizabeth E

    2016-02-01

    Determining the minimum area required to sustain populations has a long history in theoretical and conservation biology. Correlative approaches are often used to estimate minimum area requirements (MARs) based on relationships between area and the population size required for persistence or between species' traits and distribution patterns across landscapes. Mechanistic approaches to estimating MAR facilitate prediction across space and time but are few. We used a mechanistic MAR model to determine the critical minimum patch size (CMP) for the Baltimore checkerspot butterfly (Euphydryas phaeton), a locally abundant species in decline along its southern range, and sister to several federally listed species. Our CMP is based on principles of diffusion, where individuals in smaller patches encounter edges and leave with higher probability than those in larger patches, potentially before reproducing. We estimated a CMP for the Baltimore checkerspot of 0.7-1.5 ha, in accordance with trait-based MAR estimates. The diffusion rate on which we based this CMP was broadly similar when estimated at the landscape scale (comparing flight path vs. capture-mark-recapture data), and the estimated population growth rate was consistent with observed site trends. Our mechanistic approach to estimating MAR is appropriate for species whose movement follows a correlated random walk and may be useful where landscape-scale distributions are difficult to assess, but demographic and movement data are obtainable from a single site or the literature. Just as simple estimates of lambda are often used to assess population viability, the principles of diffusion and CMP could provide a starting place for estimating MAR for conservation. © 2015 Society for Conservation Biology.
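
    A hedged illustration of the diffusion reasoning (classic Skellam/KISS theory rather than the authors' exact model): with diffusion coefficient D, intrinsic growth rate r, and absorbing patch edges, the one-dimensional critical patch size is L* = pi * sqrt(D / r). The parameter values below are hypothetical:

        import math

        D = 0.5   # hypothetical diffusion coefficient
        r = 1.2   # hypothetical intrinsic growth rate

        L_crit = math.pi * math.sqrt(D / r)   # population persists above L_crit
        print(f"critical patch size: {L_crit:.2f} (length units of sqrt(D))")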

  10. A minimum spanning forest based classification method for dedicated breast CT images

    Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei

    2015-01-01

    Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors' classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT images with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging.

  11. A Pontryagin Minimum Principle-Based Adaptive Equivalent Consumption Minimum Strategy for a Plug-in Hybrid Electric Bus on a Fixed Route

    Shaobo Xie

    2017-09-01

    When developing a real-time energy management strategy for a plug-in hybrid electric vehicle, it is still a challenge for the equivalent consumption minimum strategy (ECMS) to achieve near-optimal energy consumption, because the optimal equivalence factor is not readily available without the trip information. With the help of realistic speed profiles sampled from a plug-in hybrid electric bus running on a fixed commuting line, this paper proposes a convenient and effective approach for determining the equivalence factor for an adaptive ECMS. First, with the adaptive law based on the feedback of battery SOC, the equivalence factor is described as a combination of a major component and a tuning component. In particular, the major part, defined as a constant, captures the inherent consistency of regular speed profiles, while the second part, including proportional and integral terms, can slightly tune the equivalence factor to account for the disparity of daily running cycles. Moreover, Pontryagin's minimum principle is employed and solved by the shooting method to capture the co-state dynamics, in which the secant method is introduced to adjust the initial co-state value. The initial co-state value from the last shooting is then taken as the optimal stable constant of the equivalence factor. Finally, ten successive driving profiles are selected with different initial SOC levels to evaluate the proposed method, and the results demonstrate excellent fuel economy compared with the dynamic programming and PMP methods.

  12. Linear regression based on Minimum Covariance Determinant (MCD) and TELBS methods on the productivity of phytoplankton

    Gusriani, N.; Firdaniza

    2018-03-01

    The existence of outliers in multiple linear regression analysis causes the Gaussian assumption to be unfulfilled. If the least squares method is nevertheless applied to such data, it produces a model that cannot represent most of the data. For that, we need a regression method that is robust against outliers. This paper compares the minimum covariance determinant (MCD) method and the TELBS method on secondary data on the productivity of phytoplankton, which contains outliers. Based on the robust coefficient of determination, the MCD method produces a better model than the TELBS method.
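
    A hedged sketch of the MCD workflow using scikit-learn's MinCovDet (robust scatter, robust Mahalanobis distances, refit on the clean part; this mirrors the general idea, not the paper's exact estimator or its TELBS competitor):

        import numpy as np
        from sklearn.covariance import MinCovDet

        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 2))
        y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=100)
        X[:5] += 8.0                          # plant a few leverage outliers

        Z = np.column_stack([X, y])
        mcd = MinCovDet(random_state=0).fit(Z)
        d2 = mcd.mahalanobis(Z)               # squared robust distances
        keep = d2 < np.quantile(d2, 0.9)      # crude cutoff for this sketch

        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        print("robust slope estimates:", beta)   # close to [2, -1]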

  13. Elastic hadron scattering and optical theorem

    Lokajicek, Milos V.; Prochazka, Jiri

    2014-01-01

    In principle, all contemporary phenomenological models of elastic hadronic scattering have been based on the assumption of the validity of the optical theorem, which has been taken over from optics. It will be shown that this theorem, which has never actually been proved, cannot be applied to short-ranged strong interactions in any case. Actual progress in the description of collision processes is then possible only if the initial states are specified on the basis of the impact parameter values of the colliding particles and the probability dependence on this parameter is established.

  14. Goedel's theorem and leapfrog

    Lloyd, Mark Anthony

    1999-01-01

    We in the nuclear power industry consider ourselves to be at the forefront of civilised progress. Yet, all too often, even we ourselves don't believe our public relations statements about nuclear power. Why is this? Let us approach the question by considering Gödel's theorem. Gödel's theorem is extremely complicated mathematically, but for our purposes can be simplified to the maxim that one cannot validate a system from within that system. Scientists, especially those in the fields of astronomy and nuclear physics, have long realised the implications of Gödel's theorem. The people to whom we must communicate look to us, who officially know everything about our industry, to comfort and reassure them. And we forget that we can only comfort them by addressing their emotional needs, not by demonstrating our chilling 'objectivity'. Let us try something completely new in communication. Instead of looking for incremental rules which will help us marginally differentiate the way we communicate about minor or major incidents, let us leapfrog across 'objectivity' to meaning and relevance. If we truly believe that nuclear energy is a good thing, this leap should not be difficult. Finally, if we as communicators are not prepared to be meaningful and relevant - not prepared to leapfrog beyond weasel terms like 'minor incident' - what does that say about the kinds of people we believe the nuclear community to be? Are nuclear people a group apart, divisible from the rest of the human race by their evil? In fact the nuclear community is a living, laughing, normal part of a whole society, and is moreover a good contributor to the technological progress that society demands. When we ourselves recognise this, we will start to communicate nuclear issues in the same language as the rest of society. We will start to speak plainly and convincingly, and our conviction will leapfrog our audience into being able to believe us.

  15. A general conservative extension theorem in process algebras with inequalities

    d' Argenio, P.R.; Verhoef, Chris

    1997-01-01

    We prove a general conservative extension theorem for transition system based process theories with easy-to-check and reasonable conditions. The core of this result is another general theorem which gives sufficient conditions for a system of operational rules and an extension of it in order to

  16. A note on the Pfaffian integration theorem

    Borodin, Alexei; Kanzieper, Eugene

    2007-01-01

    Two alternative, fairly compact proofs are presented of the Pfaffian integration theorem that surfaced in the recent studies of spectral properties of Ginibre's Orthogonal Ensemble. The first proof is based on a concept of the Fredholm Pfaffian; the second proof is purely linear algebraic. (fast track communication)

  17. On Callan's proof of the BPHZ theorem

    Lesniewski, A.

    1984-01-01

    The author gives an elementary proof of the BPHZ theorem in the case of the Euclidean $\lambda\varphi^4$ theory. The method of proof relies on a detailed analysis of the skeleton structure of graphs and estimates based on the Callan-Symanzik equations. (Auth.)

  18. Arthroscopic Labral Base Repair in the Hip: 5-Year Minimum Clinical Outcomes.

    Domb, Benjamin G; Yuen, Leslie C; Ortiz-Declet, Victor; Litrenta, Jody; Perets, Itay; Chen, Austin W

    2017-10-01

    Arthroscopic labral base repair (LBR) in the hip is a previously described technique designed to restore the native functional anatomy of the labrum by reproducing its seal against the femoral head. LBR has been shown to have good short-term outcomes. Hypothesis/Purpose: The purpose was to evaluate clinical outcomes of an LBR cohort with a minimum 5-year follow-up. It was hypothesized that patients who underwent LBR would continue to have significant improvement from their preoperative scores and maintain scores similar to their 2-year outcomes. Case series; Level of evidence, 4. Data for patients undergoing primary hip arthroscopic surgery with LBR from February 2008 to May 2011 with a minimum 5-year follow-up were prospectively collected and retrospectively reviewed. Patients with preoperative Tonnis osteoarthritis grade ≥2, previous hip conditions (slipped capital femoral epiphysis, avascular necrosis, Legg-Calvé-Perthes disease), severe dysplasia (lateral center-edge angle ...), or ... hip surgery were excluded. Statistical equivalence tests evaluated patient-reported outcomes (PROs) including the modified Harris Hip Score (mHHS), Non-Arthritic Hip Score (NAHS), Hip Outcome Score-Sport-Specific Subscale (HOS-SSS), visual analog scale (VAS) for pain, and patient satisfaction (0-10 scale; 10 = very satisfied). Of the 70 patients (74 hips) who met the inclusion and exclusion criteria, 60 (85.7%) patients (64 hips) were available at a minimum 5-year follow-up. All PRO scores significantly improved from preoperative values, with a mean follow-up of 67.8 ± 7.4 months (range, 60.0-89.7 months). The mean mHHS increased from 64.4 ± 13.8 to 85.3 ± 17.7 (P ...). ... hip arthroscopic surgery has yet to be determined; however, these midterm results demonstrate the rates of additional procedures (both secondary arthroscopic surgery and conversion to total hip arthroplasty) that may be necessary after 2 years.

  19. Bertrand's theorem and virial theorem in fractional classical mechanics

    Yu, Rui-Yan; Wang, Towe

    2017-09-01

    Fractional classical mechanics is the classical counterpart of fractional quantum mechanics. The central force problem in this theory is investigated. Bertrand's theorem is generalized, and the virial theorem is revisited, both in three spatial dimensions. In order to produce stable, closed, non-circular orbits, the inverse-square law and Hooke's law should be modified in fractional classical mechanics.
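As a point of reference, the classical (integer-order) statement that the abstract generalizes reads, in the usual normalization:

```latex
% Classical Bertrand theorem: the only central potentials for which every
% bounded orbit is closed are the Kepler and isotropic-oscillator ones,
V(r) \;=\; -\frac{k}{r}
\qquad\text{or}\qquad
V(r) \;=\; \tfrac{1}{2}\,k\,r^{2}, \qquad k>0.
```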

  20. Morley’s Trisector Theorem

    Coghetto Roland

    2015-06-01

    Morley’s trisector theorem states that “The points of intersection of the adjacent trisectors of the angles of any triangle are the vertices of an equilateral triangle” [10]. There are many proofs of Morley’s trisector theorem [12, 16, 9, 13, 8, 20, 3, 18]. We follow the proof given by A. Letac in [15].
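The theorem lends itself to a quick numerical check. The sketch below is a minimal illustration, not Letac's proof; the triangle coordinates are arbitrary choices. It intersects the adjacent trisectors of each side and confirms that the three resulting points form an equilateral triangle.

```python
import numpy as np

def trisector(P, Q, R):
    """Direction of the trisector at P of angle QPR lying closest to side PQ."""
    u, v = Q - P, R - P
    ang = np.arctan2(u[0] * v[1] - u[1] * v[0], np.dot(u, v))  # signed angle u -> v
    c, s = np.cos(ang / 3.0), np.sin(ang / 3.0)
    return np.array([c * u[0] - s * u[1], s * u[0] + c * u[1]])  # rotate u by ang/3

def meet(P, d1, Q, d2):
    """Intersection of the lines P + t*d1 and Q + s*d2."""
    t, _ = np.linalg.solve(np.column_stack([d1, -d2]), Q - P)
    return P + t * d1

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
# Adjacent trisectors of each side meet in one vertex of the Morley triangle.
M_AB = meet(A, trisector(A, B, C), B, trisector(B, A, C))
M_BC = meet(B, trisector(B, C, A), C, trisector(C, B, A))
M_CA = meet(C, trisector(C, A, B), A, trisector(A, C, B))
print([np.linalg.norm(p - q) for p, q in [(M_AB, M_BC), (M_BC, M_CA), (M_CA, M_AB)]])
# The three printed side lengths agree to machine precision.
```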

  1. Standardization and Confluence in Pure Lambda-Calculus Formalized for the Matita Theorem Prover

    Ferruccio Guidi

    2012-01-01

    We present a formalization of pure lambda-calculus for the Matita interactive theorem prover, including the proofs of two relevant results in reduction theory: the confluence theorem and the standardization theorem. The proof of the latter is based on a new approach recently introduced by Xi and refined by Kashima that, avoiding the notion of development and having a neat inductive structure, is particularly suited for formalization in theorem provers.

  2. A Decomposition Theorem for Finite Automata.

    Santa Coloma, Teresa L.; Tucci, Ralph P.

    1990-01-01

    Described is automata theory which is a branch of theoretical computer science. A decomposition theorem is presented that is easier than the Krohn-Rhodes theorem. Included are the definitions, the theorem, and a proof. (KR)

  3. Symmetric Informationally-Complete Quantum States as Analogues to Orthonormal Bases and Minimum-Uncertainty States

    D. Marcus Appleby

    2014-03-01

    Recently there has been much effort in the quantum information community to prove (or disprove) the existence of symmetric informationally complete (SIC) sets of quantum states in arbitrary finite dimension. This paper strengthens the urgency of this question by showing that if SIC-sets exist: (1) by a natural measure of orthonormality, they are as close to being an orthonormal basis for the space of density operators as possible; and (2) in prime dimensions, the standard construction for complete sets of mutually unbiased bases and Weyl-Heisenberg covariant SIC-sets are intimately related: the latter represent minimum uncertainty states for the former in the sense of Wootters and Sussman. Finally, we contribute to the question of existence by conjecturing a quadratic redundancy in the equations for Weyl-Heisenberg SIC-sets.

  4. Minimum Probability of Error-Based Equalization Algorithms for Fading Channels

    Janos Levendovszky

    2007-06-01

    Novel channel equalizer algorithms are introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithms are based on newly derived bounds on the probability of error (PE) and guarantee better performance than the traditional zero forcing (ZF) or minimum mean square error (MMSE) algorithms. The new equalization methods require channel state information, which is obtained by a fast adaptive channel identification algorithm. As a result, the combined convergence time needed for channel identification and PE minimization still remains smaller than the convergence time of traditional adaptive algorithms, yielding real-time equalization. The performance of the new algorithms is tested by extensive simulations on standard mobile channels.
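The abstract does not give the new PE-based equalizers in closed form, but the ZF and MMSE baselines it compares against are standard. A minimal sketch, assuming a known (hypothetical) channel impulse response h and unit-energy symbols; the tap count and noise variance are illustrative values:

```python
import numpy as np
from scipy.linalg import toeplitz

h = np.array([0.8, 0.5, 0.3])      # hypothetical multipath impulse response
L, sigma2 = 15, 0.05               # equalizer taps; noise variance (assumption)

# Convolution matrix: C @ w equals np.convolve(h, w), shape (L+len(h)-1, L)
C = toeplitz(np.r_[h, np.zeros(L - 1)], np.zeros(L))
e = np.zeros(L + len(h) - 1)
e[(L + len(h) - 1) // 2] = 1.0     # target combined response: delayed impulse

w_zf = np.linalg.lstsq(C, e, rcond=None)[0]                      # zero forcing
w_mmse = np.linalg.solve(C.T @ C + sigma2 * np.eye(L), C.T @ e)  # MMSE
print(np.round(np.convolve(h, w_mmse), 3))  # should approximate the impulse
```

The MMSE design differs from ZF only by the noise-variance regularizer, which is what buys robustness on noisy channels.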

  5. Eigenspace-based minimum variance adaptive beamformer combined with delay multiply and sum: experimental study

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-02-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was used to address the shortcomings of DAS, providing higher image quality. However, the resolution improvement is not sufficient compared to eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer is combined with DMAS algebra, called EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is experimentally evaluated, and the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces the sidelobes by about 365%, 221% and 40% compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.
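As a rough illustration of the DMAS core that EIBMV-DMAS builds on (this is not the paper's EIBMV-DMAS; the array data here are synthetic and assumed to be already delay-corrected):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)
pulse = np.exp(-((t - 0.5) ** 2) / 2e-4) * np.sin(2 * np.pi * 40 * t)
channels = pulse + 0.4 * rng.standard_normal((8, t.size))  # 8 pre-delayed channels

das = channels.mean(axis=0)  # delay-and-sum: coherent average

# Delay-multiply-and-sum: signed square roots of all pairwise channel products
dmas = np.zeros(t.size)
for i in range(channels.shape[0]):
    for j in range(i + 1, channels.shape[0]):
        p = channels[i] * channels[j]
        dmas += np.sign(p) * np.sqrt(np.abs(p))

print(das.max() / das.std(), dmas.max() / dmas.std())  # crude contrast comparison
```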

  6. Validation of a model-based measurement of the minimum insert thickness of knee prostheses: a retrieval study.

    van IJsseldijk, E A; Harman, M K; Luetzner, J; Valstar, E R; Stoel, B C; Nelissen, R G H H; Kaptein, B L

    2014-10-01

    Wear of polyethylene inserts plays an important role in failure of total knee replacement and can be monitored in vivo by measuring the minimum joint space width in anteroposterior radiographs. The objective of this retrospective cross-sectional study was to compare the accuracy and precision of a new model-based method with the conventional method by analysing the difference between the minimum joint space width measurements and the actual thickness of retrieved polyethylene tibial inserts. Before revision, the minimum joint space width values and their locations on the insert were measured in 15 fully weight-bearing radiographs. These measurements were compared with the actual minimum thickness values and locations of the retrieved tibial inserts after revision. The mean error in the model-based minimum joint space width measurement was significantly smaller than the conventional method for medial condyles (0.50 vs 0.94 mm, p model-based measurements was less than 10 mm in the medial direction in 12 cases and less in the lateral direction in 13 cases. The model-based minimum joint space width measurement method is more accurate than the conventional measurement with the same precision. Cite this article: Bone Joint Res 2014;3:289-96.

  7. Pauli and The Spin-Statistics Theorem

    Duck, Ian; Sudarshan, E.C.G.

    1998-03-01

    This book makes broadly accessible an understandable proof of the infamous spin-statistics theorem. This widely known but little-understood theorem is intended to explain the fact that electrons obey the Pauli exclusion principle. This fact, in turn, explains the periodic table of the elements and their chemical properties. Therefore, this one simply stated fact is responsible for many of the principal features of our universe, from chemistry to solid state physics to nuclear physics to the life cycle of stars. In spite of its fundamental importance, it is only a slight exaggeration to say that 'everyone knows the spin-statistics theorem, but no one understands it'. This book simplifies and clarifies the formal statements of the theorem, and also corrects the invariably flawed intuitive explanations which are frequently put forward. The book will be of interest to many practising physicists in all fields who have long been frustrated by the impenetrable discussions on the subject which have been available until now. It will also be accessible to students at an advanced undergraduate level as an introduction to modern physics based directly on the classical writings of the founders, including Pauli, Dirac, Heisenberg, Einstein and many others.

  8. Linear-Array Photoacoustic Imaging Using Minimum Variance-Based Delay Multiply and Sum Adaptive Beamforming Algorithm

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2017-01-01

    In photoacoustic imaging (PA), the Delay-and-Sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, namely Delay-Multiply-and-Sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a novel beamformer is introduced using Minimum Variance (MV) adaptive beamforming combined with DMAS, so-called Minimum Variance-Based D...

  9. MVT a most valuable theorem

    Smorynski, Craig

    2017-01-01

    This book is about the rise and supposed fall of the mean value theorem. It discusses the evolution of the theorem and the concepts behind it, how the theorem relates to other fundamental results in calculus, and modern re-evaluations of its role in the standard calculus course. The mean value theorem is one of the central results of calculus. It was called “the fundamental theorem of the differential calculus” because of its power to provide simple and rigorous proofs of basic results encountered in a first-year course in calculus. In mathematical terms, the book is a thorough treatment of this theorem and some related results in the field; in historical terms, it is not a history of calculus or mathematics, but a case study in both. MVT: A Most Valuable Theorem is aimed at those who teach calculus, especially those setting out to do so for the first time. It is also accessible to anyone who has finished the first semester of the standard course in the subject and will be of interest to undergraduate mat...
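For completeness, the theorem the book revolves around:

```latex
% Mean value theorem: for f continuous on [a,b] and differentiable on (a,b),
% there exists c in (a,b) with
f'(c) \;=\; \frac{f(b) - f(a)}{b - a}
```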

  10. Strong versions of Bell's theorem

    Stapp, H.P.

    1994-01-01

    Technical aspects of a recently constructed strong version of Bell's theorem are discussed. The theorem assumes neither hidden variables nor factorization, and neither determinism nor counterfactual definiteness. It deals directly with logical connections. Hence its relationship with modal logic needs to be described. It is shown that the proof can be embedded in an orthodox modal logic, and hence its compatibility with modal logic assured, but that this embedding weakens the theorem by introducing as added assumptions the conventionalities of the particular modal logic that is adopted. This weakening is avoided in the recent proof by using directly the set-theoretic conditions entailed by the locality assumption

  11. Green's theorem and Gorenstein sequences

    Ahn, Jeaman; Migliore, Juan C.; Shin, Yong-Su

    2016-01-01

    We study consequences, for a standard graded algebra, of extremal behavior in Green's Hyperplane Restriction Theorem. First, we extend his Theorem 4 from the case of a plane curve to the case of a hypersurface in a linear space. Second, assuming a certain Lefschetz condition, we give a connection to extremal behavior in Macaulay's theorem. We apply these results to show that $(1,19,17,19,1)$ is not a Gorenstein sequence, and as a result we classify the sequences of the form $(1,a,a-2,a,1)$ th...

  12. N-Dimensional Fractional Lagrange's Inversion Theorem

    F. A. Abd El-Salam

    2013-01-01

    Using the Riemann-Liouville fractional differential operator, a fractional extension of the Lagrange inversion theorem and related formulas are developed. The required basic definitions, lemmas, and theorems in the fractional calculus are presented. A fractional form of Lagrange's expansion for one implicitly defined independent variable is obtained. Then, a fractional version of Lagrange's expansion in more than one unknown function is generalized. For extending the treatment to higher dimensions, some relevant vector and tensor definitions and notations are presented. A fractional Taylor expansion of a function of N-dimensional polyadics is derived. A fractional N-dimensional Lagrange inversion theorem is proved.

  13. Complex integration and Cauchy's theorem

    Watson, GN

    2012-01-01

    This brief monograph by one of the great mathematicians of the early twentieth century offers a single-volume compilation of propositions employed in proofs of Cauchy's theorem. Developing an arithmetical basis that avoids geometrical intuitions, Watson also provides a brief account of the various applications of the theorem to the evaluation of definite integrals.Author G. N. Watson begins by reviewing various propositions of Poincaré's Analysis Situs, upon which proof of the theorem's most general form depends. Subsequent chapters examine the calculus of residues, calculus optimization, the

  14. Applications of square-related theorems

    Srinivasan, V. K.

    2014-04-01

    The square centre of a given square is the point of intersection of its two diagonals. When two squares of different side lengths share the same square centre, there are in general four diagonals that go through the same square centre. The Two Squares Theorem developed in this paper summarizes some nice theoretical conclusions that can be obtained when two squares of different side lengths share the same square centre. These results provide the theoretical basis for two of the constructions given in the book of H.S. Hall and F.H. Stevens, 'A Shorter School Geometry, Part 1, Metric Edition'. On page 134 of this book, the authors present, in exercise 4, a practical construction which leads to a verification of the Pythagorean theorem. Subsequently, in Theorems 29 and 30, the authors present the standard proofs of the Pythagorean theorem and its converse. On page 140, the authors present, in exercise 15, what amounts to a geometric construction whose verification involves a simple algebraic identity. Both constructions are of great importance and can be replicated by using the standard equipment provided in a 'geometry toolbox' carried by students in high schools. The author hopes that the results proved in this paper, in conjunction with the two constructions from the above-mentioned book, will give high school students an appreciation of the celebrated theorem of Pythagoras. The diagrams that accompany this document are based on the free software GeoGebra. The author formally acknowledges his indebtedness to the creators of this free software at the end of this document.

  15. Theorem of comparative sensitivity of fibre sensors

    Belovolov, M. I.; Paramonov, V. M.; Belovolov, M. M.

    2017-12-01

    We report an analysis of sensitivity of fibre sensors of physical quantities based on different types of interferometers. We formulate and prove the following theorem: under the time-dependent external physical perturbations at nonzero frequencies (i.e., except the static and low-frequency ones) on the sensitive arms of an interferometer in the form of multiturn elements (coils), there exist such lengths L of the measuring arms of the fibre interferometers at which the sensitivity of sensors based on the Sagnac fibre interferometers can be comparable with the sensitivity of sensors based on Michelson, Mach - Zehnder, or Fabry - Perot fibre interferometers, as well as exceed it under similar other conditions (similar-type perturbations, similar arm lengths and single-mode fibre types). The consequences that follow from the theorem, important for practical implementation of arrays of fibre sensors for measurement purposes and the devices with stable metrological properties, are discussed.

  16. Analogy to Derive an Extended Pythagorean Theorem to "N" Dimensions

    Acosta-Robledo J.U.

    2012-01-01

    This article demonstrates that it is possible to extend the Pythagorean Theorem to "N" dimensions. The demonstration rests mainly on linear algebra, especially the vector product in "N" dimensions.
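One common way of stating an N-dimensional Pythagorean identity, for mutually orthogonal "legs" (a generic formulation, not necessarily the article's vector-product route), can be verified numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))  # orthonormal columns
lengths = rng.uniform(1.0, 3.0, N)
legs = Q * lengths                    # N mutually orthogonal "legs" in R^N

d = legs.sum(axis=1)                  # diagonal spanned by the legs
print(np.dot(d, d), np.sum(lengths ** 2))  # the two sides of the identity agree
```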

  17. Phase synchronization based minimum spanning trees for analysis of financial time series with nonlinear correlations

    Radhakrishnan, Srinivasan; Duvvuru, Arjun; Sultornsanee, Sivarit; Kamarthi, Sagar

    2016-02-01

    The cross correlation coefficient has been widely applied in financial time series analysis, specifically for understanding chaotic behaviour in terms of stock price and index movements during crisis periods. To better understand time series correlation dynamics, the cross correlation matrices are represented as networks, in which a node stands for an individual time series and a link indicates cross correlation between a pair of nodes. These networks are converted into simpler trees using different schemes. In this context, Minimum Spanning Trees (MST) are the most favoured tree structures because of their ability to preserve all the nodes and thereby retain essential information imbued in the network. Although cross correlations underlying MSTs capture essential information, they do not faithfully capture dynamic behaviour embedded in the time series data of financial systems, because cross correlation is a reliable measure only if the relationship between the time series is linear. To address the issue, this work investigates a new measure called phase synchronization (PS) for establishing correlations among different time series which relate to one another, linearly or nonlinearly. In this approach the strength of a link between a pair of time series (nodes) is determined by the level of phase synchronization between them. We compare the performance of phase synchronization based MST with cross correlation based MST along selected network measures across a temporal frame that includes economically good and crisis periods. We observe agreement in the directionality of the results across these two methods. They show similar trends, upward or downward, when comparing selected network measures. Though both methods give similar trends, the phase synchronization based MST is a more reliable representation of the dynamic behaviour of financial systems than the cross correlation based MST because of the former's ability to quantify nonlinear relationships among time series.
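A minimal sketch of the pipeline the abstract describes, under simplifying assumptions (Hilbert-transform phase-locking value as the PS measure, synthetic sinusoids standing in for financial series):

```python
import numpy as np
from scipy.signal import hilbert
from scipy.sparse.csgraph import minimum_spanning_tree

def phase_sync(x, y):
    """Phase-locking value between two equal-length series (1 = perfect sync)."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.exp(1j * dphi).mean())

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
series = [np.sin(2 * np.pi * (1 + 0.1 * k) * t) + 0.3 * rng.standard_normal(t.size)
          for k in range(6)]

n = len(series)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = 1.0 - phase_sync(series[i], series[j])  # strong sync -> short edge

mst = minimum_spanning_tree(dist)   # sparse matrix holding the tree edges
print(np.round(mst.toarray(), 3))
```

Converting synchronization to distance as 1 − PLV makes strongly synchronized series sit on short MST edges.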

  18. Keller’s theorem revisited

    Ortiz, Guillermo P.; Mochán, W. Luis

    2018-02-01

    Keller’s theorem relates the components of the macroscopic dielectric response of a binary two-dimensional composite system with those of the reciprocal system obtained by interchanging its components. We present a derivation of the theorem that, unlike previous ones, does not employ the common assumption that the response function relates an irrotational to a solenoidal field and that is valid for dispersive and dissipative anisotropic systems. We show that the usual statement of Keller’s theorem in terms of the conductivity is strictly valid only at zero frequency and we obtain a new generalization for finite frequencies. We develop applications of the theorem to the study of the optical properties of systems such as superlattices, 2D isotropic and anisotropic metamaterials and random media, to test the accuracy of theories and computational schemes, and to increase the accuracy of approximate calculations.
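For orientation, the usual zero-frequency form of the theorem (the form the abstract notes is strictly valid only at zero frequency) for a two-component, two-dimensional composite:

```latex
% Keller's reciprocal relation: interchanging the two components inverts the
% macroscopic conductivity response,
\sigma^{\text{eff}}_{xx}(\sigma_1,\sigma_2)\;
\sigma^{\text{eff}}_{yy}(\sigma_2,\sigma_1) \;=\; \sigma_1\,\sigma_2
```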

  19. A non linear ergodic theorem and application to a theorem of A. Pazy

    Djafari Rouhani, B.

    1989-07-01

    We prove that if $(y_n)_{n\ge 1}$ is a sequence in a real Hilbert space $H$ such that for every non-negative integer $m$ the sequence $(\|\sum_{l=0}^{m} y_{i+l}\|)_{i\ge 1}$ is non-increasing, then $s_n = \frac{1}{n}\sum_{i=1}^{n} y_i$ converges strongly in $H$ to the element of minimum norm in the closed convex hull of the sequence $(y_n)_{n\ge 1}$. We deduce a direct proof of a result containing a theorem of A. Pazy. (author). 27 refs

  20. Adiabatic theorem and spectral concentration

    Nenciu, G.

    1981-01-01

    The spectral concentration of arbitrary order, for the Stark effect, is proved to exist for a large class of Hamiltonians appearing in nonrelativistic and relativistic quantum mechanics. The results are consequences of an abstract theorem about the spectral concentration for self-adjoint operators. A general form of the adiabatic theorem of quantum mechanics, generalizing an earlier result of the author as well as some results of Lenard, is also proved.

  1. Markov's theorem and algorithmically non-recognizable combinatorial manifolds

    Shtan'ko, M A

    2004-01-01

    We prove the theorem of Markov on the existence of an algorithmically non-recognizable combinatorial n-dimensional manifold for every n≥4. We construct for the first time a concrete manifold which is algorithmically non-recognizable. A strengthened form of Markov's theorem is proved using the combinatorial methods of regular neighbourhoods and handle theory. The proofs coincide for all n≥4. We use Borisov's group with an insoluble word problem; it has two generators and twelve relations. The use of this group forms the base for proving the strengthened form of Markov's theorem.

  2. SU-F-T-78: Minimum Data Set of Measurements for TG 71 Based Electron Monitor-Unit Calculations

    Xu, H; Guerrero, M; Prado, K; Yi, B [University of Maryland School of Medicine, Baltimore, MD (United States)]

    2016-01-01

    Purpose: Building up a TG-71 based electron monitor-unit (MU) calculation protocol usually involves massive measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-Series linear accelerators, the complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm up to applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors and effective SSDs that were then converted to air-gap factors for SSDs of 99–110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The “missing” data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MUs using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements for each electron energy, and more PDDs and fewer point measurements are generally needed as energy increases. Using only <50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those with the complete data set. The PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: Data set measurement for TG-71 electron MU calculations can be minimized based on the knowledge of how each dosimetric quantity depends on various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.
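A toy illustration of the fitting step described above (hypothetical beam data; the cutout sizes and factor values are made up for the example, not taken from the abstract):

```python
import numpy as np

# Measured cutout factors for a hypothetical 12 MeV beam, 10x10 cm applicator
side = np.array([3.0, 6.0, 10.0])         # measured cutout side lengths (cm)
factor = np.array([0.962, 0.991, 1.000])  # example output factors (assumed)

coeff = np.polyfit(side, factor, 2)       # low-order fit over the measured subset
missing = np.array([4.0, 5.0, 8.0])       # sizes excluded from the minimum data set
print(np.round(np.polyval(coeff, missing), 3))  # approximated "missing" factors
```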

  4. The Second Noether Theorem on Time Scales

    Agnieszka B. Malinowska

    2013-01-01

    We extend the second Noether theorem to variational problems on time scales. As corollaries we obtain the classical second Noether theorem, the second Noether theorem for the h-calculus and the second Noether theorem for the q-calculus.

  5. Factor and Remainder Theorems: An Appreciation

    Weiss, Michael

    2016-01-01

    The high school curriculum sometimes seems like a disconnected collection of topics and techniques. Theorems like the factor theorem and the remainder theorem can play an important role as a conceptual "glue" that holds the curriculum together. These two theorems establish the connection between the factors of a polynomial, the solutions…
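A small worked example of the two theorems (a generic illustration, not from the article): the remainder of p(x) divided by (x − a) is p(a), and (x − a) is a factor exactly when p(a) = 0.

```python
def horner(coeffs, a):
    """Evaluate p(a) by Horner's rule; coeffs are listed highest power first."""
    r = 0
    for c in coeffs:
        r = r * a + c
    return r

p = [1, -6, 11, -6]   # x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
print([horner(p, a) for a in (1, 2, 3, 4)])   # -> [0, 0, 0, 6]
# Zero remainders at 1, 2, 3 confirm the factors; p(4) = 6 is the remainder
# left by dividing p(x) by (x - 4).
```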

  6. MISTICA: Minimum Spanning Tree-Based Coarse Image Alignment for Microscopy Image Sequences.

    Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T

    2016-11-01

    Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion, which we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to reorder the images into shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate error propagation. The anchor image is selected automatically, making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries.

  7. A ROBUST CLUSTER HEAD SELECTION BASED ON NEIGHBORHOOD CONTRIBUTION AND AVERAGE MINIMUM POWER FOR MANETs

    S.Balaji

    2015-06-01

    A mobile ad hoc network is an instantaneous wireless network that is dynamic in nature. It supports single-hop and multihop communication. In this infrastructure-less network, clustering is a significant model to maintain the topology of the network. The clustering process includes different phases like cluster formation, cluster head selection, and cluster maintenance. Choosing the cluster head is important, as the stability of the network depends on a well-organized and resourceful cluster head. When a node has an increased number of neighbors it can act as a link between the neighbor nodes, which further reduces the number of hops in multihop communication. Promisingly, the node with more neighbors should also have enough energy available to provide stability in the network. Hence these aspects demand attention. In weight-based cluster head selection, closeness and the average minimum power required are considered for purging the ineligible nodes. The optimal set of nodes selected after purging will compete to become cluster head. The node with the maximum weight is selected as cluster head. A mathematical formulation is developed to show that the proposed method provides the optimum result. It is also suggested that the weight factor in calculating the node weight should give precise importance to energy and node stability.

  8. Patch-based image segmentation of satellite imagery using minimum spanning tree construction

    Skurikhin, Alexei N [Los Alamos National Laboratory

    2010-01-01

    We present a method for hierarchical image segmentation and feature extraction. This method builds upon the combination of the detection of image spectral discontinuities using Canny edge detection and the image Laplacian, followed by the construction of a hierarchy of segmented images of successively reduced levels of detail. These images are represented as sets of polygonized pixel patches (polygons) attributed with spectral and structural characteristics. This hierarchy forms the basis for object-oriented image analysis. To build a fine level-of-detail representation of the original image, seed partitions (polygons) are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of the detected spectral discontinuities that form a network of constraints for the Delaunay triangulation. A polygonized image is represented as a spatial network in the form of a graph with vertices which correspond to the polygonal partitions and graph edges reflecting pairwise partition relations. Image graph partitioning is based on iterative graph contraction using Boruvka's Minimum Spanning Tree algorithm. An important characteristic of the approach is that the agglomeration of partitions is constrained by the detected spectral discontinuities; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects.

  9. A Geometrical Approach to Bell's Theorem

    Rubincam, David Parry

    2000-01-01

    Bell's theorem can be proved through simple geometrical reasoning, without the need for the Psi function, probability distributions, or calculus. The proof is based on N. David Mermin's explication of the Einstein-Podolsky-Rosen-Bohm experiment, which involves Stern-Gerlach detectors which flash red or green lights when detecting spin-up or spin-down. The statistics of local hidden variable theories for this experiment can be arranged in colored strips from which simple inequalities can be deduced. These inequalities lead to a demonstration of Bell's theorem. Moreover, all local hidden variable theories can be graphed in such a way as to enclose their statistics in a pyramid, with the quantum-mechanical result lying a finite distance beneath the base of the pyramid.

  10. A no-go theorem for a two-dimensional self-correcting quantum memory based on stabilizer codes

    Bravyi, Sergey; Terhal, Barbara

    2009-01-01

    We study properties of stabilizer codes that permit a local description on a regular D-dimensional lattice. Specifically, we assume that the stabilizer group of a code (the gauge group for subsystem codes) can be generated by local Pauli operators such that the support of any generator is bounded by a hypercube of size O(1). Our first result concerns the optimal scaling of the distance d with the linear size of the lattice L. We prove an upper bound d = O(L^{D-1}), which is tight for D=1, 2. This bound applies to both subspace and subsystem stabilizer codes. Secondly, we analyze the suitability of stabilizer codes for building a self-correcting quantum memory. Any stabilizer code with geometrically local generators can be naturally transformed to a local Hamiltonian penalizing states that violate the stabilizer condition. A degenerate ground state of this Hamiltonian corresponds to the logical subspace of the code. We prove that for D=1, 2, different logical states can be mapped into each other by a sequence of single-qubit Pauli errors such that the energy of all intermediate states is upper bounded by a constant independent of the lattice size L. The same result holds if there are unused logical qubits that are treated as 'gauge qubits'. It demonstrates that a self-correcting quantum memory cannot be built using stabilizer codes in dimensions D=1, 2. This result is in sharp contrast with the existence of a classical self-correcting memory in the form of a two-dimensional (2D) ferromagnet. Our results leave open the possibility for a self-correcting quantum memory based on 2D subsystem codes or on 3D subspace or subsystem codes.

  11. Minimum cost solution of wind–photovoltaic based stand-alone power systems for remote consumers

    Kaldellis, J.K.; Zafirakis, D.; Kavadias, K.

    2012-01-01

    Renewable energy sources (RES) based stand-alone systems employing either wind or solar power and energy storage comprise a reliable energy alternative, on top of conventional diesel-electric generator sets, commonly used by remote consumers. However, such systems usually imply the need for oversizing and considerable energy storage requirements leading to relatively high costs. On the other hand, hybrid configurations that may exploit both wind and solar potential of a given area may considerably reduce energy storage capacity and improve the economic performance of the system. In this context, an integrated techno-economic methodology for the evaluation of hybrid wind–photovoltaic stand-alone power systems is currently developed, aiming at the designation of optimum configurations for a typical remote consumer, using economic performance criteria. For the problem investigation, the developed evaluation model is applied to four representative areas of the Greek territory with different wind potential characteristics in order to obtain optimum configurations on the basis of minimum initial investment, 10-year and 20-year total cost. According to the results obtained, the proposed solution is favorably compared with all other stand-alone energy alternatives, reflecting the ability of hybrid systems to adjust even in areas where the local RES potential is not necessarily of high quality. - Highlights: ► Wind- and PV-stand alone systems often imply use of extreme battery capacity. ► Hybrid wind–PV systems may reduce energy storage requirements and associated costs. ► An optimization methodology is developed, based on economic performance criteria. ► Methodology is applied to four Greek regions of different wind potential. ► Results obtained reflect the hybrid solution's advantages over other alternatives.

  12. A THEOREM ON CENTRAL VELOCITY DISPERSIONS

    An, Jin H.; Evans, N. Wyn

    2009-01-01

    It is shown that, if the tracer population is supported by a spherical dark halo with a core or a cusp diverging more slowly than that of a singular isothermal sphere (SIS), the logarithmic cusp slope γ of the tracers must be given exactly by γ = 2β, where β is their velocity anisotropy parameter at the center unless the same tracers are dynamically cold at the center. If the halo cusp diverges faster than that of the SIS, the velocity dispersion of the tracers must diverge at the center too. In particular, if the logarithmic halo cusp slope is larger than two, the diverging velocity dispersion also traces the behavior of the potential. The implication of our theorem on projected quantities is also discussed. We argue that our theorem should be understood as a warning against interpreting results based on simplifying assumptions such as isotropy and spherical symmetry.

  13. Proofs and generalizations of the pythagorean theorem

    Lialda B. Cavalcanti

    2011-01-01

    This article explores a topic developed by a group of researchers of the Science and Technology Teaching School of the Instituto Federal de Pernambuco, Brazil (IFPE), in support of the development of the Mathematics Practical and Teaching Laboratory of the distance-learning teaching licensure, financed by the Universidade Aberta do Brasil. In this article, we describe the peculiarities present in proofs of the Pythagorean theorem with the purpose of illustrating some of these methods. The selection of these peculiarities was founded and based on the comparison of areas by means of the superimposition of geometrical shapes, and used several different class resources. Some generalizations of this important theorem in mathematical problem-solving are also shown.

  14. Determining the global minimum of Higgs potentials via Groebner bases - applied to the NMSSM

    Maniatis, M.; Manteuffel, A. von; Nachtmann, O. [Institut fuer Theoretische Physik, Heidelberg (Germany)]

    2007-01-01

    Determining the global minimum of Higgs potentials with several Higgs fields like the next-to-minimal supersymmetric extension of the standard model (NMSSM) is a non-trivial task already at the tree level. The global minimum of a Higgs potential can be found from the set of all its stationary points defined by a multivariate polynomial system of equations. We introduce here the algebraic Groebner basis approach to solve this system of equations. We apply the method to the NMSSM with CP-conserving as well as CP-violating parameters. The results reveal an interesting stationary-point structure of the potential. Requiring the global minimum to give the electroweak symmetry breaking observed in Nature excludes large parts of the parameter space. (orig.)
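A scaled-down sketch of the Gröbner-basis workflow on a toy two-field potential (an illustration only; the NMSSM potential and its parameters are far richer), using sympy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
# Toy two-field potential standing in for a multi-Higgs potential (illustration)
V = -x**2 - y**2 + (x**2 + y**2)**2 + sp.Rational(1, 2) * x**2 * y**2

grad = [sp.diff(V, v) for v in (x, y)]
G = sp.groebner(grad, x, y, order='lex')   # triangularize the stationarity system
sols = sp.solve(list(G), [x, y], dict=True)
best = min(sols, key=lambda s: V.subs(s))  # global minimum among stationary points
print(best, V.subs(best))
```

Triangularizing the stationarity equations with a lexicographic Gröbner basis makes the full set of stationary points, and hence the global minimum, accessible by back-substitution.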

  16. A Minimum Fuel Based Estimator for Maneuver and Natural Dynamics Reconstruction

    Lubey, D.; Scheeres, D.

    2013-09-01

    The vast and growing population of objects in Earth orbit (active and defunct spacecraft, orbital debris, etc.) offers many unique challenges when it comes to tracking these objects and associating the resulting observations. Complicating these challenges are the inaccurate natural dynamical models of these objects, the active maneuvers of spacecraft that deviate them from their ballistic trajectories, and the fact that spacecraft are tracked and operated by separate agencies. Maneuver detection and reconstruction algorithms can help with each of these issues by estimating mismodeled and unmodeled dynamics through indirect observation of spacecraft. It also helps to verify the associations made by an object correlation algorithm or aid in making those associations, which is essential when tracking objects in orbit. The algorithm developed in this study applies an Optimal Control Problem (OCP) Distance Metric approach to the problems of Maneuver Reconstruction and Dynamics Estimation. This was first developed by Holzinger, Scheeres, and Alfriend (2011), with a subsequent study by Singh, Horwood, and Poore (2012). This method estimates the minimum fuel control policy rather than the state as a typical Kalman Filter would. This difference ensures that the states are connected through a given dynamical model and allows for automatic covariance manipulation, which can help to prevent filter saturation. Using a string of measurements (either verified or hypothesized to correlate with one another), the algorithm outputs a corresponding string of adjoint and state estimates with associated noise. Post-processing techniques are implemented, which when applied to the adjoint estimates can remove noise and expose unmodeled maneuvers and mismodeled natural dynamics. Specifically, the estimated controls are used to determine spacecraft dependent accelerations (atmospheric drag and solar radiation pressure) using an adapted form of the Optimal Control based natural dynamics

  17. Generalized Optical Theorem Detection in Random and Complex Media

    Tu, Jing

    The problem of detecting changes of a medium or environment based on active, transmit-plus-receive wave sensor data is at the heart of many important applications including radar, surveillance, remote sensing, nondestructive testing, and cancer detection. This is a challenging problem because both the change or target and the surrounding background medium are in general unknown and can be quite complex. This Ph.D. dissertation presents a new wave physics-based approach for the detection of targets or changes in rather arbitrary backgrounds. The proposed methodology is rooted on a fundamental result of wave theory called the optical theorem, which gives real physical energy meaning to the statistics used for detection. This dissertation is composed of two main parts. The first part significantly expands the theory and understanding of the optical theorem for arbitrary probing fields and arbitrary media including nonreciprocal media, active media, as well as time-varying and nonlinear scatterers. The proposed formalism addresses both scalar and full vector electromagnetic fields. The second contribution of this dissertation is the application of the optical theorem to change detection with particular emphasis on random, complex, and active media, including single frequency probing fields and broadband probing fields. The first part of this work focuses on the generalization of the existing theoretical repertoire and interpretation of the scalar and electromagnetic optical theorem. Several fundamental generalizations of the optical theorem are developed. A new theory is developed for the optical theorem for scalar fields in nonhomogeneous media which can be bounded or unbounded. The bounded media context is essential for applications such as intrusion detection and surveillance in enclosed environments such as indoor facilities, caves, tunnels, as well as for nondestructive testing and communication systems based on wave-guiding structures. The developed scalar

  18. Hypnosis control based on the minimum concentration of anesthetic drug for maintaining appropriate hypnosis.

    Furutani, Eiko; Nishigaki, Yuki; Kanda, Chiaki; Takeda, Toshihiro; Shirakami, Gotaro

    2013-01-01

    This paper proposes a novel hypnosis control method using the Auditory Evoked Potential Index (aepEX) as a hypnosis index. In order to avoid the side effects of an anesthetic drug, it is desirable to reduce the amount of anesthetic administered during surgery. For this purpose, many studies of hypnosis control systems have been carried out. Most of them use the Bispectral Index (BIS), another hypnosis index, but it has the problems of dependence on anesthetic drugs and nonsmooth change near some particular values. On the other hand, aepEX distinguishes clearly between patient consciousness and unconsciousness and is independent of the anesthetic drug. The control method proposed in this paper consists of two elements: estimating the minimum effect-site concentration for maintaining appropriate hypnosis, and adjusting the infusion rate of an anesthetic drug, propofol, using model predictive control. The minimum effect-site concentration is estimated utilizing the properties of aepEX pharmacodynamics. The infusion rate of propofol is adjusted so that the effect-site concentration of propofol is kept near and always above the minimum effect-site concentration. Simulation results of hypnosis control using the proposed method show that the minimum concentration can be estimated appropriately and that the proposed control method can maintain hypnosis adequately while reducing the total infusion amount of propofol.

  19. An Improved Minimum Error Interpolator of CNC for General Curves Based on FPGA

    Jiye HUANG

    2014-05-01

    This paper presents an improved minimum error interpolation algorithm for general curve generation in computer numerical control (CNC). Compared with conventional interpolation algorithms such as the By-Point Comparison method, the Minimum-Error method and the Digital Differential Analyzer (DDA) method, the proposed improved Minimum-Error interpolation algorithm strikes a balance between accuracy and efficiency. The new algorithm is applicable to linear, circular, elliptical and parabolic curves. The proposed algorithm is realized on a field programmable gate array (FPGA) in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe. The algorithm has the following advantages: firstly, the maximum interpolation error is only half of the minimum step size; secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, which shows that it is highly suited for real-time applications.
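For readers unfamiliar with minimum-error interpolation, the line case reduces to the classical midpoint/Bresenham update, sketched below (first octant only; this is the textbook variant, not the paper's FPGA implementation):

```python
def min_error_line(x1, y1):
    """Minimum-error integer interpolation of a line from (0, 0) to (x1, y1),
    assuming the first octant (0 <= y1 <= x1). Error stays within half a step."""
    points, y, d = [(0, 0)], 0, 2 * y1 - x1   # midpoint-form decision variable
    for x in range(1, x1 + 1):
        if d > 0:          # midpoint lies below the ideal line: step up
            y += 1
            d -= 2 * x1
        d += 2 * y1
        points.append((x, y))
    return points

print(min_error_line(10, 4))   # each y is the ideal value rounded to an integer
```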

  20. Joint probability distributions and fluctuation theorems

    García-García, Reinaldo; Kolton, Alejandro B; Domínguez, Daniel; Lecomte, Vivien

    2012-01-01

    We derive various exact results for Markovian systems that spontaneously relax to a non-equilibrium steady state by using joint probability distribution symmetries of different entropy production decompositions. The analytical approach is applied to diverse problems such as the description of the fluctuations induced by experimental errors, for unveiling symmetries of correlation functions appearing in fluctuation–dissipation relations recently generalized to non-equilibrium steady states, and also for mapping averages between different trajectory-based dynamical ensembles. Many known fluctuation theorems arise as special instances of our approach for particular twofold decompositions of the total entropy production. As a complement, we also briefly review and synthesize the variety of fluctuation theorems applying to stochastic dynamics of both continuous systems described by a Langevin dynamics and discrete systems obeying a Markov dynamics, emphasizing how these results emerge from distinct symmetries of the dynamical entropy of the trajectory followed by the system. For Langevin dynamics, we endow the 'dual dynamics' with a physical meaning, and for Markov systems we show how the fluctuation theorems translate into symmetries of modified evolution operators

  1. Preservation theorems on finite structures

    Hebert, M.

    1994-09-01

    This paper concerns classical preservation results applied to finite structures. We consider binary relations for which a strong form of preservation theorem (called strong interpolation) exists in the usual case. This includes most classical cases: embeddings, extensions, homomorphisms into and onto, sandwiches, etc. We establish necessary and sufficient syntactic conditions for the preservation theorems for sentences and for theories to hold in the restricted context of finite structures. We deduce that for all relations above, the restricted theorem for theories holds provided the language is finite. For the sentences the restricted version fails in most cases; in fact the 'homomorphism into' case seems to be the only possible one, but the efforts to show that have failed. We hope our results may help to solve this frustrating problem; in the meantime, they are used to put a lower bound on the level of complexity of potential counterexamples. (author). 8 refs

  2. Absolute determination of zero-energy phase shifts for multiparticle single-channel scattering: Generalized Levinson theorem

    Rosenberg, L.; Spruch, L.

    1996-01-01

    Levinson's theorem relates the zero-energy phase shift δ for potential scattering in a given partial wave l, by a spherically symmetric potential that falls off sufficiently rapidly, to the number of bound states of that l supported by the potential. An extension of this theorem is presented that applies to single-channel scattering by a compound system initially in its ground state. As suggested by Swan [Proc. R. Soc. London Ser. A 228, 10 (1955)], the extended theorem differs from that derived for potential scattering; even in the absence of composite bound states δ may differ from zero as a consequence of the Pauli principle. The derivation given here is based on the introduction of a continuous auxiliary 'length phase' η, defined modulo π for l=0 by expressing the scattering length as A = a cot η, where a is a characteristic length of the target. Application of the minimum principle for the scattering length determines the branch of the cotangent curve on which η lies and, by relating η to δ, an absolute determination of δ is made. The theorem is applicable, in principle, to single-channel scattering in any partial wave for e±-atom and nucleon-nucleus systems. In addition to a knowledge of the number of composite bound states, information (which can be rather incomplete) concerning the structure of the target ground-state wave function is required for an explicit, absolute determination of the phase shift δ. As for Levinson's original theorem for potential scattering, no additional information concerning the scattering wave function or scattering dynamics is required.

  3. Scale symmetry and virial theorem

    Westenholz, C. von

    1978-01-01

    Scale symmetry (or dilatation invariance) is discussed in terms of Noether's Theorem expressed in terms of a symmetry group action on phase space endowed with a symplectic structure. The conventional conceptual approach expressing invariance of some Hamiltonian under scale transformations is re-expressed in alternate form by infinitesimal automorphisms of the given symplectic structure. That is, the vector field representing scale transformations leaves the symplectic structure invariant. In this model, the conserved quantity or constant of motion related to scale symmetry is the virial. It is shown that the conventional virial theorem can be derived within this framework
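For reference, the conserved-quantity statement usually extracted in this setting (for a potential homogeneous of degree k; the Kepler case is k = −1):

```latex
% Virial theorem: for a bound system with potential V homogeneous of degree k,
% time averages satisfy
2\,\langle T \rangle \;=\; k\,\langle V \rangle
\qquad (k=-1:\ 2\langle T\rangle = -\langle V\rangle,\ \text{Kepler})
```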

  4. Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model

    Yang, Yuefang; Gan, Chunhui; Shen, Tingting

    2017-05-01

    In studying the configuration of railway tankers for a chemical logistics park, the minimum cost maximum flow model is adopted. Firstly, the transport capacity of the park's loading and unloading area and the transportation demand for the dangerous goods are taken as the constraint conditions of the model; then the transport arc capacity, the transport arc flow and the transport arc edge weight are determined in the transportation network diagram; finally the model is solved with software. The calculation results show that the configuration issue of the tankers can be effectively solved by the minimum cost maximum flow model, which has theoretical and practical application value for the management of railway transportation of dangerous goods in a chemical logistics park.
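A miniature version of such a network can be solved directly with networkx (the node names, capacities and costs below are invented for illustration):

```python
import networkx as nx

# Toy tanker-allocation network: depot -> loading areas -> demand sink.
G = nx.DiGraph()
G.add_edge('depot', 'area1', capacity=8, weight=4)  # capacity: tankers/day; weight: cost
G.add_edge('depot', 'area2', capacity=6, weight=6)
G.add_edge('area1', 'demand', capacity=7, weight=2)
G.add_edge('area2', 'demand', capacity=6, weight=1)

flow = nx.max_flow_min_cost(G, 'depot', 'demand')   # maximum flow at minimum cost
print(flow, nx.cost_of_flow(G, flow))
```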

  5. Performance Measurement Implementation Of Minimum Service Standards For Basic Education Based On The Balanced Scorecard

    Budiman Rusli

    2015-08-01

    The Minimum Service Standards for Basic Education policy has been rolled out since 2002 by the minister, in accordance with Decree No. 129a/U/2004 on Minimum Service Standards in Education, and is continually updated, most recently by Regulation of the Minister of Education and Culture No. 23 of 2013. All district and town governments were expected to reach the 100 percent achievement target on each of the indicators listed in the minimum service standards by the end of 2014. Achievement on each indicator is just one measure of the performance of the local government education department. Unfortunately, almost all regions failed to reach the announced targets for the 27 indicators, including the local government of Tangerang Regency. It is therefore necessary to measure the performance of local authorities, particularly the education department. One measurement approach modern enough for this purpose is the Balanced Scorecard (BSC). The Balanced Scorecard is a contemporary management tool that measures company performance not only from the financial perspective but also from non-financial perspectives such as the customer perspective, internal business processes, and learning and growth. This approach is ideally suited to multinational companies; although it is expensive to apply, it can be used to measure a company's performance by combining long-term and short-term strategy. The Balanced Scorecard can also be applied, with a few modifications, to public sector services, including the performance measurement of Minimum Service Standards for Basic Education.

  6. [Assessment on the ecological suitability in Zhuhai City, Guangdong, China, based on minimum cumulative resistance model].

    Li, Jian-fei; Li, Lin; Guo, Luo; Du, Shi-hong

    2016-01-01

    Urban landscapes have the characteristic of spatial heterogeneity. Because the expansion processes of urban constructive and ecological land meet different resistance values, a land unit stimulates and promotes the expansion of ecological land with different intensity. To compare the effects of the promoting and hindering functions in the same land unit, we first compared the minimum cumulative resistance values of the promoting and hindering functions, and then looked for the balance of the two landscape processes under the same standard. According to the ecological principle of the minimum limiting factor, taking the minimum cumulative resistance analysis method under two expansion processes as the evaluation method of urban land ecological suitability, this research took Zhuhai City as the study area to estimate urban ecological suitability by a relative evaluation method with remote sensing images, field surveys, and statistical data. With the support of ArcGIS, five types of indicators on landscape types, ecological value, soil erosion sensitivity, sensitivity to geological disasters, and ecological function were selected as input parameters in the minimum cumulative resistance model to compute urban ecological suitability. The results showed that the ecological suitability of the whole of Zhuhai City was divided into five levels: constructive expansion prohibited zone (10.1%), constructive expansion restricted zone (32.9%), key construction zone (36.3%), priority development zone (2.3%), and basic cropland (18.4%). The ecological suitability of the central area of Zhuhai City was divided into four levels: constructive expansion prohibited zone (11.6%), constructive expansion restricted zone (25.6%), key construction zone (52.4%), and priority development zone (10.4%). Finally, we put forward a sustainable development framework for Zhuhai City according to the research conclusions. On one hand, the government should strictly control the development of the urban center area. On the other hand, the

  7. A new fast algorithm for solving the minimum spanning tree problem based on DNA molecules computation.

    Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei

    2013-10-01

    The minimum spanning tree (MST) problem is to find a minimum-weight connected subset of edges containing all the vertices of a given undirected graph. It is a vitally important problem in graph theory and applied mathematics, with numerous real life applications. Moreover, in previous studies, DNA molecular operations were usually used to solve head-to-tail path search problems, rarely for problems with multilateral path solutions such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we design flexible-length DNA strands representing the vertices and edges, take appropriate steps and obtain the solutions of the MST problem in a proper length range and O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values in very short time and provides better performance, with solution accuracy over existing algorithms. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
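
    For contrast with the DNA-based procedure, the MST itself can be computed in conventional software in near-linear time; the following minimal Kruskal sketch, assuming the graph is given as a list of (u, v, weight) edges, shows the problem the molecular algorithm is solving, not the paper's method.

```python
def kruskal_mst(n_vertices, edges):
    parent = list(range(n_vertices))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:                  # adding this edge creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

print(kruskal_mst(4, [(0, 1, 1), (1, 2, 2), (0, 2, 3), (2, 3, 1)]))
```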

  8. Modeling monthly meteorological and agronomic frost days, based on minimum air temperature, in Center-Southern Brazil

    Alvares, Clayton Alcarde; Sentelhas, Paulo César; Stape, José Luiz

    2017-09-01

    Although Brazil is predominantly a tropical country, frosts are observed with relatively high frequency in the Center-Southern states of the country, affecting mainly agriculture, forestry, and human activities. Therefore, information about frost climatology is of high importance for the planning of these activities. Based on that, the aims of the present study were to develop monthly meteorological (F_MET) and agronomic (F_AGR) frost day models, based on minimum shelter air temperature (T_MN), in order to characterize the temporal and spatial frost day variability in Center-Southern Brazil. Daily minimum air temperature data from 244 weather stations distributed across the study area were used, 195 for developing the models and 49 for validating them. Multivariate regression models were obtained to estimate the monthly T_MN, since the frost day models were based on this variable. All T_MN regression models were statistically significant (p ...) ... Brazilian region are the first zoning of these variables for the country.
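
    The regression step described above can be illustrated with ordinary least squares; the predictors (latitude, longitude, altitude) and the toy station data below are assumptions for illustration only, not the paper's fitted models.

```python
import numpy as np

# columns: latitude (deg), longitude (deg), altitude (m); one row per station
X = np.array([[-23.5, -46.6, 760.0],
              [-25.4, -49.3, 935.0],
              [-20.4, -54.6, 532.0],
              [-22.9, -43.2,  11.0],
              [-19.9, -43.9, 852.0]])
tmn_july = np.array([11.2, 8.9, 14.1, 17.3, 12.0])   # monthly mean T_MN (°C)

A = np.column_stack([np.ones(len(X)), X])            # add an intercept
coef, *_ = np.linalg.lstsq(A, tmn_july, rcond=None)  # least-squares fit
predicted = A @ coef                                 # fitted T_MN per station
```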

  9. Kolmogorov-Arnold-Moser Theorem

    system (not necessarily the 2-body system). Kolmogorov was the first to provide a solution to the above general problem in a theorem formulated in 1954 (see Suggested Reading). However, he provided only an outline of the proof. The actual proof (with all the details) turned out to be quite difficult and was provided by Arnold ...

  10. Opechowski's theorem and commutator groups

    Caride, A.O.; Zanette, S.I.

    1985-01-01

    It is shown that the conditions of application of Opechowski's theorem for double groups of subgroups of O(3) are directly associated to the structure of their commutator groups. Some characteristics of the structure of classes are also discussed. (Author) [pt

  11. Shell theorem for spontaneous emission

    Kristensen, Philip Trøst; Mortensen, Jakob Egeberg; Lodahl, Peter

    2013-01-01

    …and therefore is given exactly by the dipole approximation theory. This surprising result is a spontaneous emission counterpart to the shell theorems of classical mechanics and electrostatics and provides insights into the physics of mesoscopic emitters as well as great simplifications in practical calculations.

  12. KLN theorem and infinite statistics

    Grandou, T.

    1992-01-01

    The possible extension of the Kinoshita-Lee-Nauenberg (KLN) theorem to the case of infinite statistics is examined. It is shown that it appears as a stable structure in a quantum field theory context. The extension is provided by working out the Fock space realization of a 'quantum algebra'. (author) 2 refs

  13. The Geometric Mean Value Theorem

    de Camargo, André Pierro

    2018-01-01

    In a previous article published in the "American Mathematical Monthly," Tucker ("Amer Math Monthly." 1997; 104(3): 231-240) leveled severe criticism at the Mean Value Theorem and, unfortunately, the majority of calculus textbooks also do not help to improve its reputation. The standard argument for proving it seems to be applying…

  14. Fermion fractionization and index theorem

    Hirayama, Minoru; Torii, Tatsuo

    1982-01-01

    The relation between the fermion fractionization and the Callias-Bott-Seeley index theorem for the Dirac operator in the open space of odd dimension is clarified. Only the case of one spatial dimension is discussed in detail. Sum rules for the expectation values of various quantities in fermion-fractionized configurations are derived. (author)

  15. The Completeness Theorem of Godel

    The Completeness Theorem of Gödel. 2. Henkin's Proof for First Order Logic. S M Srivastava is with the Indian Statistical Institute, Calcutta. He received his PhD from the Indian Statistical Institute in 1980. His research interests are in descriptive set theory. Part 1: An Introduction to Mathematical ...

  16. Angle Defect and Descartes' Theorem

    Scott, Paul

    2006-01-01

    Rene Descartes lived from 1596 to 1650. His contributions to geometry are still remembered today in the terminology "Descartes' plane". This paper discusses a simple theorem of Descartes, which enables students to easily determine the number of vertices of almost every polyhedron. (Contains 1 table and 2 figures.)

  17. Optical theorem and its history

    Newton, R.G.

    1978-01-01

    A translation is presented of a paper submitted to the symposium ''Concepts and methods in microscopic physics'' held at Washington University in 1974. A detailed description is given of the history of the optical theorem, its various formulations and derivations, and its use in scattering theory. (Z.J.)

  18. On the Fourier integral theorem

    Koekoek, J.

    1987-01-01

    Introduction. In traditional proofs of convergence of Fourier series and of the Fourier integral theorem, basic tools are the theory of Dirichlet integrals and the Riemann-Lebesgue lemma. Recently Chernoff [1] and Redheffer [2] gave new proofs of convergence of Fourier series which make no use of the

  19. The Classical Version of Stokes' Theorem Revisited

    Markvorsen, Steen

    2005-01-01

    Using only fairly simple and elementary considerations - essentially from first year undergraduate mathematics - we prove that the classical Stokes' theorem for any given surface and vector field in $\mathbb{R}^{3}$ follows from an application of Gauss' divergence theorem to a suitable modification of the vector field in a tubular shell around the given surface. The intuitive appeal of the divergence theorem is thus applied to bootstrap a corresponding intuition for Stokes' theorem. The two stated classical theorems are (like the fundamental theorem of calculus) nothing but shadows of the general version referred to above. Our proof that Stokes' theorem follows from Gauss' divergence theorem goes via a well known and often used exercise, which simply relates the concepts of divergence and curl on the local differential level. The rest of the paper uses only integration in $1$, $2$, and $3$ variables together...

  20. An extended characterisation theorem for quantum logics

    Sharma, C.S.; Mukherjee, M.K.

    1977-01-01

    Two theorems are proved. In the first properties of an important mapping from an orthocomplemented lattice to itself are studied. In the second the characterisation theorem of Zierler (Pacific J. Math.; 11:1151 (1961)) is extended to obtain a very useful theorem characterising orthomodular lattices. Since quantum logics are merely sigma-complete orthomodular lattices, the principal result is, for application in quantum physics, a characterisation theorem for quantum logics. (author)

  1. DYNAMIC PARAMETER ESTIMATION BASED ON MINIMUM CROSS-ENTROPY METHOD FOR COMBINING INFORMATION SOURCES

    Sečkárová, Vladimíra

    2015-01-01

    Roč. 24, č. 5 (2015), s. 181-188 ISSN 0204-9805. [XVI-th International Summer Conference on Probability and Statistics (ISCPS-2014). Pomorie, 21.6.-29.6.2014] R&D Projects: GA ČR GA13-13502S Grant - others: GA UK(CZ) SVV 260225/2015 Institutional support: RVO:67985556 Keywords: minimum cross-entropy principle * Kullback-Leibler divergence * dynamic diffusion estimation Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2015/AS/seckarova-0445817.pdf
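
    As a hedged illustration of the minimum cross-entropy principle invoked here: for static, discrete sources, the pooled distribution minimizing a weighted sum of Kullback-Leibler divergences KL(q || p_i) is the normalized weighted geometric mean of the sources (log-linear pooling). The sketch below shows only this textbook special case, not the paper's dynamic diffusion estimator.

```python
import numpy as np

def combine_sources(pmfs, weights):
    """pmfs: (n_sources, n_bins) probability vectors; weights: source weights."""
    pmfs = np.asarray(pmfs, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    log_q = w @ np.log(pmfs)          # weighted geometric mean in log space
    q = np.exp(log_q)
    return q / q.sum()                # renormalize to a probability vector

q = combine_sources([[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]], [2.0, 1.0])
```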

  2. A note on generalized Weyl's theorem

    Zguitti, H.

    2006-04-01

    We prove that if either T or T* has the single-valued extension property, then the spectral mapping theorem holds for the B-Weyl spectrum. If, moreover, T is isoloid and generalized Weyl's theorem holds for T, then generalized Weyl's theorem holds for f(T) for every f analytic on a neighbourhood of the spectrum of T. An application is given for algebraically paranormal operators.

  3. A definability theorem for first order logic

    Butz, C.; Moerdijk, I.

    1997-01-01

    In this paper we will present a definability theorem for first order logic. This theorem is very easy to state and its proof only uses elementary tools. To explain the theorem, let us first observe that if M is a model of a theory T in a language L, then clearly any definable subset S ⊆ M (i.e. a subset S

  4. A new reliability measure based on specified minimum distances before the locations of random variables in a finite interval

    Todinov, M.T.

    2004-01-01

    A new reliability measure is proposed and equations are derived which determine the probability of existence of a specified set of minimum gaps between random variables following a homogeneous Poisson process in a finite interval. Using the derived equations, a method is proposed for specifying the upper bound of the random variables' number density which guarantees that the probability of clustering of two or more random variables in a finite interval remains below a maximum acceptable level. It is demonstrated that even for moderate number densities the probability of clustering is substantial and should not be neglected in reliability calculations. In the important special case where the random variables are failure times, models have been proposed for determining the upper bound of the hazard rate which guarantees a set of minimum failure-free operating intervals before the random failures, with a specified probability. A model has also been proposed for determining the upper bound of the hazard rate which guarantees a minimum availability target. Using the models proposed, a new strategy, models and reliability tools have been developed for setting quantitative reliability requirements which consist of determining the intersection of the hazard rate envelopes (hazard rate upper bounds) which deliver a minimum failure-free operating period before random failures, a risk of premature failure below a maximum acceptable level and a minimum required availability. It is demonstrated that setting reliability requirements solely based on an availability target does not necessarily mean a low risk of premature failure. Even at a high availability level, the probability of premature failure can be substantial. For industries characterised by a high cost of failure, the reliability requirements should involve a hazard rate envelope limiting the risk of failure below a maximum acceptable level
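
    A quick Monte Carlo check of the central quantity here, the probability that all gaps between points of a homogeneous Poisson process in a finite interval exceed a specified minimum, can be written as follows; only inter-event gaps are checked, and the rate, interval length and minimum gap are illustrative assumptions, not values from the paper.

```python
import numpy as np

def prob_min_gaps(rate, length, min_gap, trials=100_000, seed=0):
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(trials):
        n = rng.poisson(rate * length)        # number of events in (0, L)
        if n <= 1:
            ok += 1                           # 0 or 1 event: no gap violated
            continue
        pts = np.sort(rng.uniform(0.0, length, n))
        if np.all(np.diff(pts) >= min_gap):
            ok += 1
    return ok / trials

p_no_cluster = prob_min_gaps(rate=0.05, length=100.0, min_gap=5.0)
print(1.0 - p_no_cluster)                     # probability of clustering
```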

  5. Prediction of minimum UO2 particle size based on thermal stress initiated fracture model

    Corradini, M.

    1976-08-01

    An analytic study was employed to determine the minimum UO2 particle size that could survive fragmentation induced by thermal stresses in a UO2-Na fuel coolant interaction (FCI). A brittle fracture mechanics approach was the basis of the study, whereby stress intensity factors K_I were compared to the fracture toughness K_IC to determine if the particle could fracture. Solid and liquid UO2 droplets were considered, each with two possible interface contact conditions: perfect wetting by the sodium or a finite heat transfer coefficient. The analysis indicated that particles below the range of 50 microns in radius could survive a UO2-Na fuel coolant interaction under the most severe temperature conditions without thermal stress fragmentation. Environmental conditions of the fuel-coolant interaction were varied to determine the effects upon K_I and possible fragmentation. The underlying assumptions of the analysis were investigated in light of the analytic results. It was concluded that the analytic study seemed to verify the experimental observations as to the range of the minimum particle size due to thermal stress fragmentation by FCI. However, when the results are viewed in light of the basic assumptions, the method indicates that the analysis is crude at best and can be viewed as only a rough order-of-magnitude analysis. The basic complexities in fracture mechanics make further investigation in this area interesting but not necessarily fruitful for the immediate future

  6. Tight closure and vanishing theorems

    Smith, K.E.

    2001-01-01

    Tight closure has become a thriving branch of commutative algebra since it was first introduced by Mel Hochster and Craig Huneke in 1986. Over the past few years, it has become increasingly clear that tight closure has deep connections with complex algebraic geometry as well, especially with those areas of algebraic geometry where vanishing theorems play a starring role. The purpose of these lectures is to introduce tight closure and to explain some of these connections with algebraic geometry. Tight closure is basically a technique for harnessing the power of the Frobenius map. The use of the Frobenius map to prove theorems about complex algebraic varieties is a familiar technique in algebraic geometry, so it should perhaps come as no surprise that tight closure is applicable to algebraic geometry. On the other hand, it seems that so far we are only seeing the tip of a large and very beautiful iceberg in terms of tight closure's interpretation and applications to algebraic geometry. Interestingly, although tight closure is a 'characteristic p' tool, many of the problems where tight closure has proved useful have also yielded to analytic (L2) techniques. Despite some striking parallels, there had been no specific result directly linking tight closure and L2 techniques. Recently, however, an ideal central to the theory of tight closure was shown to be equivalent to a certain 'multiplier ideal' first defined using L2 methods. Presumably, deeper connections will continue to emerge. There are two main types of problems for which tight closure has been helpful: in identifying nice structure and in establishing uniform behavior. The original algebraic applications of tight closure include, for example, a quick proof of the Hochster-Roberts theorem on the Cohen-Macaulayness of rings of invariants, and also a refined version of the Briançon-Skoda theorem on the uniform behaviour of integral closures of powers of ideals. More recent, geometric

  7. Paraconsistent Probabilities: Consistency, Contradictions and Bayes’ Theorem

    Juliana Bueno-Soler

    2016-09-01

    Full Text Available This paper represents the first steps towards constructing a paraconsistent theory of probability based on the Logics of Formal Inconsistency (LFIs. We show that LFIs encode very naturally an extension of the notion of probability able to express sophisticated probabilistic reasoning under contradictions employing appropriate notions of conditional probability and paraconsistent updating, via a version of Bayes’ theorem for conditionalization. We argue that the dissimilarity between the notions of inconsistency and contradiction, one of the pillars of LFIs, plays a central role in our extended notion of probability. Some critical historical and conceptual points about probability theory are also reviewed.

  8. Fixed point theorems in locally convex spaces—the Schauder mapping method

    S. Cobzaş

    2006-03-01

    Full Text Available In the appendix to the book by F. F. Bonsall, Lectures on Some Fixed Point Theorems of Functional Analysis (Tata Institute, Bombay, 1962), a proof by Singbal of the Schauder-Tychonoff fixed point theorem, based on a locally convex variant of the Schauder mapping method, is included. The aim of this note is to show that this method can be adapted to yield a proof of the Kakutani fixed point theorem in the locally convex case. For the sake of completeness we also include the proof of the Schauder-Tychonoff theorem based on this method. As applications, one proves a theorem of von Neumann and a minimax result in game theory.

  9. The de Finetti theorem for test spaces

    Barrett, Jonathan; Leifer, Matthew

    2009-01-01

    We prove a de Finetti theorem for exchangeable sequences of states on test spaces, where a test space is a generalization of the sample space of classical probability theory and the Hilbert space of quantum theory. The standard classical and quantum de Finetti theorems are obtained as special cases. By working in a test space framework, the common features that are responsible for the existence of these theorems are elucidated. In addition, the test space framework is general enough to imply a de Finetti theorem for classical processes. We conclude by discussing the ways in which our assumptions may fail, leading to probabilistic models that do not have a de Finetti theorem.

  10. Stochastic thermodynamics, fluctuation theorems and molecular machines

    Seifert, Udo

    2012-01-01

    Stochastic thermodynamics as reviewed here systematically provides a framework for extending the notions of classical thermodynamics such as work, heat and entropy production to the level of individual trajectories of well-defined non-equilibrium ensembles. It applies whenever a non-equilibrium process is still coupled to one (or several) heat bath(s) of constant temperature. Paradigmatic systems are single colloidal particles in time-dependent laser traps, polymers in external flow, enzymes and molecular motors in single molecule assays, small biochemical networks and thermoelectric devices involving single electron transport. For such systems, a first-law like energy balance can be identified along fluctuating trajectories. For a basic Markovian dynamics implemented either on the continuum level with Langevin equations or on a discrete set of states as a master equation, thermodynamic consistency imposes a local-detailed balance constraint on noise and rates, respectively. Various integral and detailed fluctuation theorems, which are derived here in a unifying approach from one master theorem, constrain the probability distributions for work, heat and entropy production depending on the nature of the system and the choice of non-equilibrium conditions. For non-equilibrium steady states, particularly strong results hold like a generalized fluctuation–dissipation theorem involving entropy production. Ramifications and applications of these concepts include optimal driving between specified states in finite time, the role of measurement-based feedback processes and the relation between dissipation and irreversibility. Efficiency and, in particular, efficiency at maximum power can be discussed systematically beyond the linear response regime for two classes of molecular machines, isothermal ones such as molecular motors, and heat engines such as thermoelectric devices, using a common framework based on a cycle decomposition of entropy production. (review article)

  11. A Randomized Central Limit Theorem

    Eliazar, Iddo; Klafter, Joseph

    2010-01-01

    The Central Limit Theorem (CLT), one of the most elemental pillars of Probability Theory and Statistical Physics, asserts that: the universal probability law of large aggregates of independent and identically distributed random summands with zero mean and finite variance, scaled by the square root of the aggregate-size (√(n)), is Gaussian. The scaling scheme of the CLT is deterministic and uniform - scaling all aggregate-summands by the common and deterministic factor √(n). This Letter considers scaling schemes which are stochastic and non-uniform, and presents a 'Randomized Central Limit Theorem' (RCLT): we establish a class of random scaling schemes which yields universal probability laws of large aggregates of independent and identically distributed random summands. The RCLT universal probability laws, in turn, are the one-sided and the symmetric Lévy laws.

  12. Bell's theorem, accountability and nonlocality

    Vona, Nicola; Liang, Yeong-Cherng

    2014-01-01

    Bell's theorem is a fundamental theorem in physics concerning the incompatibility between some correlations predicted by quantum theory and a large class of physical theories. In this paper, we introduce the hypothesis of accountability, which demands that it is possible to explain the correlations of the data collected in many runs of a Bell experiment in terms of what happens in each single run. Under this assumption, and making use of a recent result by Colbeck and Renner (2011 Nature Commun. 2 411), we then show that any nontrivial account of these correlations in the form of an extension of quantum theory must violate parameter independence. Moreover, we analyze the violation of outcome independence of quantum mechanics and show that it is also a manifestation of nonlocality. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘50 years of Bell's theorem’. (paper)

  13. Fluctuation theorems and atypical trajectories

    Sahoo, M; Lahiri, S; Jayannavar, A M

    2011-01-01

    In this work, we have studied simple models that can be solved analytically to illustrate various fluctuation theorems. These fluctuation theorems provide symmetries individually to the distributions of physical quantities such as the classical work (W_c), thermodynamic work (W), total entropy (Δs_tot) and dissipated heat (Q), when the system is driven arbitrarily out of equilibrium. All these quantities can be defined for individual trajectories. We have studied the number of trajectories which exhibit behaviour unexpected at the macroscopic level. As the time of observation increases, the fraction of such atypical trajectories decreases, as expected at the macroscale. The distributions for the thermodynamic work and entropy production in nonlinear models may exhibit a peak (most probable value) in the atypical regime without violating the expected average behaviour. However, dissipated heat and classical work exhibit a peak in the regime of typical behaviour only.

  14. Lectures on Fermat's last theorem

    Sury, B.

    1993-09-01

    The report presents the main ideas involved in the approach towards the so-called Fermat's last theorem (FLT). The discussion leads to the point where the recent work of A. Wiles starts, and his work is not discussed. After a short history of the FLT and of the present approach, elliptic curves and modular forms are discussed, together with their relations, the Taniyama-Shimura-Weil conjecture and the FLT

  15. Pythagoras Theorem and Relativistic Kinematics

    Mulaj, Zenun; Dhoqina, Polikron

    2010-01-01

    In two inertial frames that move relative to each other in a particular direction, a light signal that propagates at an angle to this direction may be registered. Applying the Pythagorean theorem and the principles of STR in both systems, we can derive all the relations of relativistic kinematics, such as the relativity of simultaneity of events, of time intervals, of the lengths of objects and of the velocity of a material point, as well as the Lorentz transformations, the Doppler effect and stellar aberration.

  16. Notes on the area theorem

    Park, Mu-In

    2008-01-01

    Hawking's area theorem can be understood from a quasi-stationary process in which a black hole accretes positive energy matter, independent of the details of the gravity action. I use this process to study the dynamics of the inner as well as the outer horizons for various black holes which include the recently discovered exotic black holes and three-dimensional black holes in higher derivative gravities as well as the usual BTZ black hole and the Kerr black hole in four dimensions. I find that the area for the inner horizon 'can decrease', rather than increase, with the quasi-stationary process. However, I find that the area for the outer horizon 'never decreases' such that the usual area theorem still works in our examples, though this is quite non-trivial in general. There exists an instability problem of the inner horizons but it seems that the instability is not important in my analysis. I also find a generalized area theorem by combining those of the outer and inner horizons

  17. A Fascinating Application of Steiner's Theorem for Trapezium: Geometric Constructions Using Straightedge Alone

    Stupel, Moshe; Ben-Chaim, David

    2013-01-01

    Based on Steiner's fascinating theorem for trapezium, seven geometrical constructions using straight-edge alone are described. These constructions provide an excellent base for teaching theorems and the properties of geometrical shapes, as well as challenging thought and inspiring deeper insight into the world of geometry. In particular, this…

  18. A Borsuk-Ulam type generalization of the Leray-Schauder fixed point theorem

    Prykarpatsky, A.K.

    2007-05-01

    A generalization of the classical Leray-Schauder fixed point theorem, based on the infinite-dimensional Borsuk-Ulam type antipode construction, is proposed. Two completely different proofs based on the projection operator approach and on a weak version of the well known Krein-Milman theorem are presented. (author)

  19. Reasoning by analogy as an aid to heuristic theorem proving.

    Kling, R. E.

    1972-01-01

    When heuristic problem-solving programs are faced with large data bases that contain numbers of facts far in excess of those needed to solve any particular problem, their performance rapidly deteriorates. In this paper, the correspondence between a new unsolved problem and a previously solved analogous problem is computed and invoked to tailor large data bases to manageable sizes. This paper outlines the design of an algorithm for generating and exploiting analogies between theorems posed to a resolution-logic system. These algorithms are believed to be the first computationally feasible development of reasoning by analogy to be applied to heuristic theorem proving.

  20. Extracting Vegetation Coverage in Dry-hot Valley Regions Based on Alternating Angle Minimum Algorithm

    Yang, M.; Wang, J.; Zhang, Q.

    2017-07-01

    Vegetation coverage is one of the most important indicators of ecological environment change, and is also an effective index for the assessment of land degradation and desertification. The dry-hot valley regions have sparse surface vegetation, and the spectral information about the vegetation in such regions usually has a weak representation in remote sensing, so there are considerable limitations in applying the commonly used vegetation index method to calculate vegetation coverage in the dry-hot valley regions. Therefore, in this paper, the Alternating Angle Minimum (AAM) algorithm, a deterministic model, is adopted for endmember selection and pixel unmixing of MODIS images in order to extract the vegetation coverage, and an accuracy test is carried out using a Landsat TM image over the same period. As shown by the results, in the dry-hot valley regions with sparse vegetation, the AAM model has a high unmixing accuracy, and the extracted vegetation coverage is close to the actual situation, so it is promising to apply the AAM model to the extraction of vegetation coverage in the dry-hot valley regions.
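
    The per-pixel unmixing stage that follows endmember selection can be sketched as a constrained linear inversion; the endmember spectra and pixel values below are invented for illustration, and the AAM selection step itself is not reproduced here.

```python
import numpy as np
from scipy.optimize import nnls

E = np.array([[0.05, 0.30],       # rows: bands; columns: endmembers
              [0.08, 0.45],       # (soil, vegetation) reflectances
              [0.40, 0.20]])
pixel = np.array([0.20, 0.30, 0.28])

abund, _ = nnls(E, pixel)          # nonnegative least squares per pixel
abund = abund / abund.sum()        # impose sum-to-one (approximately)
vegetation_coverage = abund[1]     # fraction attributed to vegetation
```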

  1. Identifying the optimal HVOF spray parameters to attain minimum porosity and maximum hardness in iron based amorphous metallic coatings

    S. Vignesh

    2017-04-01

    Full Text Available Flow-based erosion-corrosion problems are very common in fluid handling equipment such as propellers, impellers and pumps in warships and submarines. Though there are many coating materials available to combat erosion-corrosion damage in the above components, iron-based amorphous coatings are considered to be more effective in combating erosion-corrosion problems. The high velocity oxy-fuel (HVOF) spray process is considered to be a better process for coating iron-based amorphous powders. In this investigation, an iron-based amorphous metallic coating was developed on a 316 stainless steel substrate using the HVOF spray technique. Empirical relationships were developed to predict the porosity and microhardness of the iron-based amorphous coating, incorporating HVOF spray parameters such as oxygen flow rate, fuel flow rate, powder feed rate, carrier gas flow rate, and spray distance. Response surface methodology (RSM) was used to identify the optimal HVOF spray parameters to attain a coating with minimum porosity and maximum hardness.
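
    The RSM step amounts to fitting a second-order polynomial to designed spray trials and searching the fitted surface for an optimum. A minimal sketch with two coded factors and invented data follows; the actual study used five parameters and optimized porosity and hardness jointly.

```python
import numpy as np

# factors: e.g. oxygen flow rate, spray distance (coded units); response: porosity (%)
x1 = np.array([-1, -1, 1, 1, 0, 0, -1, 1, 0])
x2 = np.array([-1, 1, -1, 1, 0, -1, 0, 0, 1])
y  = np.array([2.1, 2.6, 1.8, 2.2, 1.5, 1.7, 2.0, 1.6, 1.9])

A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
b, *_ = np.linalg.lstsq(A, y, rcond=None)      # quadratic model coefficients

# brute-force search of the fitted surface over the coded design region
g = np.linspace(-1, 1, 201)
G1, G2 = np.meshgrid(g, g)
P = b[0] + b[1]*G1 + b[2]*G2 + b[3]*G1*G2 + b[4]*G1**2 + b[5]*G2**2
i = np.unravel_index(P.argmin(), P.shape)
best_settings = (G1[i], G2[i], P[i])           # minimizer of predicted porosity
```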

  2. Formalization of the Integral Calculus in the PVS Theorem Prover

    Ricky Wayne Butler

    2009-04-01

    Full Text Available The PVS theorem prover is a widely used formal verification tool for the analysis of safety-critical systems. Though fully equipped to support deduction in a very general logic framework, namely higher-order logic, the PVS prover must nevertheless be augmented with the definitions and associated theorems for every branch of mathematics and computer science that is used in a verification. This is a formidable task, ultimately requiring the contributions of researchers and developers all over the world. This paper reports on the formalization of the integral calculus in the PVS theorem prover. All of the basic definitions and theorems covered in a first course on integral calculus have been completed. The theory and proofs were based on Rosenlicht's classic text on real analysis and follow the traditional epsilon-delta method. The goal of this work was to provide a practical set of PVS theories that could be used for the verification of hybrid systems that arise in air traffic management systems and other aerospace applications. All of the basic linearity, integrability, boundedness, and continuity properties of the integral calculus were proved. The work culminated in the proof of the Fundamental Theorem of Calculus. There is a brief discussion about why mechanically checked proofs are so much longer than standard mathematics textbook proofs.
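
    For reference, the culminating statement in its usual textbook (Rosenlicht-style) form, which the PVS development formalizes, reads:

```latex
% Fundamental Theorem of Calculus (standard form): if f is continuous on
% [a,b] and F(x) = \int_a^x f(t)\,dt, then F' = f on (a,b); hence for any
% antiderivative G of f,
\int_a^b f(t)\,dt \;=\; G(b) - G(a).
```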

  4. Another proof of Gell-Mann and Low's theorem

    Molinari, Luca Guido

    2006-01-01

    The theorem by Gell-Mann and Low is a cornerstone in QFT and zero-temperature many-body theory. The standard proof is based on Dyson's time-ordered expansion of the propagator; a proof based on exact identities for the time-propagator is here given.

  6. Designing learning curves for carbon capture based on chemical absorption according to the minimum work of separation

    Rochedo, Pedro R.R.; Szklo, Alexandre

    2013-01-01

    Highlights: • This work defines the minimum work of separation (MWS) for a capture process. • Findings of the analysis indicated a MWS of 0.158 GJ/t for post-combustion. • A review of commercially available processes based on chemical absorption was made. • A review of learning models was conducted, with the addition of a novel model. • A learning curve for post-combustion carbon capture was successfully designed. - Abstract: Carbon capture is one of the most important alternatives for mitigating greenhouse gas emissions in energy facilities. The post-combustion route based on chemical absorption with amine solvents is the most feasible alternative for the short term. However, this route implies huge energy penalties, mainly related to solvent regeneration. By defining the minimum work of separation (MWS), this study estimated the minimum energy required to capture the CO2 emitted by coal-fired thermal power plants. Then, by evaluating solvents and processes and comparing them to the MWS, it proposes the learning model with the best fit for the post-combustion chemical absorption of CO2. Learning models are based on gains from experience, which can include the intensity of research and development. In this study, three models are tested: Wright, DeJong and D&L. Findings of the thermochemical analysis indicated a MWS of 0.158 GJ/t for post-combustion. Conventional solvents currently present an energy penalty eight times the MWS. By using the MWS as a constraint, this study found that the D&L model provided the best fit to the available data on chemical solvents and absorption plants. The learning rate determined through this model is very similar to the ones found in the literature
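
    As a hedged sketch of the thermodynamics behind an MWS (the paper's 0.158 GJ/t figure comes from a more detailed capture-specific calculation): for an ideal binary mixture with CO2 mole fraction $x$ at temperature $T$, the reversible work to separate one mole of mixture completely is minus the Gibbs energy of mixing,

```latex
% illustrative lower bound only; partial capture and non-ideal solvents
% add further terms to this expression
W_{\min} \;=\; -\,\Delta G_{\mathrm{mix}}
          \;=\; -\,RT\left[\,x\ln x + (1-x)\ln(1-x)\,\right].
```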

  7. Optimization of the Municipal Waste Collection Route Based on the Method of the Minimum Pairing

    Michal Petřík

    2016-01-01

    Full Text Available The present article shows the use of the Maple program for processing data describing the positions of municipal waste sources and the topology of the collecting area. The data are further processed using graph theory algorithms, which enable the creation of a collection round proposal. This case study describes a waste pick-up solution in a village of approx. 1,600 inhabitants and a built-up area of approx. 30 hectares. The village has approx. 11.5 kilometers of rideable routes, of which approx. 1 kilometer has no waste source. The first part shows the topology of the village in light of the location of waste sources and the capacity of the routes. In the second part, the topological data are converted into a form that can be processed using graph theory, and the corresponding graph is shown. Optimizing the collection route in such a graph means finding an Eulerian circuit. However, this circuit can be constructed only on the condition that all the vertices of the graph are of even degree. Practically this means that it is necessary to introduce auxiliary edges - paths that will be traversed twice. These paths will connect vertices with odd degrees, as sketched below. The optimal solution then requires that the total length of the inserted edges be the minimum possible, which corresponds to the minimum pairing method. As this is a problem of exponential complexity, it is necessary to make some simplifications. These simplifications are depicted graphically and the results are displayed in the conclusion. The resulting graph with embedded auxiliary edges can be used as basic decision-making material for the creation of a real collection round that respects local limitations such as one-way streets or streets where waste collection is not possible from both sides at the same time.
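
    The minimum pairing step itself can be sketched directly: enumerate pairings of the odd-degree vertices and keep the one with the smallest total shortest-path length; those paths become the doubled auxiliary edges. Brute force, as below, is adequate for the handful of odd vertices in a village-sized network; the distance matrix is assumed precomputed (e.g. by Floyd-Warshall), and this is an illustration, not the article's Maple code.

```python
from itertools import permutations

def min_pairing(odd, dist):
    """odd: odd-degree vertices (even count); dist[u][v]: shortest-path length."""
    best_cost, best_pairs = float("inf"), None
    first = odd[0]
    for perm in permutations(odd[1:]):
        # pair the first odd vertex with perm[0], then pair the rest in order
        pairs = [(first, perm[0])] + [
            (perm[i], perm[i + 1]) for i in range(1, len(perm) - 1, 2)]
        cost = sum(dist[u][v] for u, v in pairs)
        if cost < best_cost:
            best_cost, best_pairs = cost, pairs
    return best_pairs, best_cost

dist = {1: {1: 0, 2: 4, 3: 3, 4: 5}, 2: {1: 4, 2: 0, 3: 2, 4: 6},
        3: {1: 3, 2: 2, 3: 0, 4: 2}, 4: {1: 5, 2: 6, 3: 2, 4: 0}}
print(min_pairing([1, 2, 3, 4], dist))   # -> ([(1, 2), (3, 4)], 6)
```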

  8. Minimum critical mass systems

    Dam, H. van; Leege, P.F.A. de

    1987-01-01

    An analysis is presented of thermal systems with minimum critical mass, based on the use of materials with optimum neutron moderating and reflecting properties. The optimum fissile material distributions in the systems are obtained by calculations with standard computer codes, extended with a routine for flat fuel importance search. It is shown that in the minimum critical mass configuration a considerable part of the fuel is positioned in the reflector region. For 239Pu a minimum critical mass of 87 g is found, which is the lowest value reported hitherto. (author)

  9. Non-renormalization theorems and N = 2 supersymmetric backgrounds

    Butter, Daniel; Wit, Bernard de; Lodato, Ivano

    2014-01-01

    The conditions for fully supersymmetric backgrounds of general N = 2 locally supersymmetric theories are derived based on the off-shell superconformal multiplet calculus. This enables the derivation of a non-renormalization theorem for a large class of supersymmetric invariants with higher-derivative couplings. The theorem implies that the invariant and its first order variation must vanish in a fully supersymmetric background. The conjectured relation of one particular higher-derivative invariant with a specific five-dimensional invariant containing the mixed gauge-gravitational Chern-Simons term is confirmed

  10. Radon transformation on reductive symmetric spaces: Support theorems

    Kuit, Job Jacob

    2013-01-01

    We introduce a class of Radon transforms for reductive symmetric spaces, including the horospherical transforms, and derive support theorems for these transforms. A reductive symmetric space is a homogeneous space G/H for a reductive Lie group G of the Harish-Chandra class, where H is an open subgroup ... The proof is based on the relation between the Radon transform and the Fourier transform on G/H, and a Paley–Wiener-shift type argument. Our results generalize the support theorem of Helgason for the Radon transform on a Riemannian symmetric space.

  11. On the proof of the first Carnot theorem in thermodynamics

    Morad, M R; Momeni, F

    2013-01-01

    The proof of the first Carnot theorem in classical thermodynamics is revisited in this study. The underlying conditions of a general proof of this principle presented by Senft (1978 Phys. Educ. 13 35–37) are explored and discussed. These conditions are analysed in more detail using a physical description of heat and work to present a simpler proof of the first principle prior to using the violation of the second law of thermodynamics. Finally, a new simple proof is also presented based on Gibbs relation. This discussion will benefit the teaching of classical thermodynamics and promote better understanding of the proof of the first Carnot theorem in general form. (paper)

  12. A variational proof of Thomson's theorem

    Fiolhais, Miguel C.N., E-mail: miguel.fiolhais@cern.ch [Department of Physics, City College of the City University of New York, 160 Convent Avenue, New York, NY 10031 (United States); Department of Physics, New York City College of Technology, 300 Jay Street, Brooklyn, NY 11201 (United States); LIP, Department of Physics, University of Coimbra, 3004-516 Coimbra (Portugal); Essén, Hanno [Department of Mechanics, Royal Institute of Technology (KTH), Stockholm SE-10044 (Sweden); Gouveia, Tomé M. [Cavendish Laboratory, 19 JJ Thomson Avenue, Cambridge CB3 0HE (United Kingdom)

    2016-08-12

    Thomson's theorem of electrostatics, which states that the electric charge on a set of conductors distributes itself on the conductor surfaces so as to minimize the electrostatic energy, is reviewed in this letter. The proof of Thomson's theorem, based on a variational principle, is derived for a set of normal charged conductors, with and without the presence of external electric fields produced by fixed charge distributions. In this novel approach, the variations are performed on both the charge densities and the electric potentials, by means of a local Lagrange multiplier associated with Poisson's equation, constraining the two variables.
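
    A hedged sketch of the variational setup (with notation assumed here, not necessarily the letter's): minimize the electrostatic energy over surface charge densities $\sigma$, with the total charge $Q_i$ on each conductor surface $S_i$ fixed by a Lagrange multiplier $\lambda_i$,

```latex
F[\sigma] = \frac{1}{2}\iint
  \frac{\sigma(\mathbf{r})\,\sigma(\mathbf{r}')}
       {4\pi\varepsilon_0\,\lvert \mathbf{r}-\mathbf{r}' \rvert}\,dS\,dS'
  \;-\; \sum_i \lambda_i\!\left(\int_{S_i}\sigma\,dS - Q_i\right);
% stationarity gives \varphi(\mathbf{r}) = \lambda_i on each S_i, i.e. the
% energy is minimized when every conductor surface is an equipotential.
```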

  13. Expanding the Interaction Equivalency Theorem

    Brenda Cecilia Padilla Rodriguez

    2015-06-01

    Full Text Available Although interaction is recognised as a key element for learning, its incorporation in online courses can be challenging. The interaction equivalency theorem provides guidelines: Meaningful learning can be supported as long as one of three types of interactions (learner-content, learner-teacher and learner-learner is present at a high level. This study sought to apply this theorem to the corporate sector, and to expand it to include other indicators of course effectiveness: satisfaction, knowledge transfer, business results and return on expectations. A large Mexican organisation participated in this research, with 146 learners, 30 teachers and 3 academic assistants. Three versions of an online course were designed, each emphasising a different type of interaction. Data were collected through surveys, exams, observations, activity logs, think aloud protocols and sales records. All course versions yielded high levels of effectiveness, in terms of satisfaction, learning and return on expectations. Yet, course design did not dictate the types of interactions in which students engaged within the courses. Findings suggest that the interaction equivalency theorem can be reformulated as follows: In corporate settings, an online course can be effective in terms of satisfaction, learning, knowledge transfer, business results and return on expectations, as long as (a at least one of three types of interaction (learner-content, learner-teacher or learner-learner features prominently in the design of the course, and (b course delivery is consistent with the chosen type of interaction. Focusing on only one type of interaction carries a high risk of confusion, disengagement or missed learning opportunities, which can be managed by incorporating other forms of interactions.

  14. On Krasnoselskii's Cone Fixed Point Theorem

    Man Kam Kwong

    2008-04-01

    Full Text Available In recent years, the Krasnoselskii fixed point theorem for cone maps and its many generalizations have been successfully applied to establish the existence of multiple solutions in the study of boundary value problems of various types. In the first part of this paper, we revisit the Krasnoselskii theorem, in a more topological perspective, and show that it can be deduced in an elementary way from the classical Brouwer-Schauder theorem. This viewpoint also leads to a topology-theoretic generalization of the theorem. In the second part of the paper, we extend the cone theorem in a different direction using the notion of retraction and show that a stronger form of the often cited Leggett-Williams theorem is a special case of this extension.

  15. Confinement, diquarks and Goldstone's theorem

    Roberts, C.D.

    1996-01-01

    Determinations of the gluon propagator in the continuum and in lattice simulations are compared. A systematic truncation procedure for the quark Dyson-Schwinger and bound state Bethe-Salpeter equations is described. The procedure ensures the flavor-octet axial-vector Ward identity is satisfied order-by-order, thereby guaranteeing the preservation of Goldstone's theorem; and identifies a mechanism that simultaneously ensures the absence of diquarks in QCD and their presence in QCD with N_c = 2, where the color singlet diquark is the ''baryon'' of the theory

  16. Comparison theorems in Riemannian geometry

    Cheeger, Jeff

    2008-01-01

    The central theme of this book is the interaction between the curvature of a complete Riemannian manifold and its topology and global geometry. The first five chapters are preparatory in nature. They begin with a very concise introduction to Riemannian geometry, followed by an exposition of Toponogov's theorem - the first such treatment in a book in English. Next comes a detailed presentation of homogeneous spaces in which the main goal is to find formulas for their curvature. A quick chapter on Morse theory is followed by one on the injectivity radius. Chapters 6-9 deal with many of the most re

  17. Bernstein Lethargy Theorem and Reflexivity

    Aksoy, Asuman Güven; Peng, Qidi

    2018-01-01

    In this paper, we prove the equivalence of reflexive Banach spaces and those Banach spaces which satisfy the following form of Bernstein's Lethargy Theorem. Let $X$ be an arbitrary infinite-dimensional Banach space, and let the real-valued sequence $\{d_n\}_{n\ge1}$ decrease to $0$. Suppose that $\{Y_n\}_{n\ge1}$ is a system of strictly nested subspaces of $X$ such that $\overline{Y}_n \subset Y_{n+1}$ for all $n\ge1$ and for each $n\ge1$, there exists $y_n \in Y_{n+1} \backslash Y_n$ such that ...

  18. Cyclic graphs and Apéry's theorem

    Sorokin, V N

    2002-01-01

    This is a survey of results about the behaviour of Hermite-Padé approximants for graphs of Markov functions, and a survey of interpolation problems leading to Apéry's result about the irrationality of the value ζ(3) of the Riemann zeta function. The first example is given of a cyclic graph for which the Hermite-Padé problem leads to Apéry's theorem. Explicit formulae for solutions are obtained, namely, Rodrigues' formulae and integral representations. The asymptotic behaviour of the approximants is studied, and recurrence formulae are found

  19. Abstract decomposition theorem and applications

    Grossberg, R; Grossberg, Rami; Lessmann, Olivier

    2005-01-01

    Let K be an Abstract Elementary Class. Under the assumptions that K has a nicely behaved forking-like notion, regular types and existence of some prime models, we establish a decomposition theorem for such classes. The decomposition implies a main gap result for the class K. The setting is general enough to cover $\aleph_0$-stable first-order theories (proved by Shelah in 1982), excellent classes of atomic models of a first order theory (proved by Grossberg and Hart in 1987) and the class of submodels of a large sequentially homogeneous $\aleph_0$-stable model (which is new).

  20. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
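
    The baseline estimator discussed, IPTW for the mean outcome under treatment, fits in a few lines; the data-generating step and the logistic propensity model below are illustrative assumptions, and the C-TMLE procedure itself is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
W = rng.normal(size=(500, 3))                       # baseline covariates
A = rng.binomial(1, 1 / (1 + np.exp(-W[:, 0])))     # treatment assignment
Y = W[:, 0] + A + rng.normal(size=500)              # outcome; true E[Y(1)] = 1

ps = LogisticRegression().fit(W, A).predict_proba(W)[:, 1]   # propensity score
iptw_mean = np.mean(A * Y / ps)                     # IPTW estimate of E[Y(1)]
```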

  1. M-AMST: an automatic 3D neuron tracing method based on mean shift and adapted minimum spanning tree.

    Wan, Zhijiang; He, Yishan; Hao, Ming; Yang, Jian; Zhong, Ning

    2017-03-29

    Understanding the working mechanism of the brain is one of the grandest challenges for modern science. Toward this end, the BigNeuron project was launched to gather a worldwide community to establish a big data resource and a set of state-of-the-art single neuron reconstruction algorithms. Many groups contributed their own algorithms for the project, including our mean shift and minimum spanning tree (M-MST) method. Although M-MST is intuitive and easy to implement, the MST considers only the spatial information of a single neuron and ignores the shape information, which might lead to less precise connections between some neuron segments. In this paper, we propose an improved algorithm, namely M-AMST, in which a rotating sphere model based on coordinate transformation is used to improve the weight calculation method of M-MST. Two experiments are designed to illustrate the effect of the adapted minimum spanning tree algorithm and the adaptability of M-AMST in reconstructing a variety of neuron image datasets, respectively. In experiment 1, taking the reconstruction of APP2 as reference, we produce the four difference scores (entire structure average (ESA), different structure average (DSA), percentage of different structure (PDS) and max distance of neurons' nodes (MDNN)) by comparing the neuron reconstructions of APP2 and the other 5 competing algorithms. The result shows that M-AMST gets lower difference scores than M-MST in ESA, PDS and MDNN. Meanwhile, M-AMST is better than N-MST in ESA and MDNN. It indicates that utilizing the adapted minimum spanning tree algorithm, which takes the shape information of the neuron into account, can achieve better neuron reconstructions. In experiment 2, 7 neuron image datasets are reconstructed and the four difference scores are calculated by comparing the gold standard reconstruction and the reconstructions produced by 6 competing algorithms. Comparing the four difference scores of M-AMST and the other 5 algorithms, we can conclude that

  2. Symbolic logic and mechanical theorem proving

    Chang, Chin-Liang

    1969-01-01

    This book contains an introduction to symbolic logic and a thorough discussion of mechanical theorem proving and its applications. The book consists of three major parts. Chapters 2 and 3 constitute an introduction to symbolic logic. Chapters 4-9 introduce several techniques in mechanical theorem proving, and Chapters 10 and 11 show how theorem proving can be applied to various areas such as question answering, problem solving, program analysis, and program synthesis.

  3. Equivalent conserved currents and generalized Noether's theorem

    Gordon, T.J.

    1984-01-01

    A generalized Noether theorem is presented, relating symmetries and equivalence classes of (local) conservation laws in classical field theories; this is contrasted with the standard theorem. The concept of a ''Noether'' field theory is introduced, being a theory for which the generalized theorem applies; not only does this include the cases of Lagrangian and Hamiltonian field theories, these structures are ''derived'' from the Noether property in a natural way. The generalized theorem applies to currents and symmetries that contain derivatives of the fields up to an arbitrarily high order

  4. Evolutionary trees: an integer multicommodity max-flow-min-cut theorem

    Erdös, Péter L.; Szekely, László A.

    1992-01-01

    In biomathematics, the extensions of a leaf-colouration of a binary tree to the whole vertex set with a minimum number of colour-changing edges are extensively studied. Our paper generalizes the problem to arbitrary trees; algorithms and a Menger-type theorem are presented. The LP dual of the problem is a

  5. Transient state work fluctuation theorem for a classical harmonic ...

    Based on a Hamiltonian description we present a rigorous derivation of the transient state work fluctuation theorem and the Jarzynski equality for a classical harmonic oscillator linearly coupled to a harmonic heat bath, which is dragged by an external agent. Coupling with the bath makes the dynamics dissipative. Since we ...
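
    A numerical illustration of this setting (overdamped rather than full Hamiltonian dynamics, so a sketch of the physics rather than the paper's derivation): for a harmonic trap dragged at constant speed the free energy change vanishes, so Jarzynski's equality predicts ⟨exp(-βW)⟩ = 1 even though ⟨W⟩ > 0. All parameter values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, k, gamma, v = 1.0, 1.0, 1.0, 0.5        # k_B T = 1; trap speed v
dt, steps, ntraj = 1e-3, 2000, 20000
D = 1.0 / (beta * gamma)                       # Einstein relation

x = rng.normal(0.0, 1.0 / np.sqrt(beta * k), ntraj)   # equilibrium start
W = np.zeros(ntraj)
lam = 0.0                                      # trap-center position
for _ in range(steps):
    # thermodynamic work increment: dW = (dH/dlam) dlam = -k (x - lam) v dt
    W += -k * (x - lam) * v * dt
    x += -(k / gamma) * (x - lam) * dt + np.sqrt(2 * D * dt) * rng.normal(size=ntraj)
    lam += v * dt

print(np.mean(W), np.mean(np.exp(-beta * W)))  # <W> > 0, second value ~ 1
```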

  6. Limit theorems for tail processes with applications to intermediate quantile estimation

    Einmahl, J.H.J.

    1992-01-01

    A description of the weak and strong limiting behaviour of weighted uniform tail empirical and tail quantile processes is given. The results for the tail quantile process are applied to obtain weak and strong functional limit theorems for a weighted non-uniform tail-quantile-type process based on a

  7. Liouville's theorem and the method of the inverse problem

    Its, A.R.

    1985-01-01

    An approach to the investigation of the Zakharov-Shabat equations is developed. This approach is based on a classical theorem of Liouville and is the synthesis of ''finite-zone'' integration, the matrix Riemann problem method and the theory of isomonodromy deformations of differential equations. The effectiveness of the proposed scheme is demonstrated by developing ''dressing procedures'' for the Bullough-Dodd equation

  8. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm.

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, namely delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
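
    The algebra referred to, DMAS as a sum of pairwise products of delayed channel signals, can be sketched per pixel as follows; array names and shapes are assumptions, and the MV weighting of MVB-DMAS is not included.

```python
import numpy as np

def das(delayed):
    """delayed: (channels,) array of delayed samples for one image point."""
    return delayed.sum()

def dmas(delayed):
    # signed square root keeps the dimensionality of the pressure signal,
    # while the pairwise products suppress incoherent (off-axis)
    # contributions, lowering sidelobes relative to DAS
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))
    total = s.sum()
    return (total**2 - (s**2).sum()) / 2.0     # sum over i < j of s_i * s_j
```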

  10. Maximum relevance, minimum redundancy band selection based on neighborhood rough set for hyperspectral data classification

    Liu, Yao; Chen, Yuehua; Tan, Kezhu; Xie, Hong; Wang, Liguo; Xie, Wu; Yan, Xiaozhen; Xu, Zhen

    2016-01-01

    Band selection is considered to be an important processing step in handling hyperspectral data. In this work, we selected informative bands according to the maximal relevance minimal redundancy (MRMR) criterion based on neighborhood mutual information. Two measures, the MRMR difference and the MRMR quotient, were defined, and a forward greedy search for band selection was constructed. The performance of the proposed algorithm, along with a comparison with other methods (a neighborhood dependency measure based algorithm, a genetic algorithm and an uninformative variable elimination algorithm), was studied using the classification accuracy of extreme learning machine (ELM) and random forests (RF) classifiers on soybean hyperspectral datasets. The results show that the proposed MRMR algorithm leads to promising improvement in band selection and classification accuracy. (paper)
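
    A rough sketch of the forward greedy search under the "MRMR difference" criterion described above, using a plain histogram mutual-information estimate as a stand-in for the paper's neighborhood mutual information; the function names and bin count are illustrative.

```python
import numpy as np

def mutual_info(x, y, bins=16):
    # Plug-in MI estimate from a joint histogram (a simple stand-in for
    # the neighborhood mutual information used in the paper).
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def mrmr_difference(bands, labels, k):
    # bands: (n_samples, n_bands) spectra; labels: (n_samples,) class codes.
    n = bands.shape[1]
    selected, remaining = [], list(range(n))
    relevance = [mutual_info(bands[:, j], labels) for j in range(n)]
    for _ in range(k):
        def score(j):
            # relevance to the class minus mean redundancy with chosen bands
            red = np.mean([mutual_info(bands[:, j], bands[:, s])
                           for s in selected]) if selected else 0.0
            return relevance[j] - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```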

  11. Stacked spheres and lower bound theorem

    BASUDEB DATTA

    2011-11-20

    Slide excerpt: Preliminaries; the lower bound theorem; ongoing work. Definitions: An n-simplex is the convex hull of n + 1 affinely independent points (called vertices) in some Euclidean space R^N. ...

  12. Unpacking Rouché's Theorem

    Howell, Russell W.; Schrohe, Elmar

    2017-01-01

    Rouché's Theorem is a standard topic in undergraduate complex analysis. It is usually covered near the end of the course with applications relating to pure mathematics only (e.g., using it to produce an alternate proof of the Fundamental Theorem of Algebra). The "winding number" provides a geometric interpretation relating to the…

  13. Other trigonometric proofs of Pythagoras theorem

    Luzia, Nuno

    2015-01-01

    Only very recently was a trigonometric proof of the Pythagoras theorem given, by Zimba \cite{1}; many authors thought this was not possible. In this note we give other trigonometric proofs of the Pythagoras theorem by establishing, geometrically, the half-angle formula $\cos\theta=1-2\sin^2 \frac{\theta}{2}$.

  14. On Newton’s shell theorem

    Borghi, Riccardo

    2014-03-01

    In the present letter, Newton’s theorem for the gravitational field outside a uniform spherical shell is considered. In particular, a purely geometric proof of proposition LXXI/theorem XXXI of Newton’s Principia, which is suitable for undergraduates and even skilled high-school students, is proposed. Minimal knowledge of elementary calculus and three-dimensional Euclidean geometry is required.

  15. Theorems of low energy in Compton scattering

    Chahine, J.

    1984-01-01

    We have obtained the low energy theorems in Compton scattering to third and fourth order in the frequency of the incident photon. Next we calculated the polarized cross section to third order and the unpolarized one to fourth order in terms of partial amplitudes not covered by the low energy theorems, which will permit the experimental determination of these partial amplitudes. (Author) [pt

  16. A density Corradi-Hajnal theorem

    Allen, P.; Böttcher, J.; Hladký, Jan; Piguet, D.

    2015-01-01

    Roč. 67, č. 4 (2015), s. 721-758 ISSN 0008-414X Institutional support: RVO:67985840 Keywords : extremal graph theory * Mantel's theorem * Corradi-Hajnal theorem Subject RIV: BA - General Mathematics Impact factor: 0.618, year: 2015 http://cms.math.ca/10.4153/CJM-2014-030-6

  17. Visualizing the Central Limit Theorem through Simulation

    Ruggieri, Eric

    2016-01-01

    The Central Limit Theorem is one of the most important concepts taught in an introductory statistics course; however, it may be the least understood by students. Sure, students can plug numbers into a formula and solve problems, but conceptually, do they really understand what the Central Limit Theorem is saying? This paper describes a simulation…
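
    A few lines of numpy/matplotlib give the kind of simulation the record describes (the original modules are built in spreadsheet software); the exponential population and the sample sizes here are arbitrary choices for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sampling distribution of the mean of n exponential draws: as n grows,
# the histogram of sample means approaches a normal curve (the CLT).
rng = np.random.default_rng(1)
for n in (1, 5, 30):
    means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
    plt.hist(means, bins=60, density=True, alpha=0.5, label=f"n={n}")
plt.legend()
plt.xlabel("sample mean")
plt.show()
```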

  18. The Classical Version of Stokes' Theorem Revisited

    Markvorsen, Steen

    2008-01-01

    Using only fairly simple and elementary considerations--essentially from first year undergraduate mathematics--we show how the classical Stokes' theorem for any given surface and vector field in R[superscript 3] follows from an application of Gauss' divergence theorem to a suitable modification of the vector field in a tubular shell around the…

  19. The divergence theorem for unbounded vector fields

    De Pauw, Thierry; Pfeffer, Washek F.

    2007-01-01

    In the context of Lebesgue integration, we derive the divergence theorem for unbounded vector fields that can have singularities at every point of a compact set whose Minkowski content of codimension greater than two is finite. The resulting integration by parts theorem is applied to removable sets of holomorphic and harmonic functions.

  20. A Metrized Duality Theorem for Markov Processes

    Kozen, Dexter; Mardare, Radu Iulian; Panangaden, Prakash

    2014-01-01

    We extend our previous duality theorem for Markov processes by equipping the processes with a pseudometric and the algebras with a notion of metric diameter. We are able to show that the isomorphisms of our previous duality theorem become isometries in this quantitative setting. This opens the way…

  1. Nonlinear Dynamic Surface Control of Chaos in Permanent Magnet Synchronous Motor Based on the Minimum Weights of RBF Neural Network

    Shaohua Luo

    2014-01-01

    This paper is concerned with the problem of nonlinear dynamic surface control (DSC) of chaos based on the minimum weights of an RBF neural network for the permanent magnet synchronous motor (PMSM) system, wherein unknown parameters, disturbances, and chaos are present. An RBF neural network is used to approximate the nonlinearities, and an adaptive law is employed to estimate the unknown parameters. Then, a simple and effective controller is designed by introducing the dynamic surface control technique on the basis of first-order filters. Asymptotic tracking stability in the sense of uniform ultimate boundedness is achieved in a short time. Finally, the performance of the proposed controller is verified through simulation results.

  2. A Theorem on Grid Access Control

    XU ZhiWei(徐志伟); BU GuanYing(卜冠英)

    2003-01-01

    The current grid security research is mainly focused on the authentication of grid systems. A problem to be solved by grid systems is to ensure consistent access control. This problem is complicated because the hosts in a grid computing environment usually span multiple autonomous administrative domains. This paper presents a grid access control model, based on asynchronous automata theory and the classic Bell-LaPadula model. This model is useful for formally studying the confidentiality and integrity problems in a grid computing environment. A theorem is proved, which gives the necessary and sufficient conditions for a grid to maintain confidentiality. These conditions are the formalized descriptions of local (node) relations or the relationship between grid subjects and node subjects.
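
    The Bell-LaPadula ingredient of the model boils down to two lattice checks; here is a minimal illustrative sketch (the levels and function names are invented for the example, and the grid-specific node relations of the paper are not modeled).

```python
# Minimal sketch of the two classic Bell-LaPadula checks.
LEVELS = {"public": 0, "internal": 1, "secret": 2}

def can_read(subject_level, object_level):
    # "no read up": a subject may only read objects at or below its level
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # "no write down": a subject may only write objects at or above its level
    return LEVELS[subject_level] <= LEVELS[object_level]

assert can_read("secret", "public") and not can_read("public", "secret")
assert can_write("public", "secret") and not can_write("secret", "public")
```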

  3. Minimum long-term cost solution for remote telecommunication stations on the basis of photovoltaic-based hybrid power systems

    Kaldellis, J.K.; Ninou, I.; Zafirakis, D.

    2011-01-01

    In the case of the expansion of telecommunication (T/C) services to rural and remote areas, the market generally responds with the minimum investments required. Considering the existing situation, cost-effective operation of the T/C infrastructure installed in these regions (i.e. remote T/C stations) becomes critical. However, since in most cases grid connection is not feasible, the electrification solution for remote T/C stations has up to now been based on the operation of costly, oil-consuming and heavily polluting diesel engines. Instead, the use of photovoltaic (PV)-based hybrid power stations is examined here, using as a case study a representative remote T/C station in Greek territory. In this context, the present study concentrates on a detailed cost-benefit analysis of the proposed solution. More precisely, the main part of the analysis is devoted to developing a complete electricity production cost model, applied accordingly to numerous oil consumption and service period scenarios. Note that in all cases examined, zero load rejection is a prerequisite, while the minimum long-term cost solutions designated compare favorably with the diesel-only solution. Finally, a sensitivity analysis demonstrating the impact of the main economic parameters on the energy production cost of optimally sized PV-diesel hybrid power stations is also provided. - Research highlights: → Expansion of telecommunication (T/C) in remote areas is vital for their development. → Off-grid T/C stations employed in such areas operate on diesel engines. → The use of PV-diesel-battery hybrid power stations is examined here. → A detailed long-term electricity production cost model is developed. → Cost-effectiveness of the proposed system is reflected for numerous configurations.

  4. Predicting minimum uncertainties in the inversion of ocean color geophysical parameters based on Cramer-Rao bounds.

    Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique

    2018-01-22

    We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can possibly be attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, respectively, and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, respectively. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge of one or several geophysical parameters can improve the estimation of the remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool for characterizing minimum uncertainties in inverted ocean color geophysical parameters.
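
    For a Gaussian noise model with parameter-independent covariance, the CRB is the diagonal of the inverse Fisher matrix J^T Σ^(-1) J; a generic numerical sketch under that assumption follows (the forward model, its calling convention and the step size are placeholders, not the paper's bio-optical models).

```python
import numpy as np

def crb(theta, forward_model, noise_cov, eps=1e-6):
    """Cramer-Rao bounds for a reflectance model r = f(theta) + n, with
    n ~ N(0, noise_cov). Returns the minimum variance attainable by any
    unbiased estimator of each parameter (diagonal of inverse Fisher)."""
    theta = np.asarray(theta, dtype=float)
    r0 = forward_model(theta)
    # Numerical Jacobian of the forward model, one parameter at a time.
    J = np.empty((r0.size, theta.size))
    for i in range(theta.size):
        dt = np.zeros_like(theta)
        dt[i] = eps
        J[:, i] = (forward_model(theta + dt) - r0) / eps
    fisher = J.T @ np.linalg.solve(noise_cov, J)
    return np.diag(np.linalg.inv(fisher))
```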

  5. An Adaptive Clustering Approach Based on Minimum Travel Route Planning for Wireless Sensor Networks with a Mobile Sink.

    Tang, Jiqiang; Yang, Wu; Zhu, Lingyun; Wang, Dong; Feng, Xin

    2017-04-26

    In recent years, Wireless Sensor Networks with a Mobile Sink (WSN-MS) have been an active research topic due to the widespread use of mobile devices. However, how to balance data delivery latency against energy consumption becomes a key issue for WSN-MS. In this paper, we study the clustering approach by jointly considering the Route planning for the mobile sink and the Clustering Problem (RCP) for static sensor nodes. We solve the RCP problem by using the minimum travel route clustering approach, which applies the minimum travel route of the mobile sink to guide the clustering process. We formulate the RCP problem as an Integer Non-Linear Programming (INLP) problem to shorten the travel route of the mobile sink under three constraints: the communication hops constraint, the travel route constraint and the loop avoidance constraint. We then propose an Imprecise Induction Algorithm (IIA) based on the property that a solution with a small hop count is more feasible than one with a large hop count. The IIA algorithm includes three processes: initializing travel route planning with a Traveling Salesman Problem (TSP) algorithm, transforming a cluster head into a cluster member and transforming a cluster member into a cluster head. Extensive experimental results show that the IIA algorithm can automatically adjust cluster heads according to the maximum hops parameter and plan a shorter travel route for the mobile sink. Compared with the Shortest Path Tree-based Data-Gathering Algorithm (SPT-DGA), the IIA algorithm has the characteristics of shorter route length, smaller cluster head count and faster convergence rate.
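
    The IIA algorithm initializes the travel route with a TSP algorithm; as an illustration only, here is the simplest such heuristic (nearest neighbor) over cluster-head positions. It is not the paper's actual TSP routine, and the function name is invented.

```python
import numpy as np

def nearest_neighbor_tour(points, start=0):
    # points: (n, 2) array of cluster-head coordinates; returns a visiting
    # order for the mobile sink (the sink returns to tour[0], closing the loop).
    n = len(points)
    unvisited = set(range(n)) - {start}
    tour, cur = [start], start
    while unvisited:
        nxt = min(unvisited,
                  key=lambda j: np.linalg.norm(points[cur] - points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
        cur = nxt
    return tour
```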

  6. Uniqueness theorems in linear elasticity

    Knops, Robin John

    1971-01-01

    The classical result for uniqueness in elasticity theory is due to Kirchhoff. It states that the standard mixed boundary value problem for a homogeneous isotropic linear elastic material in equilibrium and occupying a bounded three-dimensional region of space possesses at most one solution in the classical sense, provided the Lamé and shear moduli, λ and μ respectively, obey the inequalities (3λ + 2μ) > 0 and μ > 0. In linear elastodynamics the analogous result, due to Neumann, is that the initial-mixed boundary value problem possesses at most one solution provided the elastic moduli satisfy the same set of inequalities as in Kirchhoff's theorem. Most standard textbooks on the linear theory of elasticity mention only these two classical criteria for uniqueness and neglect altogether the abundant literature which has appeared since the original publications of Kirchhoff. To remedy this deficiency it seems appropriate to attempt a coherent description of the various contributions made to the study of uniqueness…

  7. The low-energy theorem of pion photoproduction using the Skyrme model

    Ikehashi, T.; Ohta, K.

    1995-01-01

    We reassess the validity of the current-algebra based low-energy theorem of pion photoproduction on the nucleon using the Skyrme model. We find that one of the off-shell electromagnetic form factors of the nucleon exhibits infrared divergence in the chiral limit. This contribution introduces an additional term to the threshold amplitude predicted by the low-energy theorem. The emergence of the additional term indicates an unavoidable necessity of off-shell form factors in deriving the low-energy theorem. In the case of neutral pion production, the new contribution to the threshold amplitude is found to be comparable in magnitude to the low-energy theorem's prediction and has the opposite sign. In the charged pion production channels, the correction to the theorem is shown to be relatively small. (orig.)

  8. Riemannian and Lorentzian flow-cut theorems

    Headrick, Matthew; Hubeny, Veronika E.

    2018-05-01

    We prove several geometric theorems using tools from the theory of convex optimization. In the Riemannian setting, we prove the max flow-min cut (MFMC) theorem for boundary regions, applied recently to develop a ‘bit-thread’ interpretation of holographic entanglement entropies. We also prove various properties of the max flow and min cut, including respective nesting properties. In the Lorentzian setting, we prove the analogous MFMC theorem, which states that the volume of a maximal slice equals the flux of a minimal flow, where a flow is defined as a divergenceless timelike vector field with norm at least 1. This theorem includes as a special case a continuum version of Dilworth’s theorem from the theory of partially ordered sets. We include a brief review of the necessary tools from the theory of convex optimization, in particular Lagrangian duality and convex relaxation.

  9. SU-F-T-574: MLC Based SRS Beam Commissioning - Minimum Target Size Investigation

    Zakikhani, R [Florida Cancer Specialists - Largo, Largo, FL (United States); Able, C [Florida Cancer Specialists - New Port Richey, New Port Richey, FL (United States)

    2016-06-15

    Purpose: To implement an MLC accelerator based SRS program using small fields down to 1 cm × 1 cm and to determine the smallest target size safe for clinical treatment. Methods: Computerized beam scanning was performed in water using a diode detector and a linac-head attached transmission ion chamber to characterize the small field dosimetric aspects of a 6 MV photon beam (Trilogy-Varian Medical Systems, Inc.). The output factors, PDD and profiles of field sizes 1, 2, 3, 4, and 10 cm² were measured and utilized to create a new treatment planning system (TPS) model (AAA ver 11021). Static MLC SRS treatment plans were created and delivered to a homogeneous phantom (Cube 20, CIRS, Inc.) for a 1.0 cm and 1.5 cm “PTV” target. A 12-field DMLC plan was created for a 2.1 cm target. Radiochromic film (EBT3, Ashland Inc.) was used to measure the planar dose in the axial, coronal and sagittal planes. A micro ion chamber (0.007 cc) was used to measure the dose at isocenter for each treatment delivery. Results: The new TPS model was validated by using a tolerance criteria of 2% dose and 2 mm distance to agreement. For fields ≤ 3 cm², the max PDD, profile and OF difference was 0.9%, 2%/2mm and 1.4% respectively. The measured radiochromic film planar dose distributions had gamma scores of 95.3% or higher using a 3%/2mm criteria. Ion chamber measurements for all 3 test plans effectively met our goal of delivering the dose accurately to within 5% when compared to the expected dose reported by the TPS (1 cm plan Δ= −5.2%, 1.5 cm plan Δ= −2.0%, 2 cm plan Δ= 1.5%). Conclusion: End to end testing confirmed that MLC defined SRS for target sizes ≥ 1.0 cm can be safely planned and delivered.

  10. OTTER, Resolution Style Theorem Prover

    McCune, W.W.

    2001-01-01

    1 - Description of program or function: OTTER (Other Techniques for Theorem-proving and Effective Research) is a resolution-style theorem-proving program for first-order logic with equality. OTTER includes the inference rules binary resolution, hyper-resolution, UR-resolution, and binary paramodulation. These inference rules take a small set of clauses and infer a clause. If the inferred clause is new and useful, it is stored and may become available for subsequent inferences. Other capabilities are conversion from first-order formulas to clauses, forward and back subsumption, factoring, weighting, answer literals, term ordering, forward and back demodulation, and evaluable functions and predicates. 2 - Method of solution: For its inference process OTTER uses the given-clause algorithm, which can be viewed as a simple implementation of the set-of-support strategy. OTTER maintains three lists of clauses: axioms, sos (set of support), and demodulators. OTTER is not automatic. Even after the user has encoded a problem into first-order logic or into clauses, the user must choose inference rules, set options to control the processing of inferred clauses, and decide which input formulae or clauses are to be in the initial set of support and which, if any, equalities are to be demodulators. If OTTER fails to find a proof, the user may try again with different initial conditions. 3 - Restrictions on the complexity of the problem - Maxima of: 5000 characters in an input string, 64 distinct variables in a clause, 51 characters in any symbol. The maxima can be changed by finding the appropriate definition in the header.h file, increasing the limit, and recompiling OTTER. There are a few constraints on the order of commands
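
    The given-clause loop described under "Method of solution" can be sketched in a few lines. This is a schematic skeleton, not OTTER's implementation: `infer` and `keep` stand in for the inference rules and the subsumption/weighting filters, and clauses are represented here as frozensets of literals.

```python
def given_clause_loop(sos, axioms, infer, keep, max_steps=10_000):
    """Skeleton of the given-clause algorithm: repeatedly pick a clause
    from the set of support, infer with everything retained so far, and
    add new, useful clauses back to sos."""
    usable = list(axioms)
    for _ in range(max_steps):
        if not sos:
            return usable                 # saturated: no proof found
        given = sos.pop(0)                # selection heuristic goes here
        usable.append(given)
        for clause in infer(given, usable):
            if clause == frozenset():     # empty clause: contradiction found
                return "proof found"
            if keep(clause, usable, sos): # subsumption / weighting filters
                sos.append(clause)
    return "gave up"
```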

  11. A New Minimum Trees-Based Approach for Shape Matching with Improved Time Computing: Application to Graphical Symbols Recognition

    Franco, Patrick; Ogier, Jean-Marc; Loonis, Pierre; Mullot, Rémy

    Recently we have developed a model for shape description and matching. Based on minimum spanning tree construction and specific stages like the mixture, it seems to have many desirable properties. Recognition invariance under shifted, rotated and noisy shapes was checked through medium-scale tests on the GREC symbol reference database. Even if extracting the topology of a shape by mapping the shortest path connecting all the pixels seems to be powerful, the construction of the graph induces an expensive algorithmic cost. In this article we discuss ways to reduce computing time. An alternative solution based on image compression concepts is provided and evaluated. The model no longer operates in the image space but in a compact space, namely the discrete cosine space. The use of the block discrete cosine transform is discussed and justified. The experimental results obtained on the GREC2003 database show that the proposed method is characterized by a good discrimination power and a real robustness to noise, with an acceptable computing time.
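
    To make the compact-space idea concrete, here is a scipy sketch: the low-frequency corner of a block's 2-D DCT as a signature, and the total minimum-spanning-tree length over feature points as a crude descriptor. This illustrates the ingredients (DCT space, minimum spanning tree), not the authors' full model with its mixture stage; all names are invented.

```python
import numpy as np
from scipy.fftpack import dct
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def shape_signature(block, n_coeffs=8):
    # Keep the low-frequency corner of the 2-D DCT of an image block,
    # so later comparisons happen in the compact cosine space.
    c = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
    return c[:n_coeffs, :n_coeffs].ravel()

def mst_weight(points):
    # Total edge length of the minimum spanning tree over feature points.
    d = squareform(pdist(points))
    return minimum_spanning_tree(d).sum()
```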

  12. A test to measure the minimum burning pressure of water-based commercial explosives and their precursors

    Turcotte, R.; Feng, H.; Badeen, C.M.; Goldthorp, S.; Johnson, C. [Natural Resources Canada, Ottawa, ON (Canada). Canadian Explosives Research Laboratory; Chan, S.K. [Orica Canada Inc., Brownsburg-Chatham, PQ (Canada)

    2009-05-15

    This paper described a testing protocol developed to measure the minimum burning pressure (MBP) of ammonium nitrate water-based emulsions (AWEs). Oxidizer solutions were prepared in a stainless steel beaker. A modified commercial mixer was used to emulsify the oil-surfactant phase with the oxidizer solutions and blend dry ingredients. Five high water content AWEs were then prepared and placed in pressurized vessels. Samples were ignited using a straight length of nichrome wire. Emulsion samples were transferred into a cylindrical test cell painted with non-conductive paint. Copper conductor leg-wires were connected to electrodes passing through the body of the vessel. When samples were equilibrated to the desired initial pressure, a constant current was supplied to the hot wire. Solid state relays were used to switch the current power supply on and off. Hot wire voltage signals were used to obtain temperature profiles for onset and ignition temperatures. The procedure to perform the MBP measurements was based on 3 types of classifying events, namely (1) no reaction, (2) partial reaction, and (3) slow decomposition. Results of the tests demonstrated that the 5 emulsions exhibited large differences in respective MBP values. Data from the study will be used to develop standards for the authorization of high explosives in Canada. 15 refs., 1 tab., 3 figs.

  13. Wavelet-based multiscale analysis of minimum toe clearance variability in the young and elderly during walking.

    Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu

    2007-01-01

    As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of a wavelet-based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and screening of balance impairments in the elderly. MTC during walking on a treadmill for 30 healthy young, 27 healthy elderly and 10 falls-risk elderly subjects with a history of tripping falls was analyzed. The MTC signal from each subject was decomposed into eight detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of the detailed signals at scales 8 to 1 were calculated. The multiscale exponent (beta) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (p…) … pathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity to initiate preemptive measures to avoid injurious falls.
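
    A compact sketch of the multiscale computation, assuming the PyWavelets package and an MTC series long enough for eight decomposition levels; the wavelet choice and the scale-labeling convention are assumptions, not the paper's.

```python
import numpy as np
import pywt

def multiscale_exponent(mtc, wavelet='db4', levels=8):
    # Detail variances of the MTC series at successive wavelet scales,
    # and the slope (beta) of their progression across scales.
    coeffs = pywt.wavedec(mtc, wavelet, level=levels)
    detail_vars = [np.var(d) for d in coeffs[1:]]  # coarsest to finest detail
    scales = np.arange(levels, 0, -1)              # label coeffs[1] as scale 8
    beta = np.polyfit(scales, np.log2(detail_vars), 1)[0]
    return detail_vars, beta
```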

  14. Prolonging fuel cell stack lifetime based on Pontryagin's Minimum Principle in fuel cell hybrid vehicles and its economic influence evaluation

    Zheng, C. H.; Xu, G. Q.; Park, Y. I.; Lim, W. S.; Cha, S. W.

    2014-02-01

    The lifetime of fuel cell stacks is a major issue currently, especially for automotive applications. In order to take into account the lifetime of fuel cell stacks while considering the fuel consumption minimization in fuel cell hybrid vehicles (FCHVs), a Pontryagin's Minimum Principle (PMP)-based power management strategy is proposed in this research. This strategy has the effect of prolonging the lifetime of fuel cell stacks. However, there is a tradeoff between the fuel cell stack lifetime and the fuel consumption when this strategy is applied to an FCHV. Verifying the positive economic influence of this strategy is necessary in order to demonstrate its superiority. In this research, the economic influence of the proposed strategy is assessed according to an evaluating cost which is dependent on the fuel cell stack cost, the hydrogen cost, the fuel cell stack lifetime, and the lifetime prolonging impact on the fuel cell stack. Simulation results derived from the proposed power management strategy are also used to evaluate the economic influence. As a result, the positive economic influence of the proposed PMP-based power management strategy is proved for both current and future FCHVs.
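
    At each instant, a PMP strategy of this kind reduces to minimizing a Hamiltonian that trades hydrogen consumption against weighted battery usage. Below is a minimal sketch with an illustrative linear battery model; the fuel-rate map, power grid and efficiency are placeholders, and the costate update and the lifetime-prolonging penalty of the paper are not modeled.

```python
import numpy as np

def pmp_power_split(p_demand, costate, fc_grid, fuel_rate, batt_eff=0.95):
    """One PMP step: pick the fuel cell power minimizing the Hamiltonian
    H = fuel_rate(P_fc) + costate * SOC_rate(P_batt).
    fuel_rate: callable giving hydrogen consumption at P_fc [W];
    the battery covers the remainder of the demand."""
    best, best_h = None, np.inf
    for p_fc in fc_grid:
        p_batt = p_demand - p_fc
        soc_rate = -p_batt / batt_eff   # crude linear battery model
        h = fuel_rate(p_fc) + costate * soc_rate
        if h < best_h:
            best, best_h = p_fc, h
    return best
```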

  15. The classical version of Stokes' Theorem revisited

    Markvorsen, Steen

    2008-01-01

    Using only fairly simple and elementary considerations - essentially from first year undergraduate mathematics - we show how the classical Stokes' theorem for any given surface and vector field in $\mathbb{R}^{3}$ follows from an application of Gauss' divergence theorem to a suitable modification of the vector field in a tubular shell around the given surface. … exercise, which simply relates the concepts of divergence and curl on the local differential level. The rest of the paper uses only integration in $1$, $2$, and $3$ variables together with a 'fattening' technique for surfaces and the inverse function theorem.

  16. Black holes, information, and the universal coefficient theorem

    Patrascu, Andrei T. [Department of Physics and Astronomy, University College London, London WC1E 6BT (United Kingdom)

    2016-07-15

    General relativity is based on the diffeomorphism covariant formulation of the laws of physics while quantum mechanics is based on the principle of unitary evolution. In this article, I provide a possible answer to the black hole information paradox by means of homological algebra and pairings generated by the universal coefficient theorem. The unitarity of processes involving black holes is restored by demanding invariance of the laws of physics under the change of coefficient structures in cohomology.

  17. Security Theorems via Model Theory

    Joshua Guttman

    2009-11-01

    A model-theoretic approach can establish security theorems for cryptographic protocols. Formulas expressing authentication and non-disclosure properties of protocols have a special form: they are quantified implications (for all xs)(phi implies (for some ys) psi). Models (interpretations) for these formulas are *skeletons*, partially ordered structures consisting of a number of local protocol behaviors. *Realized* skeletons contain enough local sessions to explain all the behavior, when combined with some possible adversary behaviors. We show two results. (1) If phi is the antecedent of a security goal, then there is a skeleton A_phi such that, for every skeleton B, phi is satisfied in B iff there is a homomorphism from A_phi to B. (2) A protocol enforces (for all xs)(phi implies (for some ys) psi) iff every realized homomorphic image of A_phi satisfies psi. Hence, to verify a security goal, one can use the Cryptographic Protocol Shapes Analyzer CPSA (TACAS, 2007) to identify minimal realized skeletons, or "shapes," that are homomorphic images of A_phi. If psi holds in each of these shapes, then the goal holds.

  18. A generalization of Schauder's theorem and its application to Cauchy-Kovalevskaya problem

    Oleg Zubelevich

    2003-05-01

    We extend the classical majorant functions method to a PDE system whose right-hand side is a mapping from one functional space to another. This extension is based on a generalization of the Schauder fixed point theorem.

  19. Dimensional analysis beyond the Pi theorem

    Zohuri, Bahman

    2017-01-01

    Dimensional Analysis and Physical Similarity are well understood subjects, and the general concepts of dynamical similarity are explained in this book. Our exposition is essentially different from those available in the literature, although it follows the general ideas known as Pi Theorem. There are many excellent books that one can refer to; however, dimensional analysis goes beyond Pi theorem, which is also known as Buckingham’s Pi Theorem. Many techniques via self-similar solutions can bound solutions to problems that seem intractable. A time-developing phenomenon is called self-similar if the spatial distributions of its properties at different points in time can be obtained from one another by a similarity transformation, and identifying one of the independent variables as time. However, this is where Dimensional Analysis goes beyond Pi Theorem into self-similarity, which has represented progress for researchers. In recent years there has been a surge of interest in self-similar solutions of the First ...

  20. Stable convergence and stable limit theorems

    Häusler, Erich

    2015-01-01

    The authors present a concise but complete exposition of the mathematical theory of stable convergence and give various applications in different areas of probability theory and mathematical statistics to illustrate the usefulness of this concept. Stable convergence holds in many limit theorems of probability theory and statistics – such as the classical central limit theorem – which are usually formulated in terms of convergence in distribution. Originated by Alfred Rényi, the notion of stable convergence is stronger than the classical weak convergence of probability measures. A variety of methods is described which can be used to establish this stronger stable convergence in many limit theorems which were originally formulated only in terms of weak convergence. Naturally, these stronger limit theorems have new and stronger consequences which should not be missed by neglecting the notion of stable convergence. The presentation will be accessible to researchers and advanced students at the master's level...

  1. Theorem on axially symmetric gravitational vacuum configurations

    Papadopoulos, A; Le Denmat, G [Paris-6 Univ., 75 (France). Inst. Henri Poincare

    1977-01-24

    A theorem is proved which asserts the non-existence of axially symmetric gravitational vacuum configurations with non-stationary rotation only. Possible consequences for black-hole physics are suggested.

  2. Non-renormalisation theorems in string theory

    Vanhove, P.

    2007-10-01

    In this thesis we describe various non-renormalisation theorems for the string effective action. These results are derived in the context of the M-theory conjecture, which allows one to connect the four-graviton string theory S-matrix elements with those of eleven-dimensional supergravity. These theorems imply that the N = 8 supergravity theory has the same UV behaviour as the N = 4 supersymmetric Yang-Mills theory at least up to three loops, and could be UV finite in four dimensions. (author)

  3. There is No Quantum Regression Theorem

    Ford, G.W.; OConnell, R.F.

    1996-01-01

    The Onsager regression hypothesis states that the regression of fluctuations is governed by macroscopic equations describing the approach to equilibrium. It is here asserted that this hypothesis fails in the quantum case. This is shown first by explicit calculation for the example of quantum Brownian motion of an oscillator and then in general from the fluctuation-dissipation theorem. It is asserted that the correct generalization of the Onsager hypothesis is the fluctuation-dissipation theorem. copyright 1996 The American Physical Society

  4. The matrix Euler-Fermat theorem

    Arnol'd, Vladimir I

    2004-01-01

    We prove many congruences for binomial and multinomial coefficients as well as for the coefficients of the Girard-Newton formula in the theory of symmetric functions. These congruences also imply congruences (modulo powers of primes) for the traces of various powers of matrices with integer elements. We thus have an extension of the matrix Fermat theorem similar to Euler's extension of the numerical little Fermat theorem
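
    The simplest instance of the matrix extension is the trace congruence tr(A^p) ≡ tr(A) (mod p) for an integer matrix A and a prime p, which is easy to spot-check numerically; the helper below is illustrative.

```python
import numpy as np

def mat_pow_mod(a, k, m):
    # Integer matrix power with entries reduced mod m (fast exponentiation;
    # object dtype keeps exact Python integers throughout).
    n = len(a)
    r = np.eye(n, dtype=object)
    a = np.array(a, dtype=object) % m
    while k:
        if k & 1:
            r = r.dot(a) % m
        a = a.dot(a) % m
        k >>= 1
    return r

# Matrix little Fermat: tr(A^p) is congruent to tr(A) mod p for prime p.
A = [[3, 1], [4, 7]]
for p in (2, 3, 5, 7, 11, 13):
    assert int(np.trace(mat_pow_mod(A, p, p)) - np.trace(np.array(A))) % p == 0
```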

  5. Level comparison theorems and supersymmetric quantum mechanics

    Baumgartner, B.; Grosse, H.

    1986-01-01

    The sign of the Laplacian of the spherically symmetric potential determines the order of energy levels with the same principal Coulomb quantum number. This recently derived theorem has been generalized, extended and applied to various situations in particle, nuclear and atomic physics. Besides a comparison theorem, the essential step was the use of supersymmetric quantum mechanics. Recently worked-out applications of supersymmetric quantum mechanics to index problems of Dirac operators are mentioned. (Author)

  6. Liouville's theorem and phase-space cooling

    Mills, R.L.; Sessler, A.M.

    1993-01-01

    A discussion is presented of Liouville's theorem and its consequences for conservative dynamical systems. A formal proof of Liouville's theorem is given. The Boltzmann equation is derived, and the collisionless Boltzmann equation is shown to be rigorously true for a continuous medium. The Fokker-Planck equation is derived. Discussion is given as to when the various equations are applicable and, in particular, under what circumstances phase space cooling may occur

  7. The Osgood-Schoenflies theorem revisited

    Siebenmann, L C

    2005-01-01

    The very first unknotting theorem of a purely topological character established that every compact subset of the Euclidean plane homeomorphic to a circle can be moved onto a round circle by a globally defined self-homeomorphism of the plane. This difficult hundred-year-old theorem is here celebrated with a partly new elementary proof, and a first but tentative account of its history. Some quite fundamental corollaries of the proof are sketched, and some generalizations are mentioned

  8. Double soft theorem for perturbative gravity

    Saha, Arnab

    2016-01-01

    Following up on the recent work of Cachazo, He and Yuan \\cite{arXiv:1503.04816 [hep-th]}, we derive the double soft graviton theorem in perturbative gravity. We show that the double soft theorem derived using CHY formula precisely matches with the perturbative computation involving Feynman diagrams. In particular, we find how certain delicate limits of Feynman diagrams play an important role in obtaining this equivalence.

  9. A Converse of Fermat's Little Theorem

    Bruckman, P. S.

    2007-01-01

    As the name of the paper implies, a converse of Fermat's Little Theorem (FLT) is stated and proved. FLT states the following: if p is any prime, and x any integer, then x[superscript p] [equivalent to] x (mod p). There is already a well-known converse of FLT, known as Lehmer's Theorem, which is as follows: if x is an integer coprime with m, such…

  10. The large deviations theorem and ergodicity

    Gu Rongbao

    2007-01-01

    In this paper, some relationships between stochastic and topological properties of dynamical systems are studied. For a continuous map f from a compact metric space X into itself, we show that if f satisfies the large deviations theorem then it is topologically ergodic. Moreover, we introduce the topologically strong ergodicity, and prove that if f is a topologically strongly ergodic map satisfying the large deviations theorem then it is sensitively dependent on initial conditions

  11. Pascal’s Theorem in Real Projective Plane

    Coghetto Roland

    2017-01-01

    In this article we check, with the Mizar system [2], Pascal’s theorem in the real projective plane (in projective geometry Pascal’s theorem is also known as the Hexagrammum Mysticum Theorem). Pappus’ theorem is a special case of a degenerate conic of two lines.

  12. Pascal’s Theorem in Real Projective Plane

    Coghetto Roland

    2017-07-01

    In this article we check, with the Mizar system [2], Pascal’s theorem in the real projective plane (in projective geometry Pascal’s theorem is also known as the Hexagrammum Mysticum Theorem). Pappus’ theorem is a special case of a degenerate conic of two lines.

  13. The direct Flow parametric Proof of Gauss' Divergence Theorem revisited

    Markvorsen, Steen

    The standard proof of the divergence theorem in undergraduate calculus courses covers the theorem for static domains between two graph surfaces. We show that within the first year undergraduate curriculum, the flow proof of the dynamic version of the divergence theorem - which is usually considered … we apply the key instrumental concepts and verify the various steps towards this alternative proof of the divergence theorem.

  14. Commentaries on Hilbert's Basis Theorem

    The famous basis theorem of David Hilbert is an important theorem in commutative algebra. In particular, Hilbert's basis theorem is the most important source of Noetherian rings, which are by far the most important class of rings in commutative algebra. In this paper we have used Hilbert's theorem to examine their unique ...

  15. Illustrating the Central Limit Theorem through Microsoft Excel Simulations

    Moen, David H.; Powell, John E.

    2005-01-01

    Using Microsoft Excel, several interactive, computerized learning modules are developed to demonstrate the Central Limit Theorem. These modules are used in the classroom to enhance the comprehension of this theorem. The Central Limit Theorem is a very important theorem in statistics, and yet because it is not intuitively obvious, statistics…

  16. Theorem on magnet fringe field

    Wei, Jie; Talman, R.

    1995-01-01

    Transverse particle motion in particle accelerators is governed almost totally by non-solenoidal magnets for which the body magnetic field can be expressed as a series expansion of the normal ($b_n$) and skew ($a_n$) multipoles, $B_y + iB_x = \sum_n (b_n + ia_n)(x + iy)^n$, where x, y, and z denote horizontal, vertical, and longitudinal (along the magnet) coordinates. Since the magnet length L is necessarily finite, deflections are actually proportional to ''field integrals'' such as $\bar{B}L \equiv \int B(x,y,z)\,dz$, where the integration range starts well before the magnet and ends well after it. For $\bar{a}_n$, $\bar{b}_n$, $\bar{B}_x$, and $\bar{B}_y$ defined this way, the same expansion is valid, and the ''standard'' approximation is to neglect any deflections not described by this expansion, in spite of the fact that Maxwell's equations demand the presence of longitudinal field components at the magnet ends. The purpose of this note is to provide a semi-quantitative estimate of the importance of $|\Delta p_\parallel|$, the transverse deflection produced by the longitudinal component of the fringe field at one magnet end, relative to $|\Delta p_0|$, the total deflection produced by passage through the whole magnet. To emphasize the generality and simplicity of the result it is given in the form of a theorem. The essence of the proof is an evaluation of the contribution of the longitudinal field $B_z$ from the vicinity of one magnet end since, along a path parallel to the magnet axis such as path BC

  17. Approximation theorems by Meyer-Koenig and Zeller type operators

    Ali Ozarslan, M.; Duman, Oktay

    2009-01-01

    This paper is mainly connected with the approximation properties of Meyer-Koenig and Zeller (MKZ) type operators. We first introduce a general sequence of MKZ operators based on q-integers and then obtain a Korovkin-type approximation theorem for these operators. We also compute their rates of convergence by means of modulus of continuity and the elements of Lipschitz class functionals. Furthermore, we give an rth order generalization of our operators in order to get some explicit approximation results.

  18. Isomorphism Theorem on Vector Spaces over a Ring

    Futa Yuichi

    2017-10-01

    In this article, we formalize in the Mizar system [1, 4] some properties of vector spaces over a ring. We formally prove the first isomorphism theorem of vector spaces over a ring. We also formalize the product space of vector spaces. ℤ-modules are useful for lattice problems such as the LLL (Lenstra, Lenstra and Lovász) base reduction algorithm [5] and cryptographic systems [6, 2].

  19. Low energy theorems of hidden local symmetries

    Harada, Masayasu; Kugo, Taichiro; Yamawaki, Koichi.

    1994-01-01

    We prove to all orders of the loop expansion the low energy theorems of hidden local symmetries in four-dimensional nonlinear sigma models based on the coset space G/H, with G and H being arbitrary compact groups. Although the models are non-renormalizable, the proof is done in an analogous manner to the renormalization proof of gauge theories and two-dimensional nonlinear sigma models by restricting ourselves to the operators with two derivatives (counting a hidden gauge boson field as one derivative), i.e., with dimension 2, which are the only operators relevant to the low energy limit. Through loop-wise mathematical induction based on the Ward-Takahashi identity for the BRS symmetry, we solve renormalization equation for the effective action up to dimension-2 terms plus terms with the relevant BRS sources. We then show that all the quantum corrections to the dimension-2 operators, including the finite parts as well as the divergent ones, can be entirely absorbed into a re-definition (renormalization) of the parameters and the fields in the dimension-2 part of the tree-level Lagrangian. (author)

  20. Quantum fluctuation theorems and power measurements

    Prasanna Venkatesh, B; Watanabe, Gentaro; Talkner, Peter

    2015-01-01

    Work in the paradigm of the quantum fluctuation theorems of Crooks and Jarzynski is determined by projective measurements of energy at the beginning and end of the force protocol. In analogy to classical systems, we consider an alternative definition of work given by the integral of the supplied power determined by integrating up the results of repeated measurements of the instantaneous power during the force protocol. We observe that such a definition of work, in spite of taking account of the process dependence, has different possible values and statistics from the work determined by the conventional two energy measurement approach (TEMA). In the limit of many projective measurements of power, the system’s dynamics is frozen in the power measurement basis due to the quantum Zeno effect leading to statistics only trivially dependent on the force protocol. In general the Jarzynski relation is not satisfied except for the case when the instantaneous power operator commutes with the total Hamiltonian at all times. We also consider properties of the joint statistics of power-based definition of work and TEMA work in protocols where both values are determined. This allows us to quantify their correlations. Relaxing the projective measurement condition, weak continuous measurements of power are considered within the stochastic master equation formalism. Even in this scenario the power-based work statistics is in general not able to reproduce qualitative features of the TEMA work statistics. (paper)
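
    The TEMA work statistics and the Jarzynski check are easy to reproduce for a driven qubit. The following self-contained numerical sketch uses arbitrary Hamiltonians, coupling strength and intermediate unitary; it illustrates the conventional two-energy-measurement definition of work discussed above, not the power-measurement scheme of the paper.

```python
import numpy as np
from scipy.linalg import expm

# Two-energy-measurement (TEMA) work statistics for a driven qubit,
# plus a check of the Jarzynski relation <exp(-beta*W)> = Z1/Z0.
beta = 1.0
sz = np.diag([1.0, -1.0])
sx = np.array([[0, 1], [1, 0]], dtype=float)
H0, H1 = sz, sz + 0.7 * sx                  # initial / final Hamiltonians
U = expm(-1j * 0.3 * (H0 + H1))             # some unitary generated in between

E0, V0 = np.linalg.eigh(H0)
E1, V1 = np.linalg.eigh(H1)
p0 = np.exp(-beta * E0)
p0 /= p0.sum()                              # thermal occupation at t = 0
T = np.abs(V1.conj().T @ U @ V0) ** 2       # |<m|U|n>|^2 transition probabilities

works, probs = [], []
for n in range(2):
    for m in range(2):
        works.append(E1[m] - E0[n])         # W = E_m(final) - E_n(initial)
        probs.append(p0[n] * T[m, n])

lhs = sum(p * np.exp(-beta * w) for p, w in zip(probs, works))
rhs = np.exp(-beta * E1).sum() / np.exp(-beta * E0).sum()
print(lhs, rhs)  # equal up to rounding: Jarzynski holds for TEMA work
```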

  1. Risk adjustment methods for Home Care Quality Indicators (HCQIs) based on the minimum data set for home care

    Hirdes John P

    2005-01-01

    Background: There has been increasing interest in enhancing accountability in health care. As such, several methods have been developed to compare the quality of home care services. These comparisons can be problematic if client populations vary across providers and no adjustment is made to account for these differences. The current paper explores the effects of risk adjustment for a set of home care quality indicators (HCQIs) based on the Minimum Data Set for Home Care (MDS-HC). Methods: A total of 22 home care providers in Ontario and the Winnipeg Regional Health Authority (WRHA) in Manitoba, Canada, gathered data on their clients using the MDS-HC. These assessment data were used to generate HCQIs for each agency and for the two regions. Three types of risk adjustment methods were contrasted: a) client covariates only; b) client covariates plus an "Agency Intake Profile" (AIP) to adjust for ascertainment and selection bias by the agency; and c) client covariates plus the intake Case Mix Index (CMI). Results: The mean age and gender distribution in the two populations was very similar. Across the 19 risk-adjusted HCQIs, Ontario CCACs had a significantly higher AIP adjustment value for eight HCQIs, indicating a greater propensity to trigger on these quality issues on admission. On average, Ontario had unadjusted rates that were 0.3% higher than the WRHA. Following risk adjustment with the AIP covariate, Ontario rates were, on average, 1.5% lower than the WRHA. In the WRHA, individual agencies were likely to experience a decline in their standing, whereby they were more likely to be ranked among the worst performers following risk adjustment. The opposite was true for sites in Ontario. Conclusions: Risk adjustment is essential when comparing quality of care across providers when home care agencies provide services to populations with different characteristics. While such adjustment had a relatively small effect for the two regions, it did…

  2. Risk adjustment methods for Home Care Quality Indicators (HCQIs) based on the minimum data set for home care

    Dalby, Dawn M; Hirdes, John P; Fries, Brant E

    2005-01-01

    Background: There has been increasing interest in enhancing accountability in health care. As such, several methods have been developed to compare the quality of home care services. These comparisons can be problematic if client populations vary across providers and no adjustment is made to account for these differences. The current paper explores the effects of risk adjustment for a set of home care quality indicators (HCQIs) based on the Minimum Data Set for Home Care (MDS-HC). Methods: A total of 22 home care providers in Ontario and the Winnipeg Regional Health Authority (WRHA) in Manitoba, Canada, gathered data on their clients using the MDS-HC. These assessment data were used to generate HCQIs for each agency and for the two regions. Three types of risk adjustment methods were contrasted: a) client covariates only; b) client covariates plus an "Agency Intake Profile" (AIP) to adjust for ascertainment and selection bias by the agency; and c) client covariates plus the intake Case Mix Index (CMI). Results: The mean age and gender distribution in the two populations was very similar. Across the 19 risk-adjusted HCQIs, Ontario CCACs had a significantly higher AIP adjustment value for eight HCQIs, indicating a greater propensity to trigger on these quality issues on admission. On average, Ontario had unadjusted rates that were 0.3% higher than the WRHA. Following risk adjustment with the AIP covariate, Ontario rates were, on average, 1.5% lower than the WRHA. In the WRHA, individual agencies were likely to experience a decline in their standing, whereby they were more likely to be ranked among the worst performers following risk adjustment. The opposite was true for sites in Ontario. Conclusions: Risk adjustment is essential when comparing quality of care across providers when home care agencies provide services to populations with different characteristics. While such adjustment had a relatively small effect for the two regions, it did substantially affect the…

  3. Rolling bearing fault diagnosis based on time-delayed feedback monostable stochastic resonance and adaptive minimum entropy deconvolution

    Li, Jimeng; Li, Ming; Zhang, Jinfeng

    2017-08-01

    Rolling bearings are key components in modern machinery, and tough operating environments often make them prone to failure. However, due to the influence of the transmission path and background noise, the useful feature information relevant to the bearing fault contained in the vibration signals is weak, which makes it difficult to identify the fault symptoms of rolling bearings in time. Therefore, this paper proposes a novel weak signal detection method based on a time-delayed feedback monostable stochastic resonance (TFMSR) system and adaptive minimum entropy deconvolution (MED) to realize the fault diagnosis of rolling bearings. The MED method is employed to preprocess the vibration signals, which can deconvolve the effect of the transmission path and clarify the defect-induced impulses. A modified power spectrum kurtosis (MPSK) index is constructed to realize the adaptive selection of the filter length in the MED algorithm. By introducing a time-delayed feedback term into an over-damped monostable system, the TFMSR method can effectively utilize the historical information of the input signal to enhance the periodicity of the SR output, which is beneficial to the detection of periodic signals. Furthermore, the influence of time delay and feedback intensity on the SR phenomenon is analyzed, and by selecting an appropriate time delay, feedback intensity and re-scaling ratio with a genetic algorithm, SR can be produced to realize the resonance detection of a weak signal. The combination of the adaptive MED (AMED) method and the TFMSR method is conducive to extracting the feature information from strong background noise and realizing the fault diagnosis of rolling bearings. Finally, some experiments and an engineering application are performed to evaluate the effectiveness of the proposed AMED-TFMSR method in comparison with a traditional bistable SR method.
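
    The time-delayed feedback monostable dynamics can be sketched with a plain Euler integration. This illustrative toy uses a quartic monostable potential and fixed parameter values, whereas the paper tunes the delay, feedback intensity and re-scaling ratio with a genetic algorithm; all names and values here are assumptions.

```python
import numpy as np

def tfmsr(signal, dt, a=1.0, k=0.4, tau=0.5, rescale=1.0):
    """Euler integration of an over-damped monostable system with delay:
        x'(t) = -a*x(t)**3 + k*x(t - tau) + rescale*signal(t),
    i.e. a quartic monostable potential U(x) = a*x^4/4 plus a linear
    time-delayed feedback term."""
    n = len(signal)
    lag = max(1, int(round(tau / dt)))
    x = np.zeros(n)
    for i in range(1, n):
        x_delayed = x[i - 1 - lag] if i - 1 - lag >= 0 else 0.0
        drift = -a * x[i - 1] ** 3 + k * x_delayed + rescale * signal[i - 1]
        x[i] = x[i - 1] + dt * drift
    return x
```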

  4. The g-theorem and quantum information theory

    Casini, Horacio; Landea, Ignacio Salazar; Torroba, Gonzalo [Centro Atómico Bariloche and CONICET,S.C. de Bariloche, Río Negro, R8402AGP (Argentina)

    2016-10-25

    We study boundary renormalization group flows between boundary conformal field theories in 1+1 dimensions using methods of quantum information theory. We define an entropic g-function for theories with impurities in terms of the relative entanglement entropy, and we prove that this g-function decreases along boundary renormalization group flows. This entropic g-theorem is valid at zero temperature, and is independent from the g-theorem based on the thermal partition function. We also discuss the mutual information in boundary RG flows, and how it encodes the correlations between the impurity and bulk degrees of freedom. Our results provide a quantum-information understanding of (boundary) RG flow as increase of distinguishability between the UV fixed point and the theory along the RG flow.

  5. The Hellmann–Feynman theorem, the comparison theorem, and the envelope theory

    Claude Semay

    2015-01-01

    The envelope theory is a convenient method to compute approximate solutions for bound state equations in quantum mechanics. It is shown that these approximate solutions obey a kind of Hellmann–Feynman theorem, and that the comparison theorem can be applied to these approximate solutions for two ordered Hamiltonians.
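
    The Hellmann–Feynman statement itself, dE/dλ = ⟨ψ|dH/dλ|ψ⟩, is easy to check numerically on a finite-dimensional family H(λ) = H0 + λV; this small random example is for illustration only and does not use the envelope theory.

```python
import numpy as np

# Numerical check of the Hellmann-Feynman theorem on H(lam) = H0 + lam*V.
rng = np.random.default_rng(0)
A, B = rng.normal(size=(5, 5)), rng.normal(size=(5, 5))
H0, V = (A + A.T) / 2, (B + B.T) / 2        # random symmetric Hamiltonians

def ground(lam):
    w, v = np.linalg.eigh(H0 + lam * V)
    return w[0], v[:, 0]                    # ground energy and state

lam, h = 0.3, 1e-6
(e_plus, _), (e_minus, _) = ground(lam + h), ground(lam - h)
_, psi = ground(lam)
# Central finite difference of E(lam) vs the expectation value <psi|V|psi>:
print((e_plus - e_minus) / (2 * h), psi @ V @ psi)  # agree to ~1e-6
```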

  6. MIQE précis: Practical implementation of minimum standard guidelines for fluorescence-based quantitative real-time PCR experiments

    Bustin, S.A.; Beaulieu, J.F.; Huggett, J.; Jaggi, R.; Kibenge, F.S.; Olsvik, P.A.; Penning, L.C.; Toegel, S.

    2010-01-01


  7. Magnetostatic fields computed using an integral equation derived from Green's theorems

    Simkin, J.; Trowbridge, C.W.

    1976-04-01

    A method of computing magnetostatic fields is described that is based on a numerical solution of the integral equation obtained from Green's Theorems. The magnetic scalar potential and its normal derivative on the surfaces of volumes are found by solving a set of linear equations. These are obtained from Green's Second Theorem and the continuity conditions at interfaces between volumes. Results from a two-dimensional computer program are presented and these show the method to be accurate and efficient. (author)

  8. Weak circulation theorems as a way of distinguishing between generalized gravitation theories

    Enosh, M.

    1980-01-01

    It was proved in a previous paper that a generalized circulation theorem characterizes Einstein's theory of gravitation as a special case of a more general theory of gravitation, which is also based on the principle of equivalence. Here the question of whether it is possible to weaken this circulation theorem in such ways that it would imply more general theories than Einstein's is posed. This problem is solved. Principally, there are two possibilities. One of them is essentially Weyl's theory. (author)

  9. An analogue of Wagner's theorem for decompositions of matrix algebras

    Ivanov, D N

    2004-01-01

    Wagner's celebrated theorem states that a finite affine plane whose collineation group is transitive on lines is a translation plane. The notion of an orthogonal decomposition (OD) of a classically semisimple associative algebra introduced by the author allows one to draw an analogy between finite affine planes of order n and ODs of the matrix algebra M_n(C) into a sum of subalgebras conjugate to the diagonal subalgebra. These ODs are called WP-decompositions and are equivalent to the well-known ODs of simple Lie algebras of type A_{n-1} into a sum of Cartan subalgebras. In this paper we give a detailed and improved proof of the analogue of Wagner's theorem for WP-decompositions of the matrix algebra of odd non-square order, an outline of which was earlier published in a short note in 'Russian Math. Surveys' in 1994. In addition, in the framework of the theory of ODs of associative algebras, based on the method of idempotent bases, we obtain an elementary proof of the well-known Kostrikin-Tiep theorem on irreducible ODs of Lie algebras of type A_{n-1} in the case where n is a prime power.

  10. Pengembangan Perangkat Pembelajaran Geometri Ruang dengan Model Proving Theorem

    Bambang Eko Susilo

    2016-03-01

    Students' critical and creative thinking abilities are still weak. This was found among students taking the Space Geometry course, in solving proof problems (problem to prove): students still work algorithmically or procedurally, so Space Geometry learning tools based on competency and conservation with the Proving Theorem model needed to be developed. This is development research following the 4-D model, modified for the Space Geometry learning tools, second semester of academic year 2014/2015. The instruments used include validation sheets, the learning tools and a character assessment questionnaire. The course tools developed, namely the Syllabus, Lesson Plan, Lecture Contract, Learning Media, Teaching Materials, Midterm and Final Tests, and the Conservation Character Questionnaire, were properly implemented, with the criteria that (1) validation of the Space Geometry learning tools based on competency and conservation with the Proving Theorem model was categorized as good and feasible to use, and (2) the implementation of the Lesson Plan in the developed learning was, overall, categorized as good.

  11. Subexponential estimates in Shirshov's theorem on height

    Belov, Aleksei Ya; Kharitonov, Mikhail I

    2012-01-01

    Suppose that $F_{2,m}$ is a free 2-generated associative ring with the identity $x^m=0$. In 1993 Zelmanov put the following question: is it true that the nilpotency degree of $F_{2,m}$ has exponential growth? We give the definitive answer to Zelmanov's question by showing that the nilpotency class of an l-generated associative algebra with the identity $x^d=0$ is smaller than $\Psi(d,d,l)$, where $\Psi(n,d,l)=2^{18}\,l\,(nd)^{3\log_3(nd)+13}\,d^2$. This result is a consequence of the following fact based on combinatorics of words. Let l, n and d≥n be positive integers. Then all words over an alphabet of cardinality l whose length is not less than $\Psi(n,d,l)$ are either n-divisible or contain $x^d$; a word W is n-divisible if it can be represented in the form $W=W_0 W_1 \cdots W_n$ so that $W_1,\dots,W_n$ are placed in lexicographically decreasing order. Our proof uses Dilworth's theorem (according to V.N. Latyshev's idea). We show that the set of words over an alphabet of cardinality l that are not n-divisible has height $h < 87\,l\cdot n^{12\log_3 n+48}$. Bibliography: 40 titles.
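
    For readers unfamiliar with the notion, a tiny brute-force n-divisibility checker (illustrative only, not from the paper; it uses Python's built-in lexicographic string comparison, which treats a proper prefix as smaller):

        from itertools import combinations

        def is_n_divisible(word: str, n: int) -> bool:
            """Brute-force test: word = W0 W1 ... Wn with nonempty factors
            W1, ..., Wn in strictly decreasing lexicographic order."""
            for cuts in combinations(range(len(word)), n):
                parts = [word[a:b] for a, b in zip(cuts, cuts[1:] + (len(word),))]
                if all(parts[i] > parts[i + 1] for i in range(n - 1)):
                    return True
            return False

        print(is_n_divisible("cba", 3))  # True:  "c" > "b" > "a"
        print(is_n_divisible("abc", 2))  # False: no decreasing factorization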

  12. 29 CFR 4.1b - Payment of minimum compensation based on collectively bargained wage rates and fringe benefits...

    2010-07-01

    Payment of minimum compensation based on collectively bargained wage rates and fringe benefits applicable to employment under a predecessor contract. (a) Section 4(c) of the Service Contract Act of 1965 as amended provides special minimum wage and fringe...

  13. Decoding linear error-correcting codes up to half the minimum distance with Gröbner bases

    Bulygin, S.; Pellikaan, G.R.; Sala, M.; Mora, T.; Perret, L.; Sakata, S.; Traverso, C.

    2009-01-01

    In this short note we show how one can decode linear error-correcting codes up to half the minimum distance via solving a system of polynomial equations over a finite field. We also explicitly present the reduced Gröbner basis for the system considered.
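
    As a toy illustration of the computational step involved (the polynomials below are made up and are not the decoding system from the note), SymPy can compute a reduced Gröbner basis over a finite field:

        from sympy import symbols, groebner

        x, y, z = symbols('x y z')
        # Hypothetical stand-in for syndrome/error-locator equations over GF(2);
        # the field equations v**2 - v restrict each variable to {0, 1}.
        F = [x**2 - x, y**2 - y, z**2 - z, x*y - z, x + y + z - 1]
        G = groebner(F, x, y, z, order='lex', modulus=2)
        print(G)  # solutions of the system can be read off by back-substitution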

  14. Portfolio theory of optimal isometric force production: Variability predictions and nonequilibrium fluctuation-dissipation theorem

    Frank, T. D.; Patanarapeelert, K.; Beek, P. J.

    2008-05-01

    We derive a fundamental relationship between the mean and the variability of isometric force. The relationship arises from an optimal collection of active motor units such that the force variability assumes a minimum (optimal isometric force). The relationship is shown to be independent of the explicit motor unit properties and of the dynamical features of isometric force production. A constant coefficient of variation in the asymptotic regime and a nonequilibrium fluctuation-dissipation theorem for optimal isometric force are predicted.

  15. Portfolio theory of optimal isometric force production: Variability predictions and nonequilibrium fluctuation-dissipation theorem

    Frank, T.D.; Patanarapeelert, K.; Beek, P.J.

    2008-01-01

    We derive a fundamental relationship between the mean and the variability of isometric force. The relationship arises from an optimal collection of active motor units such that the force variability assumes a minimum (optimal isometric force). The relationship is shown to be independent of the explicit motor unit properties and of the dynamical features of isometric force production. A constant coefficient of variation in the asymptotic regime and a nonequilibrium fluctuation-dissipation theorem for optimal isometric force are predicted.

  16. The new ISR and collider anti p-p and p-p data and asymptotic theorems

    Nicolescu, B.

    1982-10-01

    We present a general discussion of the rigorous finite-energy effects of asymptotic theorems, with special emphasis on the confrontation with the new ISR and collider anti-pp and pp total cross-section data. We point out the possible existence of a minimum in the difference between the anti-pp and pp total cross-sections.

  17. Thermodynamical and Green function many-body Wick theorems

    Westwanski, B.

    1987-01-01

    The thermodynamical and Green function many-body reduction theorems of Wick type are proved for arbitrary mixtures of fermion, boson and spin systems. 'Many-body' means that the operators used are products of an arbitrary number of one-body standard basis operators [of fermion or (and) spin type] with different site (wave vector) indices, but having the same 'time' (in the interaction representation). The method of proof is based on: 1) the first-order differential equation of Schwinger type for 1a) the anti-T-product of operators and 1b) its average value; 2) KMS boundary conditions for this average. It is shown that fermion, boson and spin systems can be unified in the many-body formulation (bosonification of the fermion systems); this is impossible in the one-body approach. Both many-body versions of the Wick theorem have a recurrent feature: nth-order moment diagrams for the free energy or Green functions can be expressed by the (n-1)th-order ones. This property corresponds to the automatic realization of (i) summations over Bose-Einstein or (and) Fermi-Dirac frequencies and (ii) elimination of Bose-Einstein or (and) Fermi-Dirac distributions. The procedures (i) and (ii), being the results of using the Green function one-body reduction theorem, have constituted a significant difficulty up to now in the treatment of quantum systems. (orig.)

  18. Gleason-Busch theorem for sequential measurements

    Flatt, Kieran; Barnett, Stephen M.; Croke, Sarah

    2017-12-01

    Gleason's theorem is a statement that, given some reasonable assumptions, the Born rule used to calculate probabilities in quantum mechanics is essentially unique [A. M. Gleason, Indiana Univ. Math. J. 6, 885 (1957), 10.1512/iumj.1957.6.56050]. We show that Gleason's theorem contains within it also the structure of sequential measurements, and along with this the state update rule. We give a small set of axioms, which are physically motivated and analogous to those in Busch's proof of Gleason's theorem [P. Busch, Phys. Rev. Lett. 91, 120403 (2003), 10.1103/PhysRevLett.91.120403], from which the familiar Kraus operator form follows. An axiomatic approach has practical relevance as well as fundamental interest, in making clear those assumptions which underlie the security of quantum communication protocols. Interestingly, the two-time formalism is seen to arise naturally in this approach.

  19. Adiabatic Theorem for Quantum Spin Systems

    Bachmann, S.; De Roeck, W.; Fraas, M.

    2017-08-01

    The first proof of the quantum adiabatic theorem was given as early as 1928. Today, this theorem is increasingly applied in a many-body context, e.g., in quantum annealing and in studies of topological properties of matter. In this setup, the rate of variation ɛ of local terms is indeed small compared to the gap, but the rate of variation of the total, extensive Hamiltonian, is not. Therefore, applications to many-body systems are not covered by the proofs and arguments in the literature. In this Letter, we prove a version of the adiabatic theorem for gapped ground states of interacting quantum spin systems, under assumptions that remain valid in the thermodynamic limit. As an application, we give a mathematical proof of Kubo's linear response formula for a broad class of gapped interacting systems. We predict that the density of nonadiabatic excitations is exponentially small in the driving rate and the scaling of the exponent depends on the dimension.

  20. A proof of the Kochen–Specker theorem can always be converted to a state-independent noncontextuality inequality

    Yu, Xiao-Dong; Tong, D M; Guo, Yan-Qing

    2015-01-01

    Quantum contextuality is one of the fundamental notions in quantum mechanics. Proofs of the Kochen–Specker theorem and noncontextuality inequalities are two means for revealing the contextuality phenomenon in quantum mechanics. It has been found that some proofs of the Kochen-Specker theorem, such as those based on rays, can be converted to a state-independent noncontextuality inequality, but it remains open whether this is true in general, i.e., whether any proof of the Kochen-Specker theorem can always be converted to a noncontextuality inequality. In this paper, we address this issue. We prove that all kinds of proofs of the Kochen-Specker theorem, based on rays or any other observables, can always be converted to state-independent noncontextuality inequalities. Besides, our constructive proof also provides a general approach for deriving a state-independent noncontextuality inequality from a proof of the KS theorem. (paper)

  1. A uniform Tauberian theorem in dynamic games

    Khlopin, D. V.

    2018-01-01

    Antagonistic dynamic games including games represented in normal form are considered. The asymptotic behaviour of value in these games is investigated as the game horizon tends to infinity (Cesàro mean) and as the discounting parameter tends to zero (Abel mean). The corresponding Abelian-Tauberian theorem is established: it is demonstrated that in both families the game value uniformly converges to the same limit, provided that at least one of the limits exists. Analogues of one-sided Tauberian theorems are obtained. An example shows that the requirements are essential even for control problems. Bibliography: 31 titles.

  2. The aftermath of the intermediate value theorem

    Morales Claudio H

    2004-01-01

    Full Text Available The solvability of nonlinear equations has awakened great interest among mathematicians for a number of centuries, perhaps as early as the Babylonian culture (3000–300 B.C.E.). However, we intend to bring to our attention that some of the problems studied nowadays appear to be amazingly related to the time of Bolzano's era (1781–1848). Indeed, this Czech mathematician or perhaps philosopher has rigorously proven what is known today as the intermediate value theorem, a result that is intimately related to various classical theorems that will be discussed throughout this work.

  3. Pauli and the spin-statistics theorem

    Duck, Ian M

    1997-01-01

    This book makes broadly accessible an understandable proof of the infamous spin-statistics theorem. This widely known but little-understood theorem is intended to explain the fact that electrons obey the Pauli exclusion principle. This fact, in turn, explains the periodic table of the elements and their chemical properties. Therefore, this one simply stated fact is responsible for many of the principal features of our universe, from chemistry to solid state physics to nuclear physics to the life cycle of stars.In spite of its fundamental importance, it is only a slight exaggeration to say that

  4. At math meetings, enormous theorem eclipses fermat.

    Cipra, B

    1995-02-10

    Hardly a word was said about Fermat's Last Theorem at the joint meetings of the American Mathematical Society and the Mathematical Association of America, held this year from 4 to 7 January in San Francisco. For Andrew Wiles's proof, no news is good news: There are no reports of mistakes. But mathematicians found plenty of other topics to discuss. Among them: a computational breakthrough in the study of turbulent diffusion and progress in slimming down the proof of an important result in group theory, whose original size makes checking the proof of Fermat's Last Theorem look like an afternoon's pastime.

  5. Effects of Important Parameters Variations on Computing Eigenspace-Based Minimum Variance Weights for Ultrasound Tissue Harmonic Imaging

    Heidari, Mehdi Haji; Mozaffarzadeh, Moein; Manwar, Rayyan; Nasiriavanaki, Mohammadreza

    2018-01-01

    In recent years, the minimum variance (MV) beamforming has been widely studied due to its high resolution and contrast in B-mode Ultrasound imaging (USI). However, the performance of the MV beamformer is degraded at the presence of noise, as a result of the inaccurate covariance matrix estimation which leads to a low quality image. Second harmonic imaging (SHI) provides many advantages over the conventional pulse-echo USI, such as enhanced axial and lateral resolutions. However, the low signa...

  6. Resazurin-based 96-well plate microdilution method for the determination of minimum inhibitory concentration of biosurfactants.

    Elshikh, Mohamed; Ahmed, Syed; Funston, Scott; Dunlop, Paul; McGaw, Mark; Marchant, Roger; Banat, Ibrahim M

    2016-06-01

    To develop and validate a microdilution method for measuring the minimum inhibitory concentration (MIC) of biosurfactants. A standardized microdilution method including resazurin dye has been developed for measuring the MIC of biosurfactants and its validity was established through the replication of tetracycline and gentamicin MIC determination with standard bacterial strains. This new method allows the generation of accurate MIC measurements, whilst overcoming critical issues related to colour and solubility which may interfere with growth measurements for many types of biosurfactant extracts.

  7. The Minimum Cost of a Nutritious Diet Study: Building an evidence-base for the prevention of undernutrition in Afghanistan

    Qarizada, Ahmad Nawid

    2014-01-01

    Full text: Background: In Afghanistan, mortality rates are amongst the highest in the world. Mean life expectancy is 62 years, U5MR is 97 deaths per 1,000 live births, and the MMR is 327 deaths per 100,000 live births, while 33% of the population is food insecure. Undernutrition is alarmingly high in children under five, with global acute malnutrition rates of 8.7%, stunting 60.5% and underweight 37.7%, and 72% are iodine and iron deficient. As part of their prevention efforts, WFP and the MOPH carried out a Cost of Diet study (CoD) in Afghanistan in late 2012. Cost of Diet Study: The CoD assesses a household's food and nutrition security based on economic constraints in accessing their nutrient requirements, especially for the most vulnerable, such as children U2 years. Objectives: 1. How important is access to nutritious food to overcome undernutrition in different areas of Afghanistan? 2. Is a nutritious diet available and affordable to the local populations? Methodology: The CoD tool used linear optimization to generate the following output from market surveys and secondary household data: • a diet and the corresponding food baskets that meet all nutritional requirements of a typical family, including a child U2 years, and their costs. Any other diet would be more expensive and/or would not meet their nutritional requirements. The tool calculated the minimum cost of a nutritious diet (MCNUT) in four livelihood zones (LHZ) of Afghanistan. Results: The MCNUT is the baseline nutritious diet. When compared to household income, it shows the number of households that cannot afford to meet their nutrient needs. The MCNUT calculates the cheapest combination of food items and quantities to ensure all energy and nutrient requirements are met. It is theoretical and sometimes unrealistic. The Locally Adapted, Cost Optimised Diet (LACON), obtained using questionnaires and focus group discussions, provides a more realistic diet based on dietary preferences. Findings showed that approximately
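
    The linear-optimization core of a CoD-style calculation can be sketched with a toy diet LP (all prices, nutrient contents and requirements below are invented for illustration, not taken from the study):

        import numpy as np
        from scipy.optimize import linprog

        cost = np.array([0.5, 0.8, 0.3])             # price per 100 g: rice, beans, oil
        # rows: energy (kcal), protein (g); columns: foods (per 100 g)
        nutrients = np.array([[130.0, 140.0, 900.0],
                              [  2.7,   9.0,   0.0]])
        needs = np.array([2100.0, 50.0])              # daily requirements

        # linprog minimizes cost @ x subject to A_ub @ x <= b_ub, so negate
        # the nutrient constraints to turn them into floors.
        res = linprog(cost, A_ub=-nutrients, b_ub=-needs, bounds=(0, None))
        print(res.x, res.fun)                         # quantities (100 g units), cost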

  8. Ertel's vorticity theorem and new flux surfaces in multi-fluid plasmas

    Hameiri, Eliezer

    2013-01-01

    Dedicated to Professor Harold Weitzner on the occasion of his retirement. “Say to wisdom ‘you are my sister,’ and to insight ‘you are my relative.’” (Proverbs 7:4) Based on an extension to plasmas of Ertel's classical vorticity theorem in fluid dynamics, it is shown that for each species in a multi-fluid plasma there can be constructed a set of nested surfaces that have this species' fluid particles confined within them. Variational formulations for the plasma evolution and its equilibrium states are developed, based on the new surfaces and all of the dynamical conservation laws associated with them. It is shown that in the general equilibrium case, the energy principle lacks a minimum and cannot be used as a stability criterion. A limit of the variational integral yields the two-fluid Hall-magnetohydrodynamic (MHD) model. A further special limit yields MHD equilibria and can be used to approximate the equilibrium state of a Hall-MHD plasma in a perturbative way.

  9. Mean value theorem in topological vector spaces

    Khan, L.A.

    1994-08-01

    The aim of this note is to give shorter proofs of the mean value theorem, the mean value inequality, and the mean value inclusion for the class of Gateaux differentiable functions having values in a topological vector space. (author). 6 refs

  10. 1/4-pinched contact sphere theorem

    Ge, Jian; Huang, Yang

    2016-01-01

    Given a closed contact 3-manifold with a compatible Riemannian metric, we show that if the sectional curvature is 1/4-pinched, then the contact structure is universally tight. This result improves the Contact Sphere Theorem in [EKM12], where a 4/9-pinching constant was imposed. Some tightness...

  11. Generalized Friedland's theorem for C0-semigroups

    Cichon, Dariusz; Jung, Il Bong; Stochel, Jan

    2008-07-01

    Friedland's characterization of bounded normal operators is shown to hold for infinitesimal generators of C0-semigroups. New criteria for normality of bounded operators are furnished in terms of the Hamburger moment problem. All this is achieved with the help of the celebrated theorem of Ando on paranormal operators.

  12. Automated theorem proving theory and practice

    Newborn, Monty

    2001-01-01

    As the 21st century begins, the power of our magical new tool and partner, the computer, is increasing at an astonishing rate. Computers that perform billions of operations per second are now commonplace. Multiprocessors with thousands of little computers - relatively little! - can now carry out parallel computations and solve problems in seconds that only a few years ago took days or months. Chess-playing programs are on an even footing with the world's best players. IBM's Deep Blue defeated world champion Garry Kasparov in a match several years ago. Increasingly computers are expected to be more intelligent, to reason, to be able to draw conclusions from given facts, or abstractly, to prove theorems, the subject of this book. Specifically, this book is about two theorem-proving programs, THEO and HERBY. The first four chapters contain introductory material about automated theorem proving and the two programs. This includes material on the language used to express theorems, predicate calculus, and the rules of...

  13. Answering Junior Ant's "Why" for Pythagoras' Theorem

    Pask, Colin

    2002-01-01

    A seemingly simple question in a cartoon about Pythagoras' Theorem is shown to lead to questions about the nature of mathematical proof and the profound relationship between mathematics and science. It is suggested that an analysis of the issues involved could provide a good vehicle for classroom discussions or projects for senior students.…

  14. A Short Proof of Klee's Theorem

    Zanazzi, John J.

    2013-01-01

    In 1959, Klee proved that a convex body $K$ is a polyhedron if and only if all of its projections are polygons. In this paper, a new proof of this theorem is given for convex bodies in $\mathbb{R}^3$.

  15. On Noether's theorem in quantum field theory

    Buchholz, D.; Doplicher, S.; Longo, R.

    1985-03-01

    Extending an earlier construction of local generators of symmetries (S. Doplicher, 1982) to space-time and supersymmetries, we establish a weak form of Noether's theorem in quantum field theory. We also comment on the physical significance of the 'split property' underlying our analysis, and discuss some local aspects of superselection rules following from our results. (orig./HSI)

  16. Green-Tao theorem in function fields

    Le, Thai Hoang

    2009-01-01

    We adapt the proof of the Green-Tao theorem on arithmetic progressions in primes to the setting of polynomials over a finite field, to show that for every $k$, the irreducible polynomials in $\mathbf{F}_q[t]$ contain configurations of the form $\{f+ Pg : \d(P)

  17. Central Limit Theorem for Coloured Hard Dimers

    Maria Simonetta Bernabei

    2010-01-01

    Full Text Available We study the central limit theorem for a class of coloured graphs. This means that we investigate the limit behavior of certain random variables whose values are combinatorial parameters associated to these graphs. The techniques used in arriving at this result comprise combinatorics, generating functions, and conditional expectations.

  18. Reciprocity theorem in high-temperature superconductors

    Janeček, I.; Vašek, Petr

    2003-01-01

    Roč. 390, - (2003), s. 330-340 ISSN 0921-4534 R&D Projects: GA ČR GA202/00/1602; GA AV ČR IAA1010919 Institutional research plan: CEZ:AV0Z1010914 Keywords : transport properties * reciprocity theorem Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.192, year: 2003

  19. Some Generalizations of Jungck's Fixed Point Theorem

    J. R. Morales

    2012-01-01

    Full Text Available We generalize Jungck's fixed point theorem for commuting mappings by means of the concepts of altering distance functions and compatible pairs of mappings, as well as by using contractive inequalities of integral type and contractive inequalities depending on another function.

  20. Limit theorems for functionals of Gaussian vectors

    Hongshuai DAI; Guangjun SHEN; Lingtao KONG

    2017-01-01

    Operator self-similar processes, as an extension of self-similar processes, have been studied extensively. In this work, we study limit theorems for functionals of Gaussian vectors. Under some conditions, we determine that the limit of partial sums of functionals of a stationary Gaussian sequence of random vectors is an operator self-similar process.

  1. Bell's theorem and the nature of reality

    Bertlmann, R.A.

    1988-01-01

    We rediscuss the Einstein-Podolsky-Rosen paradox in Bohm's spin version and contrast it with Bohr's controversial point of view. Then we explain Bell's theorem, Bell inequalities and their consequences. We describe the experiment of Aspect, Dalibard and Roger in detail. Finally we draw attention to the nonlocal structure of the underlying theory. 61 refs., 8 tabs. (Author)

  2. A Density Turán Theorem

    Narins, L.; Tran, Tuan

    2017-01-01

    Roč. 85, č. 2 (2017), s. 496-524 ISSN 0364-9024 Institutional support: RVO:67985807 Keywords : Turán’s theorem * stability method * multipartite version Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.601, year: 2016

  3. H-theorems from macroscopic autonomous equations

    De Roeck, W.; Maes, C.; Netočný, Karel

    2006-01-01

    Roč. 123, č. 3 (2006), s. 571-583 ISSN 0022-4715 Institutional research plan: CEZ:AV0Z10100520 Keywords : H-theorem, entropy * irreversible equations Subject RIV: BE - Theoretical Physics Impact factor: 1.437, year: 2006

  4. Student Research Project: Goursat's Other Theorem

    Petrillo, Joseph

    2009-01-01

    In an elementary undergraduate abstract algebra or group theory course, a student is introduced to a variety of methods for constructing and deconstructing groups. What seems to be missing from contemporary texts and syllabi is a theorem, first proved by Edouard Jean-Baptiste Goursat (1858-1936) in 1889, which completely describes the subgroups of…

  5. On Viviani's Theorem and Its Extensions

    Abboud, Elias

    2010-01-01

    Viviani's theorem states that the sum of distances from any point inside an equilateral triangle to its sides is constant. Here, in an extension of this result, we show, using linear programming, that any convex polygon can be divided into parallel line segments on which the sum of the distances to the sides of the polygon is constant. Let us say…

  6. The Embedding Theorems of Whitney and Nash

    We begin by briefly motivating the idea of a manifold and then discuss the embedding theorems of Whitney and Nash that allow us to view these objects inside appropriately large Euclidean spaces.

  7. Nash-Williams’ cycle-decomposition theorem

    Thomassen, Carsten

    2016-01-01

    We give an elementary proof of the theorem of Nash-Williams that a graph has an edge-decomposition into cycles if and only if it does not contain an odd cut. We also prove that every bridgeless graph has a collection of cycles covering each edge at least once and at most 7 times. The two results...

  8. General Correlation Theorem for Trinion Fourier Transform

    Bahri, Mawardi

    2017-01-01

    The trinion Fourier transform is an extension of the Fourier transform in the trinion numbers setting. In this work we derive the correlation theorem for the trinion Fourier transform by using the relation between trinion convolution and correlation definitions in the trinion Fourier transform domains.

  9. ON A LAGUERRE’S THEOREM

    SEVER ANGEL POPESCU

    2015-03-01

    Full Text Available In this note we make some remarks on the classical theorem of Laguerre and extend it, together with some other old results of Walsh and of Gauss-Lucas, to the so-called trace series associated with transcendental elements of the completion of the algebraic closure of Q in C, with respect to the spectral norm:

  10. Lagrange’s Four-Square Theorem

    Watase Yasushige

    2015-02-01

    Full Text Available This article provides a formalized proof of the so-called “four-square theorem”, namely that any natural number can be expressed as a sum of four squares, which was proved by Lagrange in 1770. An informal proof of the theorem can be found in the number theory literature, e.g. in [14], [1] or [23].
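
    As a quick computational companion to the statement (a brute-force search in Python, unrelated to the formalization the article describes):

        from itertools import product
        from math import isqrt

        def four_squares(n: int):
            """Return (a, b, c, d) with a**2 + b**2 + c**2 + d**2 == n."""
            r = isqrt(n)
            for a, b, c in product(range(r + 1), repeat=3):
                rest = n - a*a - b*b - c*c
                if rest >= 0:
                    d = isqrt(rest)
                    if d*d == rest:
                        return (a, b, c, d)
            raise AssertionError("unreachable, by Lagrange's theorem")

        print(four_squares(310))  # some (a, b, c, d) whose squares sum to 310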

  11. Anomalous Levinson theorem and supersymmetric quantum mechanics

    Boya, L.J.; Casahorran, J.; Esteve, J.G.

    1993-01-01

    We analyse the symmetry breaking associated to anomalous realization of supersymmetry in the context of SUSY QM. In this case one of the SUSY partners is singular; that leads to peculiar forms of the Levinson theorem relating phase shifts and bound states. Some examples are exhibited; peculiarities include negative energies, incomplete pairing of states and extra phases in scattering. (Author) 8 refs

  12. Another look at the second incompleteness theorem

    Visser, A.

    2017-01-01

    In this paper we study proofs of some general forms of the Second Incompleteness Theorem. These forms conform to the Feferman format, where the proof predicate is fixed and the representation of the axiom set varies. We extend the Feferman framework in one important point: we allow the interpretation

  13. Another look at the second incompleteness theorem

    Visser, Albert

    2017-01-01

    In this paper we study proofs of some general forms of the Second Incompleteness Theorem. These forms conform to the Feferman format, where the proof predicate is fixed and the representation of the axiom set varies. We extend the Feferman framework in one important point: we allow the

  14. On the Leray-Hirsch Theorem for the Lichnerowicz cohomology

    Ait Haddoul, Hassan

    2004-03-01

    The purpose of this paper is to prove the Leray-Hirsch theorem for the Lichnerowicz cohomology with respect to basic and vertical closed 1-forms. This is a generalization of the Künneth theorem to fiber bundles. (author)

  15. A Note on a Broken-Cycle Theorem for Hypergraphs

    Trinks Martin

    2014-08-01

    Full Text Available Whitney's Broken-Cycle Theorem expresses the chromatic polynomial of a graph as a sum over special edge subsets. We give a definition of cycles in hypergraphs that preserves the statement of the theorem there.

  16. A STRONG OPTIMIZATION THEOREM IN LOCALLY CONVEX SPACES

    程立新; 腾岩梅

    2003-01-01

    This paper presents a geometric characterization of convex sets in locally convex spaces on which a strong optimization theorem of the Stegall type holds, and gives Collier's theorem of w*-Asplund spaces a localized setting.

  17. Factorization theorems in perturbative quantum field theory

    Date, G.D.

    1982-01-01

    This dissertation deals with factorization properties of Green functions and cross-sections in perturbation theory. It consists of two parts. Part I deals with the factorization theorem for the Drell-Yan cross-section. The new approach developed for this purpose is based upon a renormalization group equation with a generalized anomalous dimension. Using an alternate form of factorization for the Drell-Yan cross-section, derived in perturbation theory, a corresponding generalized anomalous dimension is defined, and explicit Feynman rules for its calculation are given. The resultant renormalization group equation is solved by a formal solution which is exhibited explicitly. Simple, explicit calculations are performed which verify Mueller's conjecture for the recovery of the usual parton-model results for the Drell-Yan cross-section. The approach developed in this work offers a general framework to analyze the role played by the group factors in the cancellation of the soft divergences and to study their influence on the asymptotic behavior. Part II deals with factorization properties of the Green functions in position space. In this part, a Landau equation analysis is carried out for the singularities of the position-space Green functions, in perturbation theory with the $\theta^4$ interaction Lagrangian. A physical-picture interpretation is given for the corresponding Landau equations. It is used to suggest a light-cone expansion. Using a power-counting method, a formal derivation of the light-cone expansion for the two-point function, the three-point function and a product of two currents is given without assuming a short-distance expansion. Possible extensions to other theories are also considered.

  18. DISCRETE FIXED POINT THEOREMS AND THEIR APPLICATION TO NASH EQUILIBRIUM

    Sato, Junichi; Kawasaki, Hidefumi

    2007-01-01

    Fixed point theorems are powerful tools not only in mathematics but also in economics. In some economic problems we need not real-valued but integer-valued equilibria. However, classical fixed point theorems guarantee only real-valued equilibria. So we need discrete fixed point theorems in order to get discrete equilibria. In this paper, we first provide discrete fixed point theorems, then apply them to a non-cooperative game and prove the existence of a Nash equilibrium of pure strategies.

  19. Theorems of Tarski's Undefinability and Gödel's Second Incompleteness - Computationally

    Salehi, Saeed

    2015-01-01

    We present a version of Gödel's Second Incompleteness Theorem for recursively enumerable consistent extensions of a fixed axiomatizable theory, by incorporating a bi-theoretic version of the derivability conditions (first discussed by M. Detlefsen in 2001). We also argue that Tarski's theorem on the Undefinability of Truth is Gödel's First Incompleteness Theorem relativized to definable oracles; here a unification of these two theorems is given.

  20. An Improved Algorithm Based on Minimum Spanning Tree for Multi-scale Segmentation of Remote Sensing Imagery

    LI Hui

    2015-07-01

    Full Text Available As the basis of object-oriented information extraction from remote sensing imagery, image segmentation using multiple image features, exploiting spatial context information, and following a multi-scale approach is currently a research focus. Using an optimization approach from graph theory, an improved multi-scale image segmentation method is proposed. In this method, the image is first processed with a coherence-enhancing anisotropic diffusion filter, followed by a minimum spanning tree segmentation approach, and the resulting segments are merged with reference to a minimum heterogeneity criterion. The heterogeneity criterion is defined as a function of the spectral characteristics and shape parameters of segments. The purpose of the merging step is to realize the multi-scale image segmentation. Tested on two images, the proposed method was visually and quantitatively compared with the segmentation method employed in the eCognition software. The results show that the proposed method is effective and outperforms the latter on areas with subtle spectral differences.
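
    A minimal sketch of the MST-plus-merging idea (illustrative simplifications, not the paper's algorithm: a 4-connected grid graph weighted by absolute intensity difference, and a single edge-weight threshold in place of the spectral-and-shape heterogeneity criterion):

        import numpy as np
        from scipy.sparse import coo_matrix
        from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

        def mst_segment(img, threshold):
            h, w = img.shape
            idx = np.arange(h * w).reshape(h, w)
            flat = img.ravel().astype(float)
            pairs = [(idx[:, :-1].ravel(), idx[:, 1:].ravel()),   # horizontal edges
                     (idx[:-1, :].ravel(), idx[1:, :].ravel())]   # vertical edges
            rows = np.concatenate([p[0] for p in pairs])
            cols = np.concatenate([p[1] for p in pairs])
            wts = np.abs(flat[rows] - flat[cols]) + 1e-9          # keep zero-diff edges
            g = coo_matrix((wts, (rows, cols)), shape=(h * w, h * w))
            mst = minimum_spanning_tree(g).tocoo()
            keep = mst.data <= threshold                          # cut "heterogeneous" edges
            forest = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                                shape=(h * w, h * w))
            _, labels = connected_components(forest, directed=False)
            return labels.reshape(h, w)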

  1. The Interpretability of Inconsistency: Feferman's Theorem and Related Results

    Visser, Albert

    This paper is an exposition of Feferman's Theorem concerning the interpretability of inconsistency and of further insights directly connected to this result. Feferman's Theorem is a strengthening of the Second Incompleteness Theorem. It says, in metaphorical paraphrase, that it is not just the case

  2. The Interpretability of Inconsistency: Feferman's Theorem and Related Results

    Visser, Albert

    2014-01-01

    This paper is an exposition of Feferman's Theorem concerning the interpretability of inconsistency and of further insights directly connected to this result. Feferman's Theorem is a strengthening of the Second Incompleteness Theorem. It says, in metaphorical paraphrase, that it is not just the case

  3. On Comparison Theorems for Conformable Fractional Differential Equations

    Mehmet Zeki Sarikaya

    2016-10-01

    Full Text Available In this paper more general comparison theorems for conformable fractional differential equations are proposed and tested. We prove some inequalities for conformable integrals by using generalizations of Sturm's separation and comparison theorems. The results presented here generalize those given in earlier works. A numerical example is also presented to verify the proposed theorem.

  4. COMPARISON THEOREMS AND APPLICATIONS OF OSCILLATION OF NEUTRAL DIFFERENTIAL EQUATIONS

    燕居让

    1991-01-01

    We first establish comparison theorems on the oscillation of a higher-order neutral delay differential equation. By these comparison theorems, the criterion for the oscillation properties of a neutral delay differential equation is reduced to that of a nonneutral delay differential equation, from which we give a series of oscillation theorems for neutral delay differential equations.

  5. A generalization of the virial theorem for strongly singular potentials

    Gesztesy, F.; Pittner, L.

    1978-09-01

    Using scale transformations the authors prove a generalization of the virial theorem for the eigenfunctions of non-relativistic Schroedinger Hamiltonians which are defined as the Friedrichs extension of strongly singular differential operators. The theorem also applies to situations where the ground state has divergent kinetic and potential energy and thus the usual version of the virial theorem becomes meaningless. (Auth.)

  6. No-go theorems for the minimization of potentials

    Chang, D.; Kumar, A.

    1985-01-01

    Using a theorem in linear algebra, we prove some no-go theorems in the minimization of potentials related to the problem of symmetry breaking. Some applications in grand unified model building are mentioned. Another application of the algebraic theorem is also included to demonstrate its usefulness.

  7. Goedel incompleteness theorems and the limits of their applicability. I

    Beklemishev, Lev D

    2011-01-01

    This is a survey of results related to the Goedel incompleteness theorems and the limits of their applicability. The first part of the paper discusses Goedel's own formulations along with modern strengthenings of the first incompleteness theorem. Various forms and proofs of this theorem are compared. Incompleteness results related to algorithmic problems and mathematically natural examples of unprovable statements are discussed. Bibliography: 68 titles.

  8. Impact of Rate Design Alternatives on Residential Solar Customer Bills. Increased Fixed Charges, Minimum Bills and Demand-based Rates

    Bird, Lori [National Renewable Energy Lab. (NREL), Golden, CO (United States); Davidson, Carolyn [National Renewable Energy Lab. (NREL), Golden, CO (United States); McLaren, Joyce [National Renewable Energy Lab. (NREL), Golden, CO (United States); Miller, John [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-09-01

    With rapid growth in energy efficiency and distributed generation, electric utilities are anticipating stagnant or decreasing electricity sales, particularly in the residential sector. Utilities are increasingly considering alternative rates structures that are designed to recover fixed costs from residential solar photovoltaic (PV) customers with low net electricity consumption. Proposed structures have included fixed charge increases, minimum bills, and increasingly, demand rates - for net metered customers and all customers. This study examines the electricity bill implications of various residential rate alternatives for multiple locations within the United States. For the locations analyzed, the results suggest that residential PV customers offset, on average, between 60% and 99% of their annual load. However, roughly 65% of a typical customer's electricity demand is non-coincidental with PV generation, so the typical PV customer is generally highly reliant on the grid for pooling services.

  9. Levinson theorem for Dirac particles in n dimensions

    Jiang Yu

    2005-01-01

    We study the Levinson theorem for a Dirac particle in an n-dimensional central field by use of the Green function approach, based on an analysis of the n-dimensional radial Dirac equation obtained through a simple algebraic derivation. We show that the zero-momentum phase shifts are related to the number of bound states with |E| < m plus the number of half-bound states of zero momentum, i.e., |E| = m, which are described by finite, but not square-integrable, wave functions.

  10. Determination of the minimum size of a statistical representative volume element from a fibre-reinforced composite based on point pattern statistics

    Hansen, Jens Zangenberg; Brøndsted, Povl

    2013-01-01

    In a previous study, Trias et al. [1] determined the minimum size of a statistical representative volume element (SRVE) of a unidirectional fibre-reinforced composite primarily based on numerical analyses of the stress/strain field. In continuation of this, the present study determines the minimum size of an SRVE based on a statistical analysis of the spatial statistics of the fibre packing patterns found in genuine laminates, and of those generated numerically using a microstructure generator.

  11. A Meinardus Theorem with Multiple Singularities

    Granovsky, Boris L.; Stark, Dudley

    2012-09-01

    Meinardus proved a general theorem about the asymptotics of the number of weighted partitions, when the Dirichlet generating function for weights has a single pole on the positive real axis. Continuing (Granovsky et al., Adv. Appl. Math. 41:307-328, 2008), we derive asymptotics for the numbers of three basic types of decomposable combinatorial structures (or, equivalently, ideal gas models in statistical mechanics) of size n, when their Dirichlet generating functions have multiple simple poles on the positive real axis. Examples to which our theorem applies include ones related to vector partitions and quantum field theory. Our asymptotic formula for the number of weighted partitions disproves the belief accepted in the physics literature that the main term in the asymptotics is determined by the rightmost pole.

  12. H-theorem in quantum physics.

    Lesovik, G B; Lebedev, A V; Sadovskyy, I A; Suslov, M V; Vinokur, V M

    2016-09-12

    Remarkable progress in quantum information theory (QIT) has made it possible to formulate mathematical theorems for the conditions under which data transmission or data processing occurs with a non-negative entropy gain. However, the relation of these results, formulated in terms of entropy gain in quantum channels, to the temporal evolution of real physical systems is not thoroughly understood. Here we build on the mathematical formalism provided by QIT to formulate the quantum H-theorem in terms of physical observables. We discuss the manifestation of the second law of thermodynamics in quantum physics and uncover special situations where the second law can be violated. We further demonstrate that the typical evolution of energy-isolated quantum systems occurs with non-diminishing entropy.

  13. Asymptotic twistor theory and the Kerr theorem

    Newman, Ezra T

    2006-01-01

    We first review asymptotic twistor theory with its real subspace of null asymptotic twistors: a five-dimensional CR manifold. This is followed by a description of the Kerr theorem (the identification of shear-free null congruences, in Minkowski space, with the zeros of holomorphic functions of three variables) and an asymptotic version of the Kerr theorem that produces regular asymptotically shear-free null geodesic congruences in arbitrary asymptotically flat Einstein or Einstein-Maxwell spacetimes. A surprising aspect of this work is the role played by analytic curves in H-space, each curve generating an asymptotically flat null geodesic congruence. Also there is a discussion of the physical space realizations of the two associated five- and three-dimensional CR manifolds

  14. The self-normalized Donsker theorem revisited

    Parczewski, Peter

    2016-01-01

    We extend the Poincaré–Borel lemma to a weak approximation of a Brownian motion via simple functionals of uniform distributions on n-spheres in the Skorokhod space $D([0,1])$. This approach is used to simplify the proof of the self-normalized Donsker theorem in Csörgő et al. (2003). Some notes on spheres with respect to $\ell_p$-norms are given.

  15. The untyped stack calculus and Bohm's theorem

    Alberto Carraro

    2013-03-01

    Full Text Available The stack calculus is a functional language which is in a Curry-Howard correspondence with classical logic. It enjoys confluence but, like Parigot's lambda-mu, does not admit the Böhm theorem typical of the lambda-calculus. We present a simple extension of the stack calculus which is for the stack calculus what Saurin's Lambda-mu is for lambda-mu.

  16. Gauge Invariance and the Goldstone Theorem

    Guralnik, Gerald S.

    This paper was originally created for and printed in the "Proceedings of seminar on unified theories of elementary particles" held in Feldafing, Germany from July 5 to 16, 1965 under the auspices of the Max-Planck-Institute for Physics and Astrophysics in Munich. It details and expands upon the 1964 Guralnik, Hagen, and Kibble paper demonstrating that the Goldstone theorem does not require physical zero mass particles in gauge theories.

  17. A remark on three-surface theorem

    Lu Zhujia

    1991-01-01

    The three-surface theorem for uniformly elliptic differential inequalities with a nonpositive coefficient of the zero-order term in a domain D contained in R^n becomes trivial if the maximum of u on two separate boundary surfaces of D is nonpositive. In this paper we give a method for obtaining a nontrivial estimate of the maximum of u on a family of closed surfaces. (author). 2 refs

  18. Asynchronous networks: modularization of dynamics theorem

    Bick, Christian; Field, Michael

    2017-02-01

    Building on the first part of this paper, we develop the theory of functional asynchronous networks. We show that a large class of functional asynchronous networks can be (uniquely) represented as feedforward networks connecting events or dynamical modules. For these networks we can give a complete description of the network function in terms of the function of the events comprising the network: the modularization of dynamics theorem. We give examples to illustrate the main results.

  19. Fractional and integer charges from Levinson's theorem

    Farhi, E.; Graham, N.; Jaffe, R.L.; Weigel, H.

    2001-01-01

    We compute fractional and integer fermion quantum numbers of static background field configurations using phase shifts and Levinson's theorem. By extending fermionic scattering theory to arbitrary dimensions, we implement dimensional regularization in a (1+1)-dimensional gauge theory. We demonstrate that this regularization procedure automatically eliminates the anomaly in the vector current that a naive regulator would produce. We also apply these techniques to bag models in one and three dimensions

  20. Theorems for asymptotic safety of gauge theories

    Bond, Andrew D.; Litim, Daniel F. [University of Sussex, Department of Physics and Astronomy, Brighton (United Kingdom)

    2017-06-15

    We classify the weakly interacting fixed points of general gauge theories coupled to matter and explain how the competition between gauge and matter fluctuations gives rise to a rich spectrum of high- and low-energy fixed points. The pivotal role played by Yukawa couplings is emphasised. Necessary and sufficient conditions for asymptotic safety of gauge theories are also derived, in conjunction with strict no-go theorems. Implications for phase diagrams of gauge theories and physics beyond the Standard Model are indicated. (orig.)

  1. Optical theorem, depolarization and vector tomography

    Toperverg, B.P.

    2003-01-01

    A law of the total flux conservation is formulated in the form of the optical theorem. It is employed to explicitly derive equations for the description of the neutron polarization within the range of the direct beam defined by its angular divergence. General considerations are illustrated by calculations using the Born and Eikonal approximations. Results are briefly discussed as applied to Larmor-Fourier tomography

  2. Central limit theorem and deformed exponentials

    Vignat, C; Plastino, A

    2007-01-01

    The central limit theorem (CLT) can be ranked among the most important ones in probability theory and statistics and plays an essential role in several basic and applied disciplines, notably in statistical thermodynamics. We show that there exists a natural extension of the CLT from exponentials to so-called deformed exponentials (also denoted as q-Gaussians). Our proposal applies exactly in the usual conditions in which the classical CLT is used. (fast track communication)

  3. Convergence theorems for quasi-contractive mappings

    Chidume, C.E.

    1992-01-01

    It is proved that each of two well-known fixed point iteration methods (the Mann and Ishikawa iteration methods) converges strongly, without any compactness assumption on the domain of the map, to the unique fixed point of a quasi-contractive map in real Banach spaces with property (U, α, m+1, m). These Banach spaces include the $L_p$ (or $l_p$) spaces, p ≥ 2. Our theorems generalize important known results. (author). 29 refs

  4. Optical theorem for heavy-ion scattering

    Schwarzschild, A.Z.; Auerbach, E.H.; Fuller, R.C.; Kahana, S.

    1976-01-01

    A heuristic derivation is given of an equivalent of the optical theorem stated in the charged situation, with the remainder or nuclear elastic scattering amplitude defined as the difference of the elastic and Coulomb amplitudes. To test the detailed behavior of this elastic scattering amplitude and the cross section, calculations were performed for elastic scattering of $^{18}$O + $^{58}$Ni, $^{136}$Xe + $^{209}$Bi, $^{84}$Kr + $^{208}$Pb, and $^{11}$B + $^{26}$Mg at 63.42 to 114 MeV.

  5. Applications of Wick's theorem, ch. 17

    Brussaard, P.J.; Glaudemans, P.W.M.

    1977-01-01

    Wick's theorem is introduced and used to write the many-body Hamiltonian in a self-consistent basis. The terms of a perturbation expansion are evaluated with the use of the second-quantization formalism. The correspondence with Feynman diagrams is demonstrated. For some nuclei a description in terms of particle-hole configurations is quite convenient. The simplest case, i.e. one-particle, one-hole states, is treated.

  6. Theorem Proving In Higher Order Logics

    Carreno, Victor A. (Editor); Munoz, Cesar A.; Tahar, Sofiene

    2002-01-01

    The TPHOLs International Conference serves as a venue for the presentation of work in theorem proving in higher-order logics and related areas in deduction, formal specification, software and hardware verification, and other applications. Fourteen papers were submitted to Track B (Work in Progress), which are included in this volume. Authors of Track B papers gave short introductory talks that were followed by an open poster session. The FCM 2002 Workshop aimed to bring together researchers working on the formalisation of continuous mathematics in theorem proving systems with those needing such libraries for their applications. Many of the major higher order theorem proving systems now have a formalisation of the real numbers and various levels of real analysis support. This work is of interest in a number of application areas, such as formal methods development for hardware and software application and computer supported mathematics. The FCM 2002 consisted of three papers, presented by their authors at the workshop venue, and one invited talk.

  7. The universality of the Carnot theorem

    Gonzalez-Ayala, Julian; Angulo-Brown, F

    2013-01-01

    It is common in many thermodynamics textbooks to illustrate the Carnot theorem through the use of diverse state equations for gases, paramagnets, and other simple thermodynamic systems. As is well known, the universality of the Carnot efficiency is easily demonstrated in a temperature–entropy diagram, which means that $\eta_C$ is independent of the working substance. In this paper we remark that the universality of the Carnot theorem goes beyond conventional state equations, and is fulfilled by gas state equations that do not correspond to an ideal gas in the dilution limit, namely V → ∞. Some of these unconventional state equations have certain thermodynamic ‘anomalies’ that nonetheless do not forbid them from obeying the Carnot theorem. We discuss how this very general behaviour arises from Maxwell relations, which are connected with a geometrical property expressed through preserving area transformations. A rule is proposed to calculate the Maxwell relations associated with a thermodynamic system by using the preserving area relationships. In this way it is possible to calculate the number of possible preserving area mappings by giving the number of possible Jacobian identities between all pairs of thermodynamic variables included in the corresponding Gibbs equation. This paper is intended for undergraduates and specialists in thermodynamics and related areas. (paper)

  8. Soft theorems from conformal field theory

    Lipstein, Arthur E.

    2015-01-01

    Strominger and collaborators recently proposed that soft theorems for gauge and gravity amplitudes can be interpreted as Ward identities of a 2d CFT at null infinity. In this paper, we will consider a specific realization of this CFT known as ambitwistor string theory, which describes 4d Yang-Mills and gravity with any amount of supersymmetry. Using 4d ambitwistor string theory, we derive soft theorems in the form of an infinite series in the soft momentum which are valid to subleading order in gauge theory and sub-subleading order in gravity. Furthermore, we describe how the algebra of soft limits can be encoded in the braiding of soft vertex operators on the worldsheet and point out a simple relation between soft gluon and soft graviton vertex operators which suggests an interesting connection to color-kinematics duality. Finally, by considering ambitwistor string theory on a genus one worldsheet, we compute the 1-loop correction to the subleading soft graviton theorem due to infrared divergences.

  9. Sequence-Based Prediction of RNA-Binding Proteins Using Random Forest with Minimum Redundancy Maximum Relevance Feature Selection

    Xin Ma

    2015-01-01

    Full Text Available The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features have important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and 0.737 Matthews correlation coefficient). The high prediction accuracy and successful prediction performance suggest that our method can be a useful approach to identify RNA-binding proteins from sequence information.
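
    A rough sketch of the pipeline's shape (illustrative only: mutual-information ranking stands in for mRMR, which additionally penalizes redundancy among already-selected features; data shapes and parameters are assumed):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.model_selection import cross_val_score

        def incremental_feature_selection(X, y, step=10):
            """Rank features by relevance, then grow the feature set in steps
            and keep the prefix with the best cross-validated accuracy."""
            order = np.argsort(mutual_info_classif(X, y))[::-1]
            best_k, best_score = step, -np.inf
            for k in range(step, X.shape[1] + 1, step):
                clf = RandomForestClassifier(n_estimators=200, random_state=0)
                score = cross_val_score(clf, X[:, order[:k]], y, cv=5).mean()
                if score > best_score:
                    best_k, best_score = k, score
            return order[:best_k], best_score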

  10. Observer-based attitude controller for lifting re-entry vehicle with non-minimum phase property

    Wenming Nie

    2017-05-01

    Full Text Available This article concentrates on the attitude control problem for the lifting re-entry vehicle with the non-minimum phase property. A novel attitude control method is proposed for this kind of lifting re-entry vehicle without assuming the internal dynamics to be measurable. First, an internal dynamics extended state observer is developed to deal with the unmeasurability of the internal dynamics. Then, an output-feedback control scheme is proposed by modifying the traditional output redefinition technique with the internal dynamics extended state observer. This control scheme only requires the system output to be measurable, and it can still stabilize the unstable internal dynamics and track attitude commands. Besides, because of the inherent ability of the extended state observer to reject uncertainties and disturbances, the control precision of the proposed controller is higher than that of a controller designed with the traditional output redefinition technique. Finally, the effectiveness and robustness of the proposed attitude controller are demonstrated by the simulation results.
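
    For orientation, the core of a linear extended state observer (ESO) for a generic second-order plant can be sketched as follows (Euler-discretized; the gains and the input gain b are illustrative placeholders, not the paper's design, which additionally reconstructs the unmeasurable internal dynamics):

        def eso_step(z, y, u, dt, b=1.0, beta=(30.0, 300.0, 1000.0)):
            """One Euler step of a linear ESO for x1' = x2, x2' = f + b*u, y = x1.
            z = (z1, z2, z3) estimates (x1, x2, f), where f lumps the unknown
            dynamics/disturbance into an extended state."""
            z1, z2, z3 = z
            e = z1 - y                              # output estimation error
            z1 = z1 + dt * (z2 - beta[0] * e)
            z2 = z2 + dt * (z3 + b * u - beta[1] * e)
            z3 = z3 + dt * (-beta[2] * e)           # estimated total disturbance
            return (z1, z2, z3)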

  11. A Resting-State Brain Functional Network Study in MDD Based on Minimum Spanning Tree Analysis and the Hierarchical Clustering

    Xiaowei Li

    2017-01-01

    Full Text Available A large number of studies have demonstrated that major depressive disorder (MDD) is characterized by alterations in brain functional connections, which are also identifiable during the brain's “resting state.” However, the usual approach of constructing functional connectivity networks is often biased by the choice of threshold; moreover, earlier work paid more attention to the number and length of links in brain networks, while the clustering partitioning of nodes remained unclear. Therefore, minimum spanning tree (MST) analysis and hierarchical clustering were applied to depression for the first time in this study. Resting-state electroencephalogram (EEG) sources were assessed from 15 healthy and 23 major depressive subjects. Then the coherence, MST, and hierarchical clustering were obtained. In the theta band, coherence analysis showed that the EEG coherence of the MDD patients was significantly higher than that of the healthy controls, especially in the left temporal region. The MST results indicated a higher leaf fraction in the depressed group. Compared with the normal group, the major depressive patients lost clustering in frontal regions. Our findings suggest that there was a stronger brain interaction in the MDD group and a left-right functional imbalance in the frontal regions of the MDD patients.
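
    The leaf fraction mentioned above is a standard MST metric; a minimal sketch of its computation (a hypothetical coherence matrix is assumed, and leaves-over-edges is one common normalization; requires a recent NetworkX):

        import networkx as nx
        import numpy as np

        def mst_leaf_fraction(coh: np.ndarray) -> float:
            """coh: symmetric channel-by-channel coherence matrix, values in [0, 1].
            Builds the maximum-coherence spanning tree and returns its leaf fraction."""
            g = nx.from_numpy_array(1.0 - coh)        # low weight = strong coherence
            tree = nx.minimum_spanning_tree(g)
            leaves = [n for n in tree if tree.degree(n) == 1]
            return len(leaves) / (tree.number_of_nodes() - 1)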

  12. Wide-area measurement system-based supervision of protection schemes with minimum number of phasor measurement units.

    Gajare, Swaroop; Rao, J Ganeswara; Naidu, O D; Pradhan, Ashok Kumar

    2017-08-13

    Cascade tripping of power lines triggered by maloperation of zone-3 relays during stressed system conditions, such as load encroachment, power swing and voltage instability, has led to many catastrophic power failures worldwide, including the Indian blackouts in 2012. With the introduction of wide-area measurement systems (WAMS) into the grids, real-time monitoring of transmission network condition is possible. A phasor measurement unit (PMU) sends time-synchronized data to a phasor data concentrator, which can provide a control signal to substation devices. The latency associated with the communication system makes WAMS suitable for a slower form of protection. In this work, a method to identify the faulted line using synchronized data from strategic PMU locations is proposed. Subsequently, a supervisory signal is generated for specific relays in the system for any disturbance or stressed condition. For a given system, an approach to decide the strategic locations for PMU placement is developed, which can be used for determining the minimum number of PMUs required for application of the method. The accuracy of the scheme is tested for faults during normal and stressed conditions in a New England 39-bus system simulated using EMTDC/PSCAD software. With such a strategy, maloperation of relays can be averted in many situations, and thereby blackouts/large-scale disturbances can be prevented.
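
    The placement question has the flavor of a set-cover problem: if a PMU observes its own bus and all adjacent buses, a greedy heuristic (a toy sketch with a made-up adjacency structure, not the authors' strategic-location approach) picks buses as follows:

        def greedy_pmu_placement(adjacency):
            """adjacency: dict mapping each bus to the set of its neighbor buses.
            Greedily place PMUs until every bus is observed at least once."""
            unobserved = set(adjacency)
            placed = set()
            while unobserved:
                # choose the bus whose PMU would newly observe the most buses
                best = max(adjacency,
                           key=lambda b: len(({b} | adjacency[b]) & unobserved))
                placed.add(best)
                unobserved -= {best} | adjacency[best]
            return placed

        # e.g. a 5-bus chain 1-2-3-4-5: PMUs at buses 2 and 4 suffice
        print(greedy_pmu_placement({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}))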

  13. Four theorems on the psychometric function.

    May, Keith A; Solomon, Joshua A

    2013-01-01

    In a 2-alternative forced-choice (2AFC) discrimination task, observers choose which of two stimuli has the higher value. The psychometric function for this task gives the probability of a correct response for a given stimulus difference, Δx. This paper proves four theorems about the psychometric function. Assuming the observer applies a transducer and adds noise, Theorem 1 derives a convenient general expression for the psychometric function. Discrimination data are often fitted with a Weibull function. Theorem 2 proves that the Weibull "slope" parameter, β, can be approximated by β(Noise) x β(Transducer), where β(Noise) is the β of the Weibull function that fits best to the cumulative noise distribution, and β(Transducer) depends on the transducer. We derive general expressions for β(Noise) and β(Transducer), from which we derive expressions for specific cases. One case that follows naturally from our general analysis is Pelli's finding that, when d' ∝ (Δx)^b, β ≈ β(Noise) x b. We also consider two limiting cases. Theorem 3 proves that, as sensitivity improves, 2AFC performance will usually approach that for a linear transducer, whatever the actual transducer; we show that this does not apply at signal levels where the transducer gradient is zero, which explains why it does not apply to contrast detection. Theorem 4 proves that, when the exponent of a power-function transducer approaches zero, 2AFC performance approaches that of a logarithmic transducer. We show that the power-function exponents of 0.4-0.5 fitted to suprathreshold contrast discrimination data are close enough to zero for the fitted psychometric function to be practically indistinguishable from that of a log transducer. Finally, Weibull β reflects the shape of the noise distribution, and we used our results to assess the recent claim that internal noise has higher kurtosis than a Gaussian. Our analysis of β for contrast discrimination suggests that, if internal noise is stimulus
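
    A minimal numerical sketch of Theorems 1 and 2 (ours, not the paper's code; the exponent and grid are illustrative): build the 2AFC psychometric function for a power transducer with Gaussian noise, fit a Weibull, and check that β/b approximates the Gaussian β(Noise).

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        # Theorem 1's model: transducer plus additive Gaussian noise gives
        # P(correct) = Phi(d'/sqrt(2)) in 2AFC, with d' = (dx)**b for a power transducer.
        b = 0.5                                   # transducer exponent (illustrative)
        dx = np.linspace(0.05, 4.0, 60)
        p_correct = norm.cdf(dx**b / np.sqrt(2))

        # Fit the 2AFC Weibull P = 1 - 0.5*exp(-(dx/alpha)**beta) to that curve.
        def weibull(x, alpha, beta):
            return 1 - 0.5 * np.exp(-((x / alpha) ** beta))

        (alpha, beta), _ = curve_fit(weibull, dx, p_correct, p0=[1.0, 1.0])

        # Theorem 2 predicts beta ~= beta(Noise) * b, so beta/b estimates the
        # Gaussian beta(Noise) (a value around 1.2-1.3; our estimate, not the record's).
        print(f"fitted beta = {beta:.3f}, beta/b = {beta / b:.3f}")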

  14. Four theorems on the psychometric function.

    Keith A May

    Full Text Available In a 2-alternative forced-choice (2AFC) discrimination task, observers choose which of two stimuli has the higher value. The psychometric function for this task gives the probability of a correct response for a given stimulus difference, Δx. This paper proves four theorems about the psychometric function. Assuming the observer applies a transducer and adds noise, Theorem 1 derives a convenient general expression for the psychometric function. Discrimination data are often fitted with a Weibull function. Theorem 2 proves that the Weibull "slope" parameter, β, can be approximated by β(Noise) x β(Transducer), where β(Noise) is the β of the Weibull function that fits best to the cumulative noise distribution, and β(Transducer) depends on the transducer. We derive general expressions for β(Noise) and β(Transducer), from which we derive expressions for specific cases. One case that follows naturally from our general analysis is Pelli's finding that, when d' ∝ (Δx)^b, β ≈ β(Noise) x b. We also consider two limiting cases. Theorem 3 proves that, as sensitivity improves, 2AFC performance will usually approach that for a linear transducer, whatever the actual transducer; we show that this does not apply at signal levels where the transducer gradient is zero, which explains why it does not apply to contrast detection. Theorem 4 proves that, when the exponent of a power-function transducer approaches zero, 2AFC performance approaches that of a logarithmic transducer. We show that the power-function exponents of 0.4-0.5 fitted to suprathreshold contrast discrimination data are close enough to zero for the fitted psychometric function to be practically indistinguishable from that of a log transducer. Finally, Weibull β reflects the shape of the noise distribution, and we used our results to assess the recent claim that internal noise has higher kurtosis than a Gaussian. Our analysis of β for contrast discrimination suggests that, if internal noise is

  15. [Case-Mix of hospital emergencies in the Andalusian Health Service based on the 2012 Minimum Data Set. Spain].

    Goicoechea Salazar, Juan Antonio; Nieto García, María Adoración; Laguna Téllez, Antonio; Larrocha Mata, Daniel; Canto Casasola, Vicente David; Murillo Cabezas, Francisco

    2013-01-01

    The implementation of digital health records in emergency departments (ED) in hospitals in the Andalusian Health Service and the development of an automatic encoder for this area have allowed us to establish a Minimum Data Set for Emergencies (MDS-ED). The aim of this article is to describe the case mix of hospital EDs using various dimensions contained in the MDS-ED. A total of 3,235,600 hospital emergency records from 2012 were classified in clinical categories from the ICD-9-CM codes generated by the automatic encoder. Operating rules to obtain response time and length of stay were defined. A descriptive analysis was carried out to obtain demographic and chronological indicators as well as hospitalization, return and death rates and response time and length of stay in the EDs. Women generated 54.26% of all occurrences and their average age (39.98 years) was higher than men's (37.61). Paediatric emergencies accounted for 21.49% of the total. The peak hours were from 10:00 to 13:00 and from 16:00 to 17:00. Patients who did not undergo observation (92.67%) remained in the ED an average of 153 minutes. Injuries and poisoning, respiratory diseases, musculoskeletal diseases and symptoms and signs generated over 50% of all visits. 79,191 cases of chest pain, 28,741 episodes of heart failure and 27,989 episodes of serious infections were identified among the most relevant disorders. The MDS-ED makes it possible to address systematically the analysis of hospital emergencies by identifying the activity developed, the case-mix attended, the response times, the time spent in the ED and the quality of the care.

  16. Central Limit Theorem: New SOCR Applet and Demonstration Activity

    Dinov, Ivo D.; Christou, Nicolas; Sanchez, Juana

    2011-01-01

    Modern approaches for information technology based blended education utilize a variety of novel instructional, computational and network resources. Such attempts employ technology to deliver integrated, dynamically linked, interactive content and multifaceted learning environments, which may facilitate student comprehension and information retention. In this manuscript, we describe one such innovative effort of using technological tools for improving student motivation and learning of the theory, practice and usability of the Central Limit Theorem (CLT) in probability and statistics courses. Our approach is based on harnessing the computational libraries developed by the Statistics Online Computational Resource (SOCR) to design a new interactive Java applet and a corresponding demonstration activity that illustrate the meaning and the power of the CLT. The CLT applet and activity have clear common goals: to provide graphical representation of the CLT, to improve student intuition, and to empirically validate and establish the limits of the CLT. The SOCR CLT activity consists of four experiments that demonstrate the assumptions, meaning and implications of the CLT and ties these to specific hands-on simulations. We include a number of examples illustrating the theory and applications of the CLT. Both the SOCR CLT applet and activity are freely available online to the community to test, validate and extend (Applet: http://www.socr.ucla.edu/htmls/SOCR_Experiments.html and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_GeneralCentralLimitTheorem). PMID:21833159
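
    In the same spirit as the applet, a few lines of Python (an illustrative sketch of ours, independent of SOCR) show the CLT empirically: sample means of a skewed population standardize toward normality as n grows.

        import numpy as np

        rng = np.random.default_rng(42)

        # Sample means of a skewed Exp(1) population (mean = sd = 1): after
        # standardizing by sqrt(n), their skewness decays toward the normal value 0.
        for n in (1, 5, 30, 200):
            means = rng.exponential(1.0, size=(20_000, n)).mean(axis=1)
            z = (means - 1.0) * np.sqrt(n)        # standardized sample means
            print(f"n={n:4d}  skewness = {np.mean(z**3):+.3f}")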

  17. Minimum Wages and Poverty

    Fields, Gary S.; Kanbur, Ravi

    2005-01-01

    Textbook analysis tells us that in a competitive labor market, the introduction of a minimum wage above the competitive equilibrium wage will cause unemployment. This paper makes two contributions to the basic theory of the minimum wage. First, we analyze the effects of a higher minimum wage in terms of poverty rather than in terms of unemployment. Second, we extend the standard textbook model to allow for income sharing between the employed and the unemployed. We find that there are situations...

  18. The effect of minimum impact education on visitor spatial behavior in parks and protected areas: An experimental investigation using GPS-based tracking.

    Kidd, Abigail M; Monz, Christopher; D'Antonio, Ashley; Manning, Robert E; Reigner, Nathan; Goonan, Kelly A; Jacobi, Charles

    2015-10-01

    The unmanaged impacts of recreation and tourism can often result in unacceptable changes in resource conditions and quality of the visitor experience. Minimum impact visitor education programs aim to reduce the impacts of recreation by altering visitor behaviors. Specifically, education seeks to reduce impacts resulting from lack of knowledge both about the consequences of one's actions and impact-minimizing best practices. In this study, three different on-site minimum impact education strategies ("treatments") and a control condition were applied on the trails and summit area of Sargent Mountain in Acadia National Park, Maine. Treatment conditions were designed to encourage visitors to stay on marked trails and minimize off-trail travel. Treatments included a message delivered via personal contact, and both an ecological-based message and an amenity-based message posted on signs located alongside the trail. A control condition of current trail markings and directional signs was also assessed. The efficacy of the messaging was evaluated through the use of Global Positioning System (GPS) tracking of visitor spatial behavior on/off trails. Spatial analysis of GPS tracks revealed statistically significant differences among treatments, with the personal contact treatment yielding significantly less dispersion of visitors on the mountain summit. Results also indicate that the signs deployed in the study were ineffective at limiting off-trail use beyond what can be accomplished with trail markers and directional signs. These findings suggest that personal contact by a uniformed ranger or volunteer may be the most effective means of message delivery for on-site minimum impact education. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Minimum Contradictions Physics and Propulsion via Superconducting Magnetic Field Trapping

    Nassikas, A. A.

    2010-01-01

    All theories are based on Axioms which obviously are arbitrary; e.g. SRT, GRT, QM Axioms. Instead of manipulating the experience through a new set of Arbitrary Axioms it would be useful to search, through a basic tool that we have at our disposal i.e. Logic Analysis, for a set of privileged axioms. Physics theories, beyond their particular axioms, can be restated through the basic communication system as consisting of the Classical Logic, the Sufficient Reason Principle and the Anterior-Posterior Axiom. By means of a theorem this system can be proven as contradictory. The persistence in logic is the way for a set of privileged axioms to be found. This can be achieved on the basis of the Claim for Minimum Contradictions. Further axioms beyond the ones of the basic communications imply further contradictions. Thus, minimum contradictions can be achieved when things are described through anterior-posterior terms; due to existing contradictions through stochastic space-time, which is matter itself, described through a Ψ wave function and distributed, in a Hypothetical Measuring Field (HMF), through the density probability function P(r, t). On this basis, a space-time QM is obtained and this QM is a unified theory satisfying the requirements of quantum gravity. There are both mass-gravitational space-time (g) regarded as real and charge-electromagnetic (em) space-time that could be regarded as imaginary. In a closed system energy conversion-conservation and momentum action take place through photons, which can be regarded either as (g) or (em) space-time formation whose rest mass is equal to zero. Universe Evolution is described through the interaction of the gravitational (g) with the electromagnetic (em) space-time-matter field and not through any other entities. This methodology implies that there is no need for dark matter. An experiment is proposed relative to the (g)+(em) interaction based on Superconducting Magnetic Field Trapping to validate this approach.

  20. Over-Sampling Codebook-Based Hybrid Minimum Sum-Mean-Square-Error Precoding for Millimeter-Wave 3D-MIMO

    Mao, Jiening; Gao, Zhen; Wu, Yongpeng; Alouini, Mohamed-Slim

    2018-01-01

    Hybrid precoding design is challenging for millimeter-wave (mmWave) massive MIMO. Most prior hybrid precoding schemes are designed to maximize the sum spectral efficiency (SSE), while seldom investigating the bit-error-rate (BER). Therefore, this letter designs an over-sampling codebook (OSC)-based hybrid minimum sum-mean-square-error (min-SMSE) precoding to optimize the BER. Specifically, given the effective baseband channel consisting of the real channel and analog precoding, we first design the digital precoder/combiner based on the min-SMSE criterion to optimize the BER. To further reduce the SMSE between the transmit and receive signals, we propose an OSC-based joint analog precoder/combiner (JAPC) design. Simulation results show that the proposed scheme achieves better performance than its conventional counterparts.
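
    To make the digital baseband stage concrete, here is a minimal MMSE (regularized zero-forcing) precoder sketch in numpy; it is a generic stand-in for the letter's min-SMSE digital stage, with the hybrid analog/OSC parts omitted and all dimensions and the SNR invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        K, M = 4, 16                      # users, effective antennas (illustrative)
        snr_lin = 10.0                    # transmit SNR (illustrative)
        H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

        # MMSE precoder on the effective baseband channel, normalized to unit power.
        F = H.conj().T @ np.linalg.inv(H @ H.conj().T + (K / snr_lin) * np.eye(K))
        F /= np.linalg.norm(F)

        # The effective channel H @ F should be nearly diagonal: low inter-user MSE.
        print(np.round(np.abs(H @ F), 2))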

  1. Over-Sampling Codebook-Based Hybrid Minimum Sum-Mean-Square-Error Precoding for Millimeter-Wave 3D-MIMO

    Mao, Jiening

    2018-05-23

    Hybrid precoding design is challenging for millimeter-wave (mmWave) massive MIMO. Most prior hybrid precoding schemes are designed to maximize the sum spectral efficiency (SSE), while seldom investigating the bit-error-rate (BER). Therefore, this letter designs an over-sampling codebook (OSC)-based hybrid minimum sum-mean-square-error (min-SMSE) precoding to optimize the BER. Specifically, given the effective baseband channel consisting of the real channel and analog precoding, we first design the digital precoder/combiner based on the min-SMSE criterion to optimize the BER. To further reduce the SMSE between the transmit and receive signals, we propose an OSC-based joint analog precoder/combiner (JAPC) design. Simulation results show that the proposed scheme achieves better performance than its conventional counterparts.

  2. Discover the pythagorean theorem using interactive multimedia learning

    Adhitama, I.; Sujadi, I.; Pramudya, I.

    2018-04-01

    In the learning process, students are required to play an active role. They do not just accept concepts directly from teachers, but also build their own knowledge so that the learning process becomes more meaningful. Based on observation, when learning the Pythagorean theorem, students had difficulty determining the hypotenuse. One solution to this problem is using interactive multimedia learning. This article aims to discuss interactive multimedia as a learning medium for students. This was a Research and Development (R&D) study using the ADDIE model of development. The result was multimedia that was suitable for students as a learning medium. Besides, in the Pythagorean theorem learning activity we also compared the Discovery Learning (DL) model with interactive multimedia against DL without interactive multimedia, and found that DL with interactive multimedia had a more positive effect than DL without it. It was also found that interactive multimedia can attract students and increase their interest in learning math. Therefore, the use of interactive multimedia in the DL process can improve student learning achievement.

  3. A theorem on the methodology of positive economics

    Eduardo Pol

    2015-12-01

    Full Text Available It has long been recognized that Milton Friedman’s 1953 essay on economic methodology (or F53, for short) displays open-ended unclarities. For example, the notion of “unrealistic assumption” plays a role of absolutely fundamental importance in his methodological framework, but the term itself was never unambiguously defined in any of Friedman’s contributions to the economics discipline. As a result, F53 is appealing and liberating because the choice of premises in economic theorizing is not subject to any constraints concerning the degree of realisticness (or unrealisticness) of the assumptions. The question “Does the methodology of positive economics prevent the overlapping between economics and science fiction?” thus comes very naturally. In this paper, we show the following theorem: Friedman’s methodology of positive economics does not exclude science fiction. This theorem is a positive statement, and consequently it does not involve value judgements. However, it throws a wrench into the formulation of economic policy based on surreal models.

  4. Further investigation on the precise formulation of the equivalence theorem

    He, H.; Kuang, Y.; Li, X.

    1994-01-01

    Based on a systematic analysis of the renormalization schemes in the general R ξ gauge, the precise formulation of the equivalence theorem for longitudinal weak boson scatterings is given both in the SU(2) L Higgs theory and in the realistic SU(2)xU(1) electroweak theory to all orders in the perturbation for an arbitrary Higgs boson mass m H . It is shown that there is generally a renormalization-scheme- and ξ-dependent modification factor C mod and a simple formula for C mod is obtained. Furthermore, a convenient particular renormalization scheme is proposed in which C mod is exactly unity. Results of C mod in other currently used schemes are also discussed especially on their ξ and m H dependence through explicit one-loop calculations. It is shown that in some currently used schemes the deviation of C mod from unity and the ξ dependence of C mod are significant even in the large-m H limit. Therefore care should be taken when applying the equivalence theorem
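
    Schematically, the statement being made precise is of the standard equivalence-theorem form (paraphrased here, not quoted from the paper):

        \[
          T\big[W_L^{a_1},\dots,W_L^{a_n};\,\Phi\big]
          \;=\;
          C_{\mathrm{mod}}^{\,n}\;
          T\big[-i\phi^{a_1},\dots,-i\phi^{a_n};\,\Phi\big]
          \;+\; O\!\left(M_W/E\right),
        \]

    where the φ^a are the would-be Goldstone bosons and the paper's contribution is the precise, scheme-dependent structure of the factor C_mod.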

  5. Uniqueness theorem for static phantom wormholes in Einstein–Maxwell-dilaton theory

    Boian Lazov

    2018-03-01

    Full Text Available We prove a uniqueness theorem for completely regular traversable electrically charged wormhole solutions in the Einstein–Maxwell-dilaton gravity with a phantom scalar field and a possible phantom electromagnetic field. In a certain region of the parameter space, determined by the asymptotic values of the scalar field and the lapse function, the regular wormholes are completely specified by their mass, scalar charge and electric charge. The argument is based on the positive energy theorem applied on an appropriate conformally transformed Riemannian space.

  6. From the second gradient operator and second class of integral theorems to Gaussian or spherical mapping invariants

    YIN Ya-jun; WU Ji-ye; HUANG Ke-zhi; FAN Qin-shan

    2008-01-01

    By combining the second gradient operator, the second class of integral theorems, the Gaussian-curvature-based integral theorems and the Gaussian (or spherical) mapping, a series of invariants or geometric conservation quantities under Gaussian (or spherical) mapping is revealed. From these mapping invariants, important transformations between the original curved surface and the spherical surface are derived. The potential applications of these invariants and transformations to geometry are discussed.

  7. The implicit function theorem history, theory, and applications

    Krantz, Steven G

    2003-01-01

    The implicit function theorem is part of the bedrock of mathematical analysis and geometry. Finding its genesis in eighteenth century studies of real analytic functions and mechanics, the implicit and inverse function theorems have now blossomed into powerful tools in the theories of partial differential equations, differential geometry, and geometric analysis. There are many different forms of the implicit function theorem, including (i) the classical formulation for Ck functions, (ii) formulations in other function spaces, (iii) formulations for non-smooth functions, (iv) formulations for functions with degenerate Jacobian. Particularly powerful implicit function theorems, such as the Nash-Moser theorem, have been developed for specific applications (e.g., the imbedding of Riemannian manifolds). All of these topics, and many more, are treated in the present volume. The history of the implicit function theorem is a lively and complex story, and intimately bound up with the development of fundamental ideas in a...
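
    For orientation, the classical C^1 statement at the heart of the book reads (standard textbook form, not a quotation): if F(a, b) = 0 and D_y F(a, b) is invertible, then near a there is a unique C^1 map g with g(a) = b such that

        \[
          F\big(x, g(x)\big) = 0,
          \qquad
          Dg(x) \;=\; -\big[D_y F\big(x, g(x)\big)\big]^{-1}\, D_x F\big(x, g(x)\big).
        \]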

  8. Some fixed point theorems in fuzzy reflexive Banach spaces

    Sadeqi, I.; Solaty kia, F.

    2009-01-01

    In this paper, we first show that there are some gaps in the fixed point theorems for fuzzy non-expansive mappings which are proved by Bag and Samanta, in [Bag T, Samanta SK. Fixed point theorems on fuzzy normed linear spaces. Inf Sci 2006;176:2910-31; Bag T, Samanta SK. Some fixed point theorems in fuzzy normed linear spaces. Inform Sci 2007;177(3):3271-89]. By introducing the notion of fuzzy and α- fuzzy reflexive Banach spaces, we obtain some results which help us to establish the correct version of fuzzy fixed point theorems. Second, by applying Theorem 3.3 of Sadeqi and Solati kia [Sadeqi I, Solati kia F. Fuzzy normed linear space and it's topological structure. Chaos, Solitons and Fractals, in press] which says that any fuzzy normed linear space is also a topological vector space, we show that all topological version of fixed point theorems do hold in fuzzy normed linear spaces.

  9. On the inverse of the Pomeranchuk theorem

    Nagy, E.

    1977-04-01

    The Pomeranchuk theorem is valid only for bounded total cross sections at infinite energies, and for arbitrarily rising cross sections one cannot prove the zero asymptotic limit of the difference of the particle and antiparticle total cross sections. In the paper the problem is considered from the inverse point of view. It is proved using dispersion relations that if the total cross sections rise with some power of logarithm and the difference of the particle and antiparticle total cross sections remain finite, then the real to imaginary ratios of both the particle and antiparticle forward scattering amplitudes are bounded. (Sz.N.Z.)

  10. Noncommutative gauge theories and Kontsevich's formality theorem

    Jurco, B.; Schupp, P.; Wess, J.

    2001-01-01

    The equivalence of star products that arise from the background field with and without fluctuations and Kontsevich's formality theorem allow an explicit construction of a map that relates ordinary gauge theory and noncommutative gauge theory (Seiberg-Witten map). Using noncommutative extra dimensions the construction is extended to noncommutative nonabelian gauge theory for arbitrary gauge groups; as a byproduct we obtain a 'Mini Seiberg-Witten map' that explicitly relates ordinary abelian and nonabelian gauge fields. All constructions are also valid for non-constant B-field, and even more generally for any Poisson tensor

  11. The Invariance and the General CCT Theorems

    Stancu, Alin

    2010-01-01

    The \\begin{it} Invariance Theorem \\end{it} of M. Gerstenhaber and S. D. Schack states that if $\\mathbb{A}$ is a diagram of algebras then the subdivision functor induces a natural isomorphism between the Yoneda cohomologies of the category $\\mathbb{A}$-$\\mathbf{mod}$ and its subdivided category $\\mathbb{A}'$-$\\mathbf{mod}$. In this paper we generalize this result and show that the subdivision functor is a full and faithful functor between two suitable derived categories of $\\mathbb{A}$-$\\mathb...

  12. No-cloning theorem on quantum logics

    Miyadera, Takayuki; Imai, Hideki

    2009-01-01

    This paper discusses the no-cloning theorem in a logicoalgebraic approach. In this approach, an orthoalgebra is considered as a general structure for propositions in a physical theory. We proved that an orthoalgebra admits cloning operation if and only if it is a Boolean algebra. That is, only classical theory admits the cloning of states. If unsharp propositions are to be included in the theory, then a notion of effect algebra is considered. We proved that an atomic Archimedean effect algebra admitting cloning operation is a Boolean algebra. This paper also presents a partial result, indicating a relation between the cloning on effect algebras and hidden variables.
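
    For contrast with the record's orthoalgebra formulation, the standard Hilbert-space linearity argument runs as follows: a single unitary U cloning two states forces

        \[
          U\,|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle,
          \qquad
          U\,|\phi\rangle|0\rangle = |\phi\rangle|\phi\rangle
          \;\Longrightarrow\;
          \langle\phi|\psi\rangle = \langle\phi|\psi\rangle^{2},
        \]

    so ⟨φ|ψ⟩ is 0 or 1: only mutually orthogonal (effectively classical) alternatives admit a common cloner, consistent with the Boolean-algebra characterization above.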

  13. Stone's representation theorem in fuzzy topology

    刘应明; 张德学

    2003-01-01

    In this paper, a complete solution to the problem of Stone's representation theorem in fuzzy topology is given for a class of completely distributive lattices. Precisely, it is proved that if L is a frame such that 0 ∈ L is a prime or 1 ∈ L is a coprime, then the category of distributive lattices is dually equivalent to the category of coherent L-locales and that if L is moreover completely distributive, then the category of distributive lattices is dually equivalent to the category of coherent stratified L-topological spaces.

  14. Soft theorems for shift-symmetric cosmologies

    Finelli, Bernardo; Goon, Garrett; Pajer, Enrico; Santoni, Luca

    2018-03-01

    We derive soft theorems for single-clock cosmologies that enjoy a shift symmetry. These so-called consistency conditions arise from a combination of a large diffeomorphism and the internal shift symmetry and fix the squeezed limit of all correlators with a soft scalar mode. As an application, we show that our results reproduce the squeezed bispectrum for ultra-slow-roll inflation, a particular shift-symmetric, nonattractor model which is known to violate Maldacena's consistency relation. Similar results have been previously obtained by Mooij and Palma using background-wave methods. Our results shed new light on the infrared structure of single-clock cosmological spacetimes.

  15. Central limit theorems under special relativity.

    McKeague, Ian W

    2015-04-01

    Several relativistic extensions of the Maxwell-Boltzmann distribution have been proposed, but they do not explain observed lognormal tail-behavior in the flux distribution of various astrophysical sources. Motivated by this question, extensions of classical central limit theorems are developed under the conditions of special relativity. The results are related to CLTs on locally compact Lie groups developed by Wehn, Stroock and Varadhan, but in this special case the asymptotic distribution has an explicit form that is readily seen to exhibit lognormal tail behavior.

  16. Fixed point theorems in CAT(0) spaces and R-trees

    Kirk WA

    2004-01-01

    Full Text Available We show that if U is a bounded open set in a complete CAT(0) space X, and if f is a nonexpansive mapping of the closure of U into X, then f always has a fixed point if there exists p ∈ U such that x ∉ [p, f(x)) for all x ∈ ∂U. It is also shown that if K is a geodesically bounded closed convex subset of a complete R-tree with nonempty interior, and if f is a continuous mapping of K into X for which x ∉ [p, f(x)) for some p in the interior of K and all x ∈ ∂K, then f has a fixed point. It is also noted that a geodesically bounded complete R-tree has the fixed point property for continuous mappings. These latter results are used to obtain variants of the classical fixed edge theorem in graph theory.

  17. Logic for computer science foundations of automatic theorem proving

    Gallier, Jean H

    2015-01-01

    This advanced text for undergraduate and graduate students introduces mathematical logic with an emphasis on proof theory and procedures for algorithmic construction of formal proofs. The self-contained treatment is also useful for computer scientists and mathematically inclined readers interested in the formalization of proofs and basics of automatic theorem proving. Topics include propositional logic and its resolution, first-order logic, Gentzen's cut elimination theorem and applications, and Gentzen's sharpened Hauptsatz and Herbrand's theorem. Additional subjects include resolution in fir
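
    Since resolution is a centrepiece of the book, here is a toy propositional refutation loop in Python (an independent sketch of the technique, not code from the text); clauses are sets of (variable, polarity) literals, and deriving the empty clause signals unsatisfiability.

        from itertools import product

        # Resolving {p, A...} with {~p, B...} yields the resolvent A | B.
        def resolve(c1, c2):
            resolvents = []
            for (name, pol) in c1:
                if (name, not pol) in c2:
                    resolvents.append((c1 - {(name, pol)}) | (c2 - {(name, not pol)}))
            return resolvents

        def unsatisfiable(clauses):
            """Saturate under resolution; the empty clause signals a refutation."""
            clauses = set(map(frozenset, clauses))
            while True:
                new = set()
                for c1, c2 in product(clauses, repeat=2):
                    for r in resolve(c1, c2):
                        if not r:
                            return True
                        new.add(frozenset(r))
                if new <= clauses:
                    return False
                clauses |= new

        P, NOT_P, Q, NOT_Q = ("p", True), ("p", False), ("q", True), ("q", False)
        print(unsatisfiable([{P, Q}, {NOT_P, Q}, {NOT_Q}]))  # True: refutation found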

  18. On Pythagoras Theorem for Products of Spectral Triples

    D'Andrea, Francesco; Martinetti, Pierre

    2013-01-01

    We discuss a version of Pythagoras theorem in noncommutative geometry. The usual Pythagoras theorem can be formulated in terms of Connes' distance, between pure states, in the product of commutative spectral triples. We investigate the generalization to both non-pure states and arbitrary spectral triples. We show that Pythagoras theorem is replaced by some Pythagoras inequalities, which we prove for the product of arbitrary (i.e. non-necessarily commutative) spectral triples, assuming only some un...

  19. The direct Flow parametric Proof of Gauss' Divergence Theorem revisited

    Markvorsen, Steen

    2006-01-01

    The standard proof of the divergence theorem in undergraduate calculus courses covers the theorem for static domains between two graph surfaces. We show that within first year undergraduate curriculum, the flow proof of the dynamic version of the divergence theorem - which is usually considered only much later in more advanced math courses - is comprehensible with only a little extension of the first year curriculum. Moreover, it is more intuitive than the static proof. We support this intuit...
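
    For reference, the theorem whose static and flow-based proofs are being compared is, in its standard form,

        \[
          \int_{V} \nabla \cdot \mathbf{F}\; dV
          \;=\;
          \oint_{\partial V} \mathbf{F} \cdot \mathbf{n}\; dS ,
        \]

    equating the volume integral of the divergence of a vector field to the outward flux through the boundary surface.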

  20. A Converse to the Cayley-Hamilton Theorem

    ...it follows that q_j = a p_i, where a is a unit. Thus the expansion into irreducibles is unique; hence K[x] is a UFD. A famous theorem of Gauss implies that K[x_1, x_2, ..., x_n] is also a UFD. Gauss's Theorem: R[x] is a UFD if and only if R is a UFD. For a proof of Gauss's theorem and a detailed proof of the fact that ...
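
    As a quick numerical aside (illustrating the forward Cayley-Hamilton theorem itself, not the converse the record discusses), one can check that a matrix satisfies its own characteristic polynomial, p(A) = 0:

        import numpy as np

        rng = np.random.default_rng(7)

        A = rng.integers(-3, 4, size=(4, 4)).astype(float)
        coeffs = np.poly(A)            # characteristic polynomial coefficients, leading 1

        p_of_A = np.zeros_like(A)
        for c in coeffs:               # Horner evaluation of p at the matrix A
            p_of_A = p_of_A @ A + c * np.eye(4)

        print(np.allclose(p_of_A, 0))  # True (up to floating-point round-off)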

  1. Goedel incompleteness theorems and the limits of their applicability. I

    Beklemishev, Lev D [Steklov Mathematical Institute, Russian Academy of Sciences, Moscow (Russian Federation)

    2011-01-25

    This is a survey of results related to the Goedel incompleteness theorems and the limits of their applicability. The first part of the paper discusses Goedel's own formulations along with modern strengthenings of the first incompleteness theorem. Various forms and proofs of this theorem are compared. Incompleteness results related to algorithmic problems and mathematically natural examples of unprovable statements are discussed. Bibliography: 68 titles.

  2. From Einstein's theorem to Bell's theorem: a history of quantum non-locality

    Wiseman, H. M.

    2006-04-01

    In this Einstein Year of Physics it seems appropriate to look at an important aspect of Einstein's work that is often down-played: his contribution to the debate on the interpretation of quantum mechanics. Contrary to physics ‘folklore’, Bohr had no defence against Einstein's 1935 attack (the EPR paper) on the claimed completeness of orthodox quantum mechanics. I suggest that Einstein's argument, as stated most clearly in 1946, could justly be called Einstein's reality locality completeness theorem, since it proves that one of these three must be false. Einstein's instinct was that completeness of orthodox quantum mechanics was the falsehood, but he failed in his quest to find a more complete theory that respected reality and locality. Einstein's theorem, and possibly Einstein's failure, inspired John Bell in 1964 to prove his reality locality theorem. This strengthened Einstein's theorem (but showed the futility of his quest) by demonstrating that either reality or locality is a falsehood. This revealed the full non-locality of the quantum world for the first time.

  3. Generalizations of the Nash Equilibrium Theorem in the KKM Theory

    Sehie Park

    2010-01-01

    Full Text Available The partial KKM principle for an abstract convex space is an abstract form of the classical KKM theorem. In this paper, we derive generalized forms of the Ky Fan minimax inequality, the von Neumann-Sion minimax theorem, the von Neumann-Fan intersection theorem, the Fan-type analytic alternative, and the Nash equilibrium theorem for abstract convex spaces satisfying the partial KKM principle. These results are compared with previously known cases for G-convex spaces. Consequently, our results unify and generalize most of the previously known particular cases of the same nature. Finally, we add some detailed historical remarks on related topics.

  4. The minimum knowledge base for predicting organ-at-risk dose-volume levels and plan-related complications in IMRT planning

    Zhang, Hao H; D'Souza, Warren D; Meyer, Robert R; Shi Leyuan

    2010-01-01

    IMRT treatment planning requires consideration of two competing objectives: achieving the required amount of radiation for the planning target volume and minimizing the amount of radiation delivered to all other tissues. It is important for planners to understand the tradeoff between competing factors so that the time-consuming human interaction loop (plan-evaluate-modify) can be eliminated. Treatment-plan-surface models have been proposed as a decision support tool to aid treatment planners and clinicians in choosing between rival treatment plans in a multi-plan environment. In this paper, an empirical approach is introduced to determine the minimum number of treatment plans (minimum knowledge base) required to build accurate representations of the IMRT plan surface in order to predict organ-at-risk (OAR) dose-volume (DV) levels and complications as a function of input DV constraint settings corresponding to all involved OARs in the plan. We have tested our approach on five head and neck patients and five whole pelvis/prostate patients. Our results suggest that approximately 30 plans were sufficient to predict DV levels with less than 3% relative error in both head and neck and whole pelvis/prostate cases. In addition, approximately 30-60 plans were sufficient to predict saliva flow rate with less than 2% relative error and to classify rectal bleeding with an accuracy of 90%.

  5. A minimum operating system based on the SM5300.01 magnetic tape recorder for the Micro-8 computer

    Kartashov, S.V.

    1987-01-01

    An operating system (OS) for microcomputers based on INTEL-8080, 8085 microprocessors oriented to use a magnetic tape recorder is described. This system comprises a tape-recorder manager and a file structure organization system (nucleus of OS), a symbol text editor, a macroassembler, an interactive disasembler and a program of communication with an EC-computer. The OS makes it possible to develop, debug, store and exploit the program written in INTEL-8085 assembly language

  6. Vehicular Networking Enhancement And Multi-Channel Routing Optimization, Based on Multi-Objective Metric and Minimum Spanning Tree

    Peppino Fazio

    2013-01-01

    Full Text Available Vehicular Ad hoc NETworks (VANETs) represent a particular mobile technology that permits communication among vehicles, offering security and comfort. Nowadays, distributed mobile wireless computing is becoming a very important communications paradigm, due to its flexibility to adapt to different mobile applications. VANETs are a practical example of data exchange among real mobile nodes. To enable communications within an ad-hoc network, characterized by continuous node movements, routing protocols are needed to react to frequent changes in network topology. In this paper, the attention is focused mainly on the network layer of VANETs, proposing a novel approach to reduce the interference level during mobile transmission, based on the multi-channel nature of the IEEE 802.11p (1609.4) standard. In this work a new routing protocol based on the Distance Vector algorithm is presented to reduce the end-to-end delay and to increase the packet delivery ratio (PDR) and throughput in VANETs. A new metric is also proposed, based on the maximization of the average Signal-to-Interference Ratio (SIR) level and the link duration probability between two VANET nodes. In order to relieve the effects of the co-channel interference perceived by mobile nodes, transmission channels are switched on the basis of a periodic SIR evaluation. A Network Simulator has been used for implementing and testing the proposed idea.

  7. Randomized central limit theorems: A unified theory.

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic: all ensemble components are scaled by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic: the ensemble components are scaled by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs), in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes, and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.

  8. Birth of a theorem a mathematical adventure

    Villani, Cédric

    2015-01-01

    This man could plainly do for mathematics what Brian Cox has done for physics" (Sunday Times). What goes on inside the mind of a rock-star mathematician? Where does inspiration come from? With a storyteller's gift, Cedric Villani takes us on a mesmerising journey as he wrestles with a new theorem that will win him the most coveted prize in mathematics. Along the way he encounters obstacles and setbacks, losses of faith and even brushes with madness. His story is one of courage and partnership, doubt and anxiety, elation and despair. We discover how it feels to be obsessed by a theorem during your child's cello practise and throughout your dreams, why appreciating maths is a bit like watching an episode of Columbo, and how sometimes inspiration only comes from locking yourself away in a dark room to think. Blending science with history, biography with myth, Villani conjures up an inimitable cast of characters including the omnipresent Einstein, mad genius Kurt Godel, and Villani's personal hero, John Nash. Bir...

  9. Relativistic particle dynamics: Lagrangian proof of the no-interaction theorem

    Marmo, G.; Mukunda, N.; Sudarshan, E.C.G.

    1983-11-01

    An economical proof is given, in the Lagrangian framework, of the No Interaction Theorem of relativistic particle mechanics. It is based on the assumption that there is a Lagrangian, which if singular is allowed to lead at most to primary first class constraints. The proof works with Lagrange rather than Poisson brackets, leading to considerable simplifications compared to other proofs

  10. Convolution Theorem of Fractional Fourier Transformation Derived by Representation Transformation in Quantum Mechancis

    Fan Hongyi; Hao Ren; Lu Hailiang

    2008-01-01

    Based on our previous paper (Commun. Theor. Phys. 39 (2003) 417) we derive the convolution theorem of fractional Fourier transformation in the context of quantum mechanics, which seems a convenient and neat way. Generalization of this method to the complex fractional Fourier transformation case is also possible

  11. An application of Darbo's fixed point theorem in the relative ...

    Sufficient conditions for the relative controllability of a class of nonlinear systems with distributed delays in the control are established. Our results are based on the measure of non-compactness of a set and Darbo's fixed point theorem. Global Journal of Mathematical Sciences Vol. 6 (1) 2007: pp. 21-26 ...

  12. De Finetti representation theorem for infinite-dimensional quantum systems and applications to quantum cryptography.

    Renner, R; Cirac, J I

    2009-03-20

    We show that the quantum de Finetti theorem holds for states on infinite-dimensional systems, provided they satisfy certain experimentally verifiable conditions. This result can be applied to prove the security of quantum key distribution based on weak coherent states or other continuous variable states against general attacks.

  13. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
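
    The spinning-wheel illusion the record refers to reduces to a one-line frequency-folding computation. The sketch below (ours, with a 24 frames-per-second rate assumed for illustration) gives the apparent rotation rate of an undersampled wheel:

        import numpy as np

        # A spoke rotating at f_true rev/s filmed at f_s frames/s appears at the
        # alias frequency folded into [-f_s/2, f_s/2], per the Nyquist sampling theorem.
        def apparent_frequency(f_true, f_s):
            alias = f_true % f_s
            return alias - f_s if alias > f_s / 2 else alias

        f_s = 24.0  # typical film frame rate (assumed)
        for f_true in (4.0, 20.0, 24.0, 27.0):
            f_app = apparent_frequency(f_true, f_s)
            print(f"wheel at {f_true:5.1f} rev/s looks like {f_app:+5.1f} rev/s")

    A wheel at 20 rev/s appears to spin backwards at 4 rev/s, and one at exactly 24 rev/s appears stationary.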

  14. Enhanced IMC based PID controller design for non-minimum phase (NMP) integrating processes with time delays.

    Ghousiya Begum, K; Seshagiri Rao, A; Radhakrishnan, T K

    2017-05-01

    Internal model control (IMC) with an optimal H2 minimization framework is proposed in this paper for design of proportional-integral-derivative (PID) controllers. The controller design is addressed for integrating and double integrating time delay processes with right half plane (RHP) zeros. Blaschke product is used to derive the optimal controller. There is a single adjustable closed loop tuning parameter for controller design. Systematic guidelines are provided for selection of this tuning parameter based on maximum sensitivity. Simulation studies have been carried out on various integrating time delay processes to show the advantages of the proposed method. The proposed controller provides enhanced closed loop performances when compared to recently reported methods in the literature. Quantitative comparative analysis has been carried out using the performance indices, Integral Absolute Error (IAE) and Total Variation (TV). Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
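
    To picture the plant class involved, here is a toy forward-Euler simulation of a PID loop on an integrating process with dead time, dy/dt = K u(t - theta); the gains are illustrative placeholders of ours, not the paper's IMC-H2 tuning.

        import numpy as np

        K, theta, dt, T = 1.0, 1.0, 0.01, 40.0  # gain, dead time, step, horizon (assumed)
        kp, ki, kd = 0.8, 0.05, 0.6             # illustrative gains, not the paper's

        n, d = int(T / dt), int(theta / dt)
        y, integ, y_prev = 0.0, 0.0, 0.0
        u_hist = [0.0] * d                      # delay line implementing u(t - theta)
        for k in range(n):
            err = 1.0 - y                       # unit setpoint
            integ += err * dt
            u = kp * err + ki * integ - kd * (y - y_prev) / dt  # derivative on measurement
            y_prev = y
            u_hist.append(u)
            y += dt * K * u_hist[k]             # integrating plant fed the delayed input
        print(f"output at t = {T:.0f} s: {y:.3f} (setpoint 1.0)")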

  15. Minimum entropy production principle

    Maes, C.; Netočný, Karel

    2013-01-01

    Roč. 8, č. 7 (2013), s. 9664-9677 ISSN 1941-6016 Institutional support: RVO:68378271 Keywords : MINEP Subject RIV: BE - Theoretical Physics http://www.scholarpedia.org/article/Minimum_entropy_production_principle

  16. Maximize Minimum Utility Function of Fractional Cloud Computing System Based on Search Algorithm Utilizing the Mittag-Leffler Sum

    Rabha W. Ibrahim

    2018-01-01

    Full Text Available The maximum min utility function (MMUF) problem is an important representative of a large class of cloud computing systems (CCS), with numerous applications in practice, especially in economics and industry. This paper introduces an effective solution-based search (SBS) algorithm for solving the MMUF problem. First, we suggest a new formula for the utility function in terms of the capacity of the cloud. We formulate the capacity in the CCS by using a fractional diffeo-integral equation; this equation usually describes the flow of the CCS. The new formula of the utility function modifies recently used utility functions. The suggested technique first creates a high-quality initial solution by eliminating the less promising components, and then improves the quality of the achieved solution by the summation search solution (SSS). This method uses the Mittag-Leffler sum as a hash function to determine the position of the agent. Experimental results on instances commonly utilized in the literature demonstrate that the proposed algorithm competes favorably with the state-of-the-art algorithms, both in terms of solution quality and computational efficiency.
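
    The Mittag-Leffler sum invoked above can be evaluated by straightforward partial sums; the sketch below (ours, with an arbitrary truncation depth) computes the one-parameter function E_alpha(z) = sum over k of z^k / Gamma(alpha k + 1).

        import math

        def mittag_leffler(z, alpha, terms=80):
            # Partial sum of E_alpha(z); 80 terms is an arbitrary illustrative cutoff.
            return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

        print(mittag_leffler(1.0, 1.0))   # alpha = 1 recovers e ~ 2.71828
        print(mittag_leffler(-2.0, 0.8))  # a fractional-order relaxation value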

  17. Global minimum profile error (GMPE) - a least-squares-based approach for extracting macroscopic rate coefficients for complex gas-phase chemical reactions.

    Duong, Minh V; Nguyen, Hieu T; Mai, Tam V-T; Huynh, Lam K

    2018-01-03

    Master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) has shown to be a powerful framework for modeling kinetic and dynamic behaviors of a complex gas-phase chemical system on a complicated multiple-species and multiple-channel potential energy surface (PES) for a wide range of temperatures and pressures. Derived from the ME time-resolved species profiles, the macroscopic or phenomenological rate coefficients are essential for many reaction engineering applications including those in combustion and atmospheric chemistry. Therefore, in this study, a least-squares-based approach named Global Minimum Profile Error (GMPE) was proposed and implemented in the MultiSpecies-MultiChannel (MSMC) code (Int. J. Chem. Kinet., 2015, 47, 564) to extract macroscopic rate coefficients for such a complicated system. The capability and limitations of the new approach were discussed in several well-defined test cases.
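
    To make the least-squares idea concrete, a toy fit (synthetic first-order decay of our own making, not MSMC output) recovers a phenomenological rate coefficient from a noisy species time profile; the real GMPE fits multi-species, multi-channel profiles simultaneously.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(3)

        k_true = 2.5                           # 1/s, made-up value
        t = np.linspace(0.0, 2.0, 50)
        profile = np.exp(-k_true * t) + rng.normal(0.0, 0.01, t.size)  # "ME output"

        model = lambda t, k: np.exp(-k * t)    # first-order decay A -> B
        (k_fit,), _ = curve_fit(model, t, profile, p0=[1.0])
        print(f"fitted k = {k_fit:.3f} 1/s (true {k_true})")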

  18. Chaos control of the brushless direct current motor using adaptive dynamic surface control based on neural network with the minimum weights

    Luo, Shaohua; Wu, Songli; Gao, Ruizhen

    2015-01-01

    This paper investigates chaos control for the brushless DC motor (BLDCM) system by adaptive dynamic surface approach based on neural network with the minimum weights. The BLDCM system contains parameter perturbation, chaotic behavior, and uncertainty. With the help of radial basis function (RBF) neural network to approximate the unknown nonlinear functions, the adaptive law is established to overcome uncertainty of the control gain. By introducing the RBF neural network and adaptive technology into the dynamic surface control design, a robust chaos control scheme is developed. It is proved that the proposed control approach can guarantee that all signals in the closed-loop system are globally uniformly bounded, and the tracking error converges to a small neighborhood of the origin. Simulation results are provided to show that the proposed approach works well in suppressing chaos and parameter perturbation
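
    A core ingredient of the scheme above is a radial basis function (RBF) network approximating an unknown nonlinearity f(x) as W^T h(x). The sketch below fits the output weights offline by least squares on a toy function; it illustrates the approximator only, whereas the paper adapts a lumped weight online ("minimum weights") inside the dynamic surface controller. Centers, widths, and the target function are invented.

        import numpy as np

        rng = np.random.default_rng(5)

        f = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in unknown nonlinearity
        centers = np.linspace(-2, 2, 11)               # RBF centers (illustrative)
        width = 0.5
        h = lambda x: np.exp(-((x[:, None] - centers) ** 2) / (2 * width**2))

        x_train = rng.uniform(-2, 2, 200)
        W, *_ = np.linalg.lstsq(h(x_train), f(x_train), rcond=None)

        x_test = np.linspace(-2, 2, 5)
        print(np.round(h(x_test) @ W - f(x_test), 3))  # small approximation errors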

  19. Chaos control of the brushless direct current motor using adaptive dynamic surface control based on neural network with the minimum weights.

    Luo, Shaohua; Wu, Songli; Gao, Ruizhen

    2015-07-01

    This paper investigates chaos control for the brushless DC motor (BLDCM) system by adaptive dynamic surface approach based on neural network with the minimum weights. The BLDCM system contains parameter perturbation, chaotic behavior, and uncertainty. With the help of radial basis function (RBF) neural network to approximate the unknown nonlinear functions, the adaptive law is established to overcome uncertainty of the control gain. By introducing the RBF neural network and adaptive technology into the dynamic surface control design, a robust chaos control scheme is developed. It is proved that the proposed control approach can guarantee that all signals in the closed-loop system are globally uniformly bounded, and the tracking error converges to a small neighborhood of the origin. Simulation results are provided to show that the proposed approach works well in suppressing chaos and parameter perturbation.

  20. A version of Stone-Weierstrass theorem in Fuzzy Analysis

    Font, J.J.; Sanchis, D.; Sanchis, M.

    2017-07-01

    Fuzzy numbers provide formalized tools to deal with non-precise quantities. They are indeed fuzzy sets in the real line and were introduced in 1978 by Dubois and Prade, who also defined their basic operations. Since then, Fuzzy Analysis has developed based on the notion of fuzzy number, just as classical Real Analysis did based on the concept of real number. Such development was eased by a characterization of fuzzy numbers provided in 1986 by Goetschel and Voxman based on their level sets. As in the classical setting, continuous fuzzy-valued functions (fuzzy functions) are the central core of the theory. The principal difference with respect to real-valued continuous functions is the fact that the fuzzy numbers do not form a vector space, which shapes all the results and, especially, the proofs. The study of fuzzy functions has developed principally along two lines of investigation: differential fuzzy equations, which have turned out to be the natural way of modelling physical and engineering problems in contexts where the parameters are vague or incomplete; and the problem of approximation of fuzzy functions, basically using the approximation capability of fuzzy neural networks. We will focus on this second line of investigation, though our approach will be more general and based on an adaptation of the famous Stone-Weierstrass Theorem to the fuzzy context. Accordingly, we introduce the concept of “multiplier” of a set of fuzzy functions and use it to give a constructive proof of a Stone-Weierstrass type theorem for fuzzy functions. (Author)

  1. A perceptron network theorem prover for the propositional calculus

    Drossaers, M.F.J.

    In this paper a short introduction to neural networks and a design for a perceptron network theorem prover for the propositional calculus are presented. The theorem prover is a representation of a variant of the semantic tableau method, called the parallel tableau method, by a network of

  2. Leaning on Socrates to Derive the Pythagorean Theorem

    Percy, Andrew; Carr, Alistair

    2010-01-01

    The one theorem just about every student remembers from school is the theorem about the side lengths of a right angled triangle which Euclid attributed to Pythagoras when writing Proposition 47 of "The Elements". Usually first met in middle school, the student will be continually exposed throughout their mathematical education to the…

  3. A new proof of the positive energy theorem

    Witten, E.

    1981-01-01

    A new proof is given of the positive energy theorem of classical general relativity. Also, a new proof is given that there are no asymptotically Euclidean gravitational instantons. (These theorems have been proved previously, by a different method, by Schoen and Yau). The relevance of these results to the stability of Minkowski space is discussed. (orig.)

  4. COMPARISON THEOREM OF BACKWARD DOUBLY STOCHASTIC DIFFERENTIAL EQUATIONS

    2010-01-01

    This paper is devoted to deriving a comparison theorem for solutions to backward doubly stochastic differential equations driven by Brownian motion and the backward Itô-Kunita integral. By applying this theorem, we give an existence result for solutions to these equations with continuous coefficients.

  5. The Boundary Crossing Theorem and the Maximal Stability Interval

    Jorge-Antonio López-Renteria

    2011-01-01

    useful tools in the study of the stability of family of polynomials. Although both of these theorem seem intuitively obvious, they can be used for proving important results. In this paper, we give generalizations of these two theorems and we apply such generalizations for finding the maximal stability interval.

  6. K S Krishnan's 1948 Perception of the Sampling Theorem

    K S Krishnan's 1948 Perception of the Sampling Theorem. Rajiah Simon is a Professor at the Institute of Mathematical Sciences, Chennai. His primary interests are in classical and quantum optics, geometric phases, group theoretical techniques and quantum information science. Keywords: Sampling theorem, K S ...

  7. On Frobenius, Mazur, and Gelfand-Mazur theorems on division ...

    ... R of real numbers, the field C of complex numbers, or the non-commutative algebra Q of quaternions. Gelfand [15] proved that every normed division algebra over the field C is isomorphic to C. He named this theorem, which is fundamental for the development of the theory of Banach Algebras, the Gelfand-Mazur theorem.

  8. An extension of Brosowski-Meinardus theorem on invariant approximation

    Liaqat Ali Khan; Abdul Rahim Khan.

    1991-07-01

    We obtain a generalization of a fixed point theorem of Dotson for non-expansive mappings on star-shaped sets and then use it to prove a unified Brosowski-Meinardus theorem on invariant approximation in the setting of p-normed linear spaces. (author). 13 refs

  9. A power counting theorem for Feynman integrals on the lattice

    Reisz, T.

    1988-01-01

    A convergence theorem is proved, which states sufficient conditions for the existence of the continuum limit for a wide class of Feynman integrals on a space-time lattice. A new kind of a UV-divergence degree is introduced, which allows the formulation of the theorem in terms of power counting conditions. (orig.)

  10. A Hohenberg-Kohn theorem for non-local potentials

    Meron, E.; Katriel, J.

    1977-01-01

    It is shown that within any class of commuting one-body potentials a Hohenberg-Kohn type theorem is satisfied with respect to an appropriately defined density. The Hohenberg-Kohn theorem for local potentials follows as a special case. (Auth.)

  11. A note on the homomorphism theorem for hemirings

    D. M. Olson

    1978-01-01

    Full Text Available The fundamental homomorphism theorem for rings is not generally applicable in hemiring theory. In this paper, we show that the fundamental theorem is valid for the class of N-homomorphisms of hemirings. In addition, the concept of N-homomorphism is used to prove that every hereditarily semisubtractive hemiring is of type (K).

  12. On the Riesz representation theorem and integral operators ...

    We present a Riesz representation theorem in the setting of extended integration theory as introduced in [6]. The result is used to obtain boundedness theorems for integral operators in the more general setting of spaces of vector valued extended integrable functions. Keywords: Vector integral, integral operators, operator ...

  13. Formal Analysis of Soft Errors using Theorem Proving

    Sofiène Tahar

    2013-07-01

    Full Text Available Modeling and analysis of soft errors in electronic circuits has traditionally been done using computer simulations. Computer simulations cannot guarantee correctness of the analysis because they utilize approximate real-number representations and pseudo-random numbers in the analysis, and thus are not well suited for analyzing safety-critical applications. In this paper, we present a higher-order-logic theorem-proving-based method for modeling and analysis of soft errors in electronic circuits. Our developed infrastructure includes formalized continuous random variable pairs, their Cumulative Distribution Function (CDF) properties and independent standard uniform and Gaussian random variables. We illustrate the usefulness of our approach by modeling and analyzing soft errors in commonly used dynamic random access memory sense amplifier circuits.

  14. Testing ground for fluctuation theorems: The one-dimensional Ising model

    Lemos, C. G. O.; Santos, M.; Ferreira, A. L.; Figueiredo, W.

    2018-04-01

    In this paper we determine the nonequilibrium magnetic work performed on an Ising model and relate it to the fluctuation theorem derived some years ago by Jarzynski. The basic idea behind this theorem is the relationship connecting the free energy difference between two thermodynamic states of a system and the average work performed by an external agent, in a finite time, through nonequilibrium paths between the same thermodynamic states. We test the validity of this theorem by considering the one-dimensional Ising model, where the free energy is exactly determined as a function of temperature and magnetic field. We have found that the Jarzynski theorem remains valid for all the values of the rate of variation of the magnetic field applied to the system. We have also determined the probability distribution function for the work performed on the system for the forward and reverse processes and verified that predictions based on the Crooks relation are equally correct. We also propose a method to calculate the lag between the current state of the system and that of the equilibrium based on macroscopic variables. We have shown that the lag increases with the sweeping rate of the field at its final value for the reverse process, while it decreases in the case of the forward process. The lag increases linearly with the size of the chain and with a slope decreasing with the inverse of the rate of variation of the field.
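
    A single-spin toy version of this test (our sketch, not the paper's simulation; protocol parameters are invented) ramps the field h from 0 to h_f, accumulates the work W = -sigma dh at each increment, relaxes with heat-bath (Glauber) dynamics, and compares <exp(-beta W)> with exp(-beta dF), where F(h) = -(1/beta) ln(2 cosh(beta h)) is exact for one spin.

        import numpy as np

        rng = np.random.default_rng(11)

        beta, h_f, steps, trials = 1.0, 2.0, 200, 5_000
        dh = h_f / steps

        works = np.empty(trials)
        for i in range(trials):
            sigma, h, W = rng.choice([-1, 1]), 0.0, 0.0   # equilibrium start at h = 0
            for _ in range(steps):
                W += -sigma * dh                          # work of the field increment
                h += dh
                p_up = 1.0 / (1.0 + np.exp(-2 * beta * h))  # heat-bath flip probability
                sigma = 1 if rng.random() < p_up else -1
            works[i] = W

        dF = -(1 / beta) * (np.log(2 * np.cosh(beta * h_f)) - np.log(2.0))
        print(f"<exp(-beta W)> = {np.exp(-beta * works).mean():.3f}")
        print(f"exp(-beta dF)  = {np.exp(-beta * dF):.3f}")   # the two should agree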

  15. Bell's "Theorem": loopholes vs. conceptual flaws

    Kracklauer, A. F.

    2017-12-01

    An historical overview and detailed explication are given of a critical analysis of what has become known as Bell's Theorem, to the effect that it should be impossible to extend Quantum Theory with the addition of local, real variables so as to obtain a version free of the ambiguous and preternatural features of the currently accepted interpretations. The central point of this critical analysis, due originally to Edwin Jaynes, is that Bell incorrectly applied probabilistic formulas involving conditional probabilities. In addition, mathematical technicalities that have complicated the understanding of the logical and mathematical setting in which current theory and experimentation are embedded are discussed. Finally, some historical speculations are offered on the sociological environment, in particular its misleading aspects, in which recent generations of physicists have lived and worked.

  16. Theorem Proving in Intel Hardware Design

    O'Leary, John

    2009-01-01

    For the past decade, a framework combining model checking (symbolic trajectory evaluation) and higher-order logic theorem proving has been in production use at Intel. Our tools and methodology have been used to formally verify execution cluster functionality (including floating-point operations) for a number of Intel products, including the Pentium® 4 and Core™ i7 processors. Hardware verification in 2009 is much more challenging than it was in 1999 - today's CPU chip designs contain many processor cores and significant firmware content. This talk will attempt to distill the lessons learned over the past ten years, discuss how they apply to today's problems, and outline some future directions.

  17. Virial Theorem in Nonlocal Newtonian Gravity

    Bahram Mashhoon

    2016-05-01

    Nonlocal gravity is the recent classical nonlocal generalization of Einstein’s theory of gravitation in which the past history of the gravitational field is taken into account. In this theory, nonlocality appears to simulate dark matter. The virial theorem for the Newtonian regime of nonlocal gravity theory is derived, and its consequences for “isolated” astronomical systems in virial equilibrium at the present epoch are investigated. In particular, for a sufficiently isolated nearby galaxy in virial equilibrium, the galaxy’s baryonic diameter D_0 (namely, the diameter of the smallest sphere that completely surrounds the baryonic system at the present time) is predicted to be larger than the effective dark matter fraction f_DM times a universal length, the basic nonlocality length scale λ_0 ≈ 3 ± 2 kpc.

  18. On a curvature-statistics theorem

    Calixto, M; Aldaya, V

    2008-01-01

    The spin-statistics theorem in quantum field theory relates the spin of a particle to the statistics obeyed by that particle. Here we investigate an interesting correspondence or connection between curvature (κ = ±1) and quantum statistics (Fermi-Dirac and Bose-Einstein, respectively). The interrelation between both concepts is established through vacuum coherent configurations of zero modes in quantum field theory on the compact O(3) and noncompact O(2,1) (spatial) isometry subgroups of de Sitter and anti-de Sitter spaces, respectively. The high-frequency limit is retrieved as a (zero-curvature) group contraction to the Newton-Hooke (harmonic oscillator) group. We also make some comments on the physical significance of the vacuum energy density and the cosmological constant problem.

  19. On a curvature-statistics theorem

    Calixto, M [Departamento de Matematica Aplicada y Estadistica, Universidad Politecnica de Cartagena, Paseo Alfonso XIII 56, 30203 Cartagena (Spain); Aldaya, V [Instituto de Astrofisica de Andalucia, Apartado Postal 3004, 18080 Granada (Spain)], E-mail: Manuel.Calixto@upct.es

    2008-08-15

    The spin-statistics theorem in quantum field theory relates the spin of a particle to the statistics obeyed by that particle. Here we investigate an interesting correspondence or connection between curvature (κ = ±1) and quantum statistics (Fermi-Dirac and Bose-Einstein, respectively). The interrelation between both concepts is established through vacuum coherent configurations of zero modes in quantum field theory on the compact O(3) and noncompact O(2,1) (spatial) isometry subgroups of de Sitter and anti-de Sitter spaces, respectively. The high-frequency limit is retrieved as a (zero-curvature) group contraction to the Newton-Hooke (harmonic oscillator) group. We also make some comments on the physical significance of the vacuum energy density and the cosmological constant problem.

  20. An interlacing theorem for reversible Markov chains

    Grone, Robert; Salamon, Peter; Hoffmann, Karl Heinz

    2008-01-01

    Reversible Markov chains are an indispensable tool in the modeling of a vast class of physical, chemical, biological and statistical problems. Examples include the master equation descriptions of relaxing physical systems, stochastic optimization algorithms such as simulated annealing, chemical dynamics of protein folding and Markov chain Monte Carlo statistical estimation. Very often the large size of the state spaces requires the coarse graining or lumping of microstates into fewer mesoscopic states, and a question of utmost importance for the validity of the physical model is how the eigenvalues of the corresponding stochastic matrix change under this operation. In this paper we prove an interlacing theorem which gives explicit bounds on the eigenvalues of the lumped stochastic matrix. (fast track communication)
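
    A small numerical illustration of the setting (a hedged sketch: a random reversible chain with a hypothetical lumping into three blocks; it displays the two spectra the theorem bounds, not the proof itself):

      import numpy as np

      rng = np.random.default_rng(1)

      # A reversible chain: symmetric weights W give P[i, j] = W[i, j] / W[i, :].sum().
      W = rng.random((6, 6))
      W = (W + W.T) / 2
      P = W / W.sum(axis=1, keepdims=True)
      pi = W.sum(axis=1) / W.sum()              # stationary distribution

      # Lump the six microstates into three mesoscopic states.
      blocks = [[0, 1], [2, 3], [4, 5]]
      Q = np.zeros((3, 3))
      for a, A in enumerate(blocks):
          for b, B in enumerate(blocks):
              Q[a, b] = sum(pi[i] * P[i, j] for i in A for j in B) / pi[A].sum()

      # Both spectra are real by reversibility; the interlacing theorem bounds
      # each eigenvalue of the lumped chain by eigenvalues of the original one.
      print("spectrum of P:", np.sort(np.linalg.eigvals(P).real).round(3))
      print("spectrum of Q:", np.sort(np.linalg.eigvals(Q).real).round(3))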

  1. An interlacing theorem for reversible Markov chains

    Grone, Robert; Salamon, Peter [Department of Mathematics and Statistics, San Diego State University, San Diego, CA 92182-7720 (United States); Hoffmann, Karl Heinz [Institut fuer Physik, Technische Universitaet Chemnitz, D-09107 Chemnitz (Germany)

    2008-05-30

    Reversible Markov chains are an indispensable tool in the modeling of a vast class of physical, chemical, biological and statistical problems. Examples include the master equation descriptions of relaxing physical systems, stochastic optimization algorithms such as simulated annealing, chemical dynamics of protein folding and Markov chain Monte Carlo statistical estimation. Very often the large size of the state spaces requires the coarse graining or lumping of microstates into fewer mesoscopic states, and a question of utmost importance for the validity of the physical model is how the eigenvalues of the corresponding stochastic matrix change under this operation. In this paper we prove an interlacing theorem which gives explicit bounds on the eigenvalues of the lumped stochastic matrix. (fast track communication)

  2. Asset management using an extended Markowitz theorem

    Paria Karimi

    2014-06-01

    The Markowitz theorem is one of the most popular techniques for asset management, and the method has been widely and successfully used in many applications. In this paper, we present a multi-objective Markowitz model to determine asset allocation under cardinality constraints. The resulting model is an NP-hard problem, and the proposed study uses two metaheuristics, namely a genetic algorithm (GA) and particle swarm optimization (PSO), to find efficient solutions. The proposed study has been applied to data collected from the Tehran Stock Exchange over the period 2009-2011. The study considers four objectives: cash return, 12-month return, 36-month return and Lower Partial Moment (LPM). The results indicate that there was no statistical difference between the implementations of the PSO and GA methods.
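
    As a toy illustration of the cardinality-constrained mean-variance objective (a hedged sketch only: exhaustive subset enumeration on synthetic data stands in for the paper's GA/PSO metaheuristics, a single utility stands in for its four objectives, and every number below is hypothetical):

      import itertools
      import numpy as np

      rng = np.random.default_rng(2)

      # Toy data: 36 months of returns for 8 hypothetical assets.
      R = rng.normal(0.01, 0.05, size=(36, 8))
      mu, Sigma = R.mean(axis=0), np.cov(R.T)
      K, lam = 3, 10.0                          # cardinality limit, risk aversion

      def utility(w):
          # Mean-variance utility: expected return penalized by variance.
          return mu @ w - lam * (w @ Sigma @ w)

      best = None
      for subset in itertools.combinations(range(8), K):
          idx = list(subset)
          # Unconstrained optimum on the subset, crudely projected to the simplex.
          w_sub = np.linalg.solve(2 * lam * Sigma[np.ix_(idx, idx)], mu[idx])
          w_sub = np.clip(w_sub, 0.0, None)
          if w_sub.sum() == 0:
              continue
          w = np.zeros(8)
          w[idx] = w_sub / w_sub.sum()
          if best is None or utility(w) > utility(best):
              best = w

      print("weights:", np.round(best, 3))
      print("utility:", round(utility(best), 5))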

  3. Selecting the minimum prediction base of historical data to perform 5-year predictions of the cancer burden: The GoF-optimal method.

    Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon

    2015-06-01

    Predicting the future burden of cancer is a key issue for health services planning, where selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland using simple linear and log-linear Poisson models was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and predictions were then made for 37 and 33 single years in these periods, respectively. The performance of three fixed prediction bases (the last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected by the GoF-optimal method reached the highest COV and the lowest DR, and (ii) the best alternative to the GoF-optimal method was the strategy using a prediction base of 5 years. The GoF-optimal approach can be used as a selection criterion for finding an adequate prediction base. Copyright © 2015 Elsevier Ltd. All rights reserved.
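
    The selection idea can be sketched in a few lines of Python. This is a crude stand-in, assuming simulated counts and an ordinary least-squares fit on log counts in place of a proper Poisson GLM; the window-selection loop is the point, not the fitting details:

      import numpy as np

      rng = np.random.default_rng(3)

      def gof_optimal_base(years, counts, max_base=20):
          # Pick the historical window whose log-linear fit shows the best
          # goodness of fit (smallest Pearson chi-square per degree of freedom).
          best = None
          for b in range(3, max_base + 1):
              y, c = years[-b:], counts[-b:].astype(float)
              coef = np.polyfit(y, np.log(c), 1)       # crude log-linear fit
              fit = np.exp(np.polyval(coef, y))
              chi2 = ((c - fit) ** 2 / fit).sum() / max(b - 2, 1)
              if best is None or chi2 < best[0]:
                  best = (chi2, b, coef)
          return best

      years = np.arange(1990, 2011)
      counts = np.round(100 * np.exp(0.02 * (years - 1990))
                        + rng.normal(0, 4, years.size))
      chi2, base, coef = gof_optimal_base(years, counts)
      print(f"selected base: last {base} years (chi2/dof = {chi2:.2f})")
      print("5-year prediction:",
            np.exp(np.polyval(coef, np.arange(2011, 2016))).round(1))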

  4. An impossibility theorem for parameter independent hidden variable theories

    Leegwater, Gijs

    2016-05-01

    Recently, Roger Colbeck and Renato Renner (C&R) have claimed that '[n]o extension of quantum theory can have improved predictive power' (Colbeck & Renner, 2011, 2012b). If correct, this is a spectacular impossibility theorem for hidden variable theories, which is more general than the theorems of Bell (1964) and Leggett (2003). Also, C&R have used their claim in an attempt to prove that a system's quantum-mechanical wave function is in a one-to-one correspondence with its 'ontic' state (Colbeck & Renner, 2012a). C&R's claim essentially means that in any hidden variable theory that is compatible with quantum-mechanical predictions, probabilities of measurement outcomes are independent of these hidden variables. This makes such variables otiose. On closer inspection, however, the generality and validity of the claim can be contested. First, it is based on an assumption called 'Freedom of Choice'. As the name suggests, this assumption involves the independence of an experimenter's choice of measurement settings. But in the way C&R define this assumption, a no-signalling condition is surreptitiously presupposed, making the assumption less innocent than it sounds. When using this definition, any hidden variable theory violating parameter independence, such as Bohmian Mechanics, is immediately shown to be incompatible with quantum-mechanical predictions. Also, the argument of C&R is hard to follow and their mathematical derivation contains several gaps, some of which cannot be closed in the way they suggest. We shall show that these gaps can be filled. The issue with the 'Freedom of Choice' assumption can be circumvented by explicitly assuming parameter independence. This makes the result less general, but better founded. We then obtain an impossibility theorem for hidden variable theories satisfying parameter independence only. As stated above, such hidden variable theories are impossible in the sense that any supplemental variables have no bearing on outcome probabilities.

  5. Mechanistic slumber vs. statistical insomnia: the early history of Boltzmann's H-theorem (1868-1877)

    Badino, M.

    2011-11-01

    An intricate, long, and occasionally heated debate surrounds Boltzmann's H-theorem (1872) and his combinatorial interpretation of the second law (1877). After almost a century of devoted and knowledgeable scholarship, there is still no agreement as to whether Boltzmann changed his view of the second law after Loschmidt's 1876 reversibility argument or whether he had already been holding a probabilistic conception for some years at that point. In this paper, I argue that there was no abrupt statistical turn. In the first part, I discuss the development of Boltzmann's research from 1868 to the formulation of the H-theorem. This reconstruction shows that Boltzmann adopted a pluralistic strategy based on the interplay between a kinetic and a combinatorial approach. Moreover, it shows that the extensive use of asymptotic conditions allowed Boltzmann to bracket the problem of exceptions. In the second part I suggest that both Loschmidt's challenge and Boltzmann's response to it did not concern the H-theorem. The close relation between the theorem and the reversibility argument is a consequence of later investigations on the subject.

  6. Minimum entropy principle-based solar cell operation without a pn-junction and a thin CdS layer to extract the holes from the emitter

    Böer, Karl W.

    2016-10-01

    The solar cell does not use a pn-junction to separate electrons from holes, but uses an undoped CdS layer that is p-type inverted when attached to a p-type collector; it collects the holes while rejecting the backflow of electrons and thereby prevents junction leakage. The operation of the solar cell is determined by the minimum entropy principle of the cell and its external circuit, which pins the electrochemical potential, i.e., the Fermi level of the base electrode, to the operating (maximum power point) voltage. It leaves the Fermi level of the metal electrode of the CdS unchanged, since CdS does not participate in the photo-emf. All photoelectric action is generated by the holes excited by the light, which causes the shift of the quasi-Fermi levels in the generator and supports the diffusion current in operating conditions; this is responsible for the measured solar maximum power current. The open-circuit voltage (Voc) can approach its theoretical limit of the band gap of the collector at 0 K, and the cell reaches an efficiency at AM1 of 21% for the thin-film CdS/CdTe given as an example here. However, the series resistance of the CdS forces a limitation of its thickness, preferably to below 200 Å, to avoid unnecessary reduction in efficiency or Voc. The operation of the CdS solar cell does not involve heated carriers. It is initiated by the field at the CdS/CdTe interface, which exceeds 20 kV/cm and is sufficient to cause extraction of holes by the CdS that is inverted to become p-type. Here a strong doubly charged intrinsic donor can cause a negative differential conductivity that switches on a high-field domain, stabilized by the minimum entropy principle, permitting an efficient transport of the holes from the CdTe to the base electrode. Experimental results on the band model of CdS/CdTe solar cells are given and show that the conduction bands are connected in the dark, where the electron current must be continuous, and the valence bands are

  7. On Pythagoras Theorem for Products of Spectral Triples

    D'Andrea, Francesco; Martinetti, Pierre

    2013-05-01

    We discuss a version of the Pythagoras theorem in noncommutative geometry. The usual Pythagoras theorem can be formulated in terms of Connes' distance between pure states in the product of commutative spectral triples. We investigate the generalization to both non-pure states and arbitrary spectral triples. We show that the Pythagoras theorem is replaced by some Pythagoras inequalities, which we prove for the product of arbitrary (i.e., not necessarily commutative) spectral triples, assuming only a unitality condition. We show that these inequalities are optimal, and we provide non-unital counter-examples inspired by K-homology.

  8. Fluctuation theorem for Hamiltonian Systems: Le Chatelier's principle

    Evans, Denis J.; Searles, Debra J.; Mittag, Emil

    2001-05-01

    For thermostated dissipative systems, the fluctuation theorem gives an analytical expression for the ratio of probabilities that the time-averaged entropy production in a finite system observed for a finite time takes on a specified value compared to the negative of that value. In the past, it has been generally thought that the presence of some thermostating mechanism was an essential component of any system that satisfies a fluctuation theorem. In the present paper, we point out that a fluctuation theorem can be derived for purely Hamiltonian systems, with or without applied dissipative fields.
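
    For reference, the fluctuation theorem in question is usually written as follows (a standard form, stated here as a hedged gloss on the abstract rather than the paper's exact notation), with Σ̄_t the time-averaged entropy production over a trajectory of duration t:

      \[
        \frac{P(\bar{\Sigma}_t = A)}{P(\bar{\Sigma}_t = -A)} = e^{A t} .
      \]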

  9. An Almost Sure Ergodic Theorem for Quasistatic Dynamical Systems

    Stenlund, Mikko

    2016-01-01

    We prove an almost sure ergodic theorem for abstract quasistatic dynamical systems, as an attempt to take steps toward an ergodic theory of such systems. The result at issue is meant to serve as a working counterpart of Birkhoff’s ergodic theorem, which fails in the quasistatic setup. It is formulated so that the conditions, which essentially require sufficiently good memory-loss properties, could be verified in a straightforward way in physical applications. We also introduce the concept of a physical family of measures for a quasistatic dynamical system. These objects manifest themselves, for instance, in numerical experiments. We then illustrate the use of the theorem by examples.

  10. A note on the weighted Khintchine-Groshev Theorem

    Hussain, Mumtaz; Yusupova, Tatiana

    Let W(m,n;ψ) denote the set of (ψ1,…,ψn)-approximable points in R^{mn}. The classical Khintchine-Groshev theorem assumes a monotonicity condition on the approximating functions ψ. Removing monotonicity from the Khintchine-Groshev theorem is attributed to different authors for different cases of m and n. It cannot be removed for m=n=1, as Duffin and Schaeffer provided the counterexample. We deal with the only remaining case, m=2, and thereby remove all unnecessary conditions from the Khintchine-Groshev theorem.

  11. Quantum voting and violation of Arrow's impossibility theorem

    Bao, Ning; Yunger Halpern, Nicole

    2017-06-01

    We propose a quantum voting system in the spirit of quantum games such as the quantum prisoner's dilemma. Our scheme enables a constitution to violate a quantum analog of Arrow's impossibility theorem. Arrow's theorem is a claim proved deductively in economics: Every (classical) constitution endowed with three innocuous-seeming properties is a dictatorship. We construct quantum analogs of constitutions, of the properties, and of Arrow's theorem. A quantum version of majority rule, we show, violates this quantum Arrow conjecture. Our voting system allows for tactical-voting strategies reliant on entanglement, interference, and superpositions. This contribution to quantum game theory helps elucidate how quantum phenomena can be harnessed for strategic advantage.

  12. Convergence theorems for certain classes of nonlinear mappings

    Chidume, C.E.

    1992-01-01

    Recently, Xinlong Weng announced a convergence theorem for the iterative approximation of fixed points of locally strictly pseudo-contractive mappings in uniformly smooth Banach spaces (Proc. Amer. Math. Soc. Vol. 113, No. 3 (1991) 727-731). An example is presented which shows that this theorem of Weng is false. A convergence theorem is then proved, in certain real Banach spaces, for approximating a solution of the inclusion f ∈ x + Tx, where T is a set-valued monotone operator. An explicit error estimate is also presented. (author). 26 refs

  13. Direct and converse theorems the elements of symbolic logic

    Gradshtein, I S; Stark, M; Ulam, S

    1963-01-01

    Direct and Converse Theorems: The Elements of Symbolic Logic, Third Edition explains the logical relations between direct, converse, inverse, and inverse converse theorems, as well as the concept of necessary and sufficient conditions. This book consists of two chapters. The first chapter is devoted to the question of negation. Connected with the question of the negation of a proposition are interrelations of the direct and converse and also of the direct and inverse theorems; the interrelations of necessary and sufficient conditions; and the definition of the locus of a point. The second chap

  14. A primer on Higgs boson low-energy theorems

    Dawson, S.; Haber, H.E.; California Univ., Santa Cruz, CA

    1989-05-01

    We give a pedagogical review of Higgs boson low-energy theorems and their applications in the study of light Higgs boson interactions with mesons and baryons. In particular, it is shown how to combine the chiral Lagrangian method with the Higgs low-energy theorems to obtain predictions for the interaction of Higgs bosons and pseudoscalar mesons. Finally, we discuss the relation between the low-energy theorems and a technique which makes use of the trace of the QCD energy-momentum tensor. 35 refs

  15. An Almost Sure Ergodic Theorem for Quasistatic Dynamical Systems

    Stenlund, Mikko, E-mail: mikko.stenlund@helsinki.fi [University of Helsinki, Department of Mathematics and Statistics (Finland)

    2016-09-15

    We prove an almost sure ergodic theorem for abstract quasistatic dynamical systems, as an attempt to take steps toward an ergodic theory of such systems. The result at issue is meant to serve as a working counterpart of Birkhoff’s ergodic theorem, which fails in the quasistatic setup. It is formulated so that the conditions, which essentially require sufficiently good memory-loss properties, could be verified in a straightforward way in physical applications. We also introduce the concept of a physical family of measures for a quasistatic dynamical system. These objects manifest themselves, for instance, in numerical experiments. We then illustrate the use of the theorem by examples.

  16. Flat deformation theorem and symmetries in spacetime

    Llosa, Josep; Carot, Jaume

    2009-01-01

    The flat deformation theorem states that given a semi-Riemannian analytic metric g on a manifold, locally there always exists a two-form F, a scalar function c, and an arbitrarily prescribed scalar constraint depending on the point x of the manifold and on F and c, say Ψ(c, F, x) = 0, such that the deformed metric η = c g − ε F² is semi-Riemannian and flat. In this paper we first show that the above result implies that every (Lorentzian analytic) metric g may be written in the extended Kerr-Schild form, namely η_{ab} := a g_{ab} − 2 b k_{(a} l_{b)}, where η is flat and k_a, l_a are two null covectors such that k^a l_a = −1; next we show how the symmetries of g are connected to those of η. More precisely, we show that if the original metric g admits a conformal Killing vector (including Killing vectors and homotheties), then the deformation may be carried out in a way such that the flat deformed metric η 'inherits' that symmetry.

  17. The Michaelis-Menten-Stueckelberg Theorem

    Alexander N. Gorban

    2011-05-01

    We study chemical reactions with complex mechanisms under two assumptions: (i) intermediates are present in small amounts (the quasi-steady-state hypothesis, QSS) and (ii) they are in equilibrium relations with substrates (the quasiequilibrium hypothesis, QE). Under these assumptions, we prove the generalized mass action law together with the basic relations between kinetic factors, which are sufficient for the positivity of the entropy production but hold even without microreversibility, when detailed balance is not applicable. Even though QE and QSS produce useful approximations by themselves, only the combination of these assumptions can take us beyond the “rarefied gas” limit or the “molecular chaos” hypotheses. We do not use any a priori form of the kinetic law for the chemical reactions and describe their equilibria by thermodynamic relations. The transformations of the intermediate compounds can be described by Markov kinetics because of their low density (low density of elementary events). This combination of assumptions was introduced by Michaelis and Menten in 1913. In 1952, Stueckelberg used the same assumptions for gas kinetics and produced the remarkable semi-detailed balance relations between collision rates in the Boltzmann equation, which are weaker than the detailed balance conditions but still sufficient for the Boltzmann H-theorem to be valid. Our results are obtained within this Michaelis-Menten-Stueckelberg conceptual framework.
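
    The Michaelis-Menten setting itself is easy to reproduce numerically. The following Python sketch (hypothetical rate constants; a plain mass-action model, not the paper's general formalism) integrates E + S ⇌ ES → E + P and compares the full rate k2[ES] with the quasi-steady-state rate law v = Vmax[S]/(Km + [S]):

      import numpy as np
      from scipy.integrate import solve_ivp

      # Mass-action kinetics for E + S <-> ES -> E + P, hypothetical constants.
      k1, km1, k2 = 1.0, 0.5, 0.3
      E0, S0 = 0.1, 2.0

      def rhs(t, y):
          s, es = y
          e = E0 - es                                  # free enzyme
          return [-k1 * e * s + km1 * es,              # d[S]/dt
                  k1 * e * s - (km1 + k2) * es]        # d[ES]/dt

      sol = solve_ivp(rhs, (0.0, 50.0), [S0, 0.0], dense_output=True)

      Km, Vmax = (km1 + k2) / k1, k2 * E0
      s, es = sol.sol(10.0)                            # state at t = 10
      print("full mass-action rate:", k2 * es)
      print("QSS (Michaelis-Menten):", Vmax * s / (Km + s))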

  18. Calculation of Appropriate Minimum Size of Isolation Rooms based on Questionnaire Survey of Experts and Analysis on Conditions of Isolation Room Use

    Won, An-Na; Song, Hae-Eun; Yang, Young-Kwon; Park, Jin-Chul; Hwang, Jung-Ha

    2017-07-01

    After the outbreak of the MERS (Middle East Respiratory Syndrome) epidemic, issues were raised regarding the response capabilities of medical institutions, including the lack of isolation rooms at hospitals. Since then, the government of Korea has been revising regulations to enforce medical laws in order to expand the operation of isolation rooms and to strengthen standards regarding their mandatory installation at hospitals. Among general and tertiary hospitals in Korea, a total of 159 are estimated to be required to install isolation rooms to meet minimum standards. For the purpose of contributing to hospital construction plans in the future, this study conducted a questionnaire survey of experts and analysed the environment and devices necessary in isolation rooms, to determine their appropriate minimum size to treat patients. The result of the analysis is as follows: First, isolation rooms at hospitals are required to have a minimum 3,300 mm minor axis and a minimum 5,000 mm major axis for the isolation room itself, and a minimum 1,800 mm minor axis for the antechamber where personal protective equipment is donned and removed. Second, the 15 ㎡-or-larger standard for the floor area of isolation rooms will have to be reviewed, and standards for the minimum width of isolation rooms will have to be established.

  19. Rising above the Minimum Wage.

    Even, William; Macpherson, David

    An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population…

  20. Two proofs of Fine's theorem

    Halliwell, J.J.

    2014-01-01

    Fine's theorem concerns the question of determining the conditions under which a certain set of probabilities for pairs of four bivalent quantities may be taken to be the marginals of an underlying probability distribution. The eight CHSH inequalities are well-known to be necessary conditions, but Fine's theorem is the striking result that they are also sufficient conditions. Here two transparent and self-contained proofs of Fine's theorem are presented. The first is a physically motivated proof using an explicit local hidden variables model. The second is an algebraic proof which uses a representation of the probabilities in terms of correlation functions. - Highlights: • A discussion of the various approaches to proving Fine's theorem. • A new physically-motivated proof using a local hidden variables model. • A new algebraic proof. • A new form of the CHSH inequalities
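
    For concreteness, one of the eight CHSH inequalities mentioned above reads, in terms of the correlation functions E(a,b) for pairs of settings (the remaining seven follow by permuting settings and signs):

      \[
        -2 \le E(a,b) + E(a,b') + E(a',b) - E(a',b') \le 2 .
      \]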

  1. Forest Carbon Uptake and the Fundamental Theorem of Calculus

    Zobitz, John

    2013-01-01

    Using the fundamental theorem of calculus and numerical integration, we investigate carbon absorption of ecosystems with measurements from a global database. The results illustrate the dynamic nature of ecosystems and their ability to absorb atmospheric carbon.
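
    A minimal Python sketch of the idea (with an entirely hypothetical diurnal flux curve, not data from the database used in the paper): integrating the net ecosystem exchange over a day with the trapezoid rule gives the day's carbon balance.

      import numpy as np

      # Hypothetical half-hourly net ecosystem exchange (NEE) for one day,
      # in micromol CO2 m^-2 s^-1 (negative = uptake by the ecosystem).
      t = np.linspace(0.0, 24 * 3600.0, 49)
      nee = -8.0 * np.exp(-((t - 12 * 3600) / (4 * 3600)) ** 2) + 2.0

      # Fundamental theorem of calculus: cumulative exchange is the integral
      # of the instantaneous flux; approximate it with the trapezoid rule.
      total = np.trapz(nee, t) * 1e-6 * 12.011         # micromol -> g of carbon
      print(f"net carbon exchange: {total:.2f} g C m^-2 day^-1")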

  2. The power counting theorem for Feynman integrals with massless propagators

    Lowenstein, J.H.

    2000-01-01

    Dyson's power counting theorem is extended to the case where some of the mass parameters vanish. Weinberg's ultraviolet convergence conditions are supplemented by infrared convergence conditions which combined are sufficient for the convergence of Feynman integrals. (orig.)

  3. The power counting theorem for Feynman integrals with massless propagators

    Lowenstein, J.H.

    1975-01-01

    Dyson's power counting theorem is extended to the case where some of the mass parameters vanish. Weinberg's ultraviolet convergence conditions are supplemented by infrared convergence conditions which combined are sufficient for the convergence of Feynman integrals. (orig.)

  4. A divergence theorem for pseudo-Finsler spaces

    Minguzzi, E.

    2015-01-01

    We study the divergence theorem on pseudo-Finsler spaces and obtain a completely Finslerian version for spaces having a vanishing mean Cartan torsion. This result helps to clarify the problem of energy-momentum conservation in Finsler gravity theories.

  5. The Weinberg-Witten theorem on massless particles: an essay

    Loebbert, F.

    2008-01-01

    In this essay we deal with the Weinberg-Witten theorem which imposes limitations on massless particles. First we motivate a classification of massless particles given by the Poincare group as the symmetry group of Minkowski spacetime. We then use the fundamental structure of the background in the form of Poincare covariance to derive restrictions on charged massless particles known as the Weinberg-Witten theorem. We address possible misunderstandings in the proof of this theorem motivated by several papers on this topic. In the last section the consequences of the theorem are discussed. We treat it in the context of known particles and as a constraint for emergent theories. (Abstract Copyright [2008], Wiley Periodicals, Inc.)

  6. Integrable equations, addition theorems, and the Riemann-Schottky problem

    Buchstaber, Viktor M; Krichever, I M

    2006-01-01

    The classical Weierstrass theorem claims that, among the analytic functions, the only functions admitting an algebraic addition theorem are the elliptic functions and their degenerations. This survey is devoted to far-reaching generalizations of this result that are motivated by the theory of integrable systems. The authors discovered a strong form of the addition theorem for theta functions of Jacobian varieties, and this form led to new approaches to known problems in the geometry of Abelian varieties. It is shown that strong forms of addition theorems arise naturally in the theory of the so-called trilinear functional equations. Diverse aspects of the approaches suggested here are discussed, and some important open problems are formulated.

  7. A priori knowledge and the Kochen-Specker theorem

    Brunet, Olivier

    2007-01-01

    We introduce and formalize a notion of 'a priori knowledge' about a quantum system, and show some properties about this form of knowledge. Finally, we show that the Kochen-Specker theorem follows directly from this study

  8. Supersymmetric extension of the Adler-Bardeen theorem

    Novikov, V.A.; Zakharov, V.I.; Shifman, M.A.; Vainshtein, A.I.

    1985-01-01

    A supersymmetric generalization of the Adler-Bardeen theorem in SUSY gauge theories is given. We show that within the Adler-Bardeen procedure, both the conformal and axial anomalies are exhausted by one loop. (orig.)

  9. An Elementary Proof of the Polynomial Matrix Spectral Factorization Theorem

    Ephremidze, Lasha

    2010-01-01

    A very simple and short proof of the polynomial matrix spectral factorization theorem (on the unit circle as well as on the real line) is presented, which relies on elementary complex analysis and linear algebra.

  10. Perron–Frobenius theorem for nonnegative multilinear forms and extensions

    Friedland, S.; Gaubert, S.; Han, L.

    2013-01-01

    We prove an analog of Perron-Frobenius theorem for multilinear forms with nonnegative coefficients, and more generally, for polynomial maps with nonnegative coefficients. We determine the geometric convergence rate of the power algorithm to the unique normalized eigenvector.
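
    The matrix case of the power algorithm gives the flavor of the result. A minimal Python sketch (the nonnegative matrix and tolerances are hypothetical; the multilinear-form version iterates a polynomial map in the same fashion):

      import numpy as np

      def power_algorithm(A, tol=1e-12, max_iter=1000):
          # Power iteration for an irreducible nonnegative matrix: converges
          # geometrically to the unique normalized positive eigenvector.
          x = np.ones(A.shape[0])
          for _ in range(max_iter):
              y = A @ x
              y /= y.sum()                     # normalize within the positive cone
              if np.abs(y - x).max() < tol:
                  break
              x = y
          rho = (A @ x).sum() / x.sum()        # Perron root estimate
          return x, rho

      A = np.array([[0.2, 0.7, 0.1],
                    [0.5, 0.1, 0.4],
                    [0.3, 0.3, 0.4]])
      v, rho = power_algorithm(A)
      print("Perron vector:", v.round(4), " Perron root:", round(rho, 4))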

  11. Quantum nonlocality and reality 50 years of Bell's theorem

    Gao, Shan

    2016-01-01

    Combining twenty-six original essays written by an impressive line-up of distinguished physicists and philosophers of physics, this anthology reflects some of the latest thoughts by leading experts on the influence of Bell's theorem on quantum physics. Essays progress from John Bell's character and background, through studies of his main work, and on to more speculative ideas, addressing the controversies surrounding the theorem, and investigating the theorem's meaning and its deep implications for the nature of physical reality. Combined, they present a powerful comment on the undeniable significance of Bell's theorem for the development of ideas in quantum physics over the past 50 years. Questions surrounding the assumptions and significance of Bell's work still inspire discussion in the field of quantum physics. Adding to this with a theoretical and philosophical perspective, this balanced anthology is an indispensable volume for students and researc...

  12. An imbedding theorem and its applications in degenerate elliptic equations

    Duong Minh Duc.

    1988-06-01

    We improve the Rellich-Kondrachov theorem and apply it to study strongly degenerate and singular elliptic equations. We obtain the maximum principle, Harnack's inequality and global regularity for solutions of those equations. (author). 11 refs

  13. Pragmatic, consensus-based minimum standards and structured interview to guide the selection and development of cancer support group leaders: a protocol paper.

    Pomery, Amanda; Schofield, Penelope; Xhilaga, Miranda; Gough, Karla

    2017-06-30

    Across the globe, peer support groups have emerged as a community-led approach to accessing support and connecting with others with cancer experiences. Little is known about qualities required to lead a peer support group or how to determine suitability for the role. Organisations providing assistance to cancer support groups and their leaders are currently operating independently, without a standard national framework or published guidelines. This protocol describes the methods that will be used to generate pragmatic consensus-based minimum standards and an accessible structured interview with user manual to guide the selection and development of cancer support group leaders. We will: (A) identify and collate peer-reviewed literature that describes qualities of support group leaders through a systematic review; (B) content analyse eligible documents for information relevant to requisite knowledge, skills and attributes of group leaders generally and specifically to cancer support groups; (C) use an online reactive Delphi method with an interdisciplinary panel of experts to produce a clear, suitable, relevant and appropriate structured interview comprising a set of agreed questions with behaviourally anchored rating scales; (D) produce a user manual to facilitate standard delivery of the structured interview; (E) pilot the structured interview to improve clinical utility; and (F) field test the structured interview to develop a rational scoring model and provide a summary of existing group leader qualities. The study is approved by the Department Human Ethics Advisory Group of The University of Melbourne. The study is based on voluntary participation and informed written consent, with participants able to withdraw at any time. The results will be disseminated at research conferences and peer review journals. Presentations and free access to the developed structured interview and user manual will be available to cancer agencies. © Article author(s) (or their

  14. Quantum work fluctuation theorem: Nonergodic Brownian motion case

    Bai, Zhan-Wu

    2014-01-01

    The work fluctuations of a quantum Brownian particle driven by an external force in a general nonergodic heat bath are studied under a general initial state. The exact analytical expression of the work probability distribution function is derived. Results show the existence of a quantum asymptotic fluctuation theorem, which is in general not a direct generalization of its classical counterpart. The form of this theorem is dependent on the structure of the heat bath and the specified initial condition.

  15. Probability densities and the random variable transformation theorem

    Ramshaw, J.D.

    1985-01-01

    D. T. Gillespie recently derived a random variable transformation theorem relating to the joint probability densities of functionally dependent sets of random variables. The present author points out that the theorem can be derived as an immediate corollary of a simpler and more fundamental relation. In this relation the probability density is represented as a delta function averaged over an unspecified distribution of unspecified internal random variables. The random variable transformation is derived from this relation
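
    The "simpler and more fundamental relation" can be written compactly (a standard identity, reproduced here as a hedged sketch of the argument): the density of Y = f(X) is a delta function averaged over the distribution of X,

      \[
        p_Y(y) = \big\langle \delta\big(y - f(X)\big) \big\rangle
               = \int p_X(x)\, \delta\big(y - f(x)\big)\, dx .
      \]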

  16. A short list color proof of Grotzsch's theorem

    Thomassen, Carsten

    2000-01-01

    We give a short proof of the result that every planar graph of girth 5 is 3-choosable, and hence also of Grötzsch's theorem saying that every planar triangle-free graph is 3-colorable.

  17. Locally Hamiltonian systems with symmetry and a generalized Noether's theorem

    Carinena, J.F.; Ibort, L.A.

    1985-01-01

    An analysis of global aspects of the theory of symmetry groups G of locally Hamiltonian dynamical systems is carried out for particular cases of either the symmetry group, or the differentiable manifold M supporting the symplectic structure, or the action of G on M. In each case a generalization of Noether's theorem is obtained. The classical Noether theorem for Lagrangian systems is also looked at from a modern perspective.

  18. Metrical theorems on systems of small inhomogeneous linear forms

    Hussain, Mumtaz; Kristensen, Simon

    In this paper we establish complete Khintchine-Groshev and Schmidt type theorems for inhomogeneous small linear forms in the so-called doubly metric case, in which the inhomogeneous parameter is not fixed.

  19. Extension and reconstruction theorems for the Urysohn universal metric space

    Kubiś, Wieslaw; Rubin, M.

    2010-01-01

    Roč. 60, č. 1 (2010), s. 1-29 ISSN 0011-4642 R&D Projects: GA AV ČR IAA100190901 Institutional research plan: CEZ:AV0Z10190503 Keywords : Urysohn space * bilipschitz homeomorphism * modulus of continuity * reconstruction theorem * extension theorem Subject RIV: BA - General Mathematics Impact factor: 0.265, year: 2010 http://dml.cz/handle/10338.dmlcz/140544

  20. A New Simple Approach for Entropy and Carnot Theorem

    Veliev, E. V.

    2004-01-01

    Entropy and the Carnot theorem occupy a central place in typical thermodynamics courses at the university level. In this work, we suggest a new simple approach to introducing the concept of entropy. Using a simple procedure in the T-V plane, we prove that ∮dQ/T = 0 for reversible cyclic processes, which is sufficient to define entropy. Also, using reversible processes in the T-S plane, we give an alternative simple proof of the Carnot theorem.
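
    In display form, the two standard statements at play here (quoted as textbook facts, not as the note's own derivation) are the Clausius equality for reversible cycles and the Carnot efficiency between reservoirs at T_h > T_c:

      \[
        \oint \frac{dQ_{\mathrm{rev}}}{T} = 0 , \qquad
        \eta_{\mathrm{Carnot}} = 1 - \frac{T_c}{T_h} .
      \]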

  1. On the c-theorem in higher genus

    Espriu, D.; Mavromatos, N.E.

    1990-01-01

    We study the extension of the c-theorem to arbitrary genus Riemann surfaces. We analyze the breakdown of conformal invariance caused by the need to cut off regions of moduli space to regulate divergences, and argue how these can be absorbed into the bare couplings on the sphere. An extension of the c-theorem then follows. We also discuss the relationship between the c-theorem and the effective action when corrections from higher genera are accounted for. (orig.)

  2. Some functional limit theorems for compound Cox processes

    Korolev, Victor Yu. [Faculty of Computational Mathematics and Cybernetics, Moscow State University, Moscow (Russian Federation); Institute of Informatics Problems FRC CSC RAS (Russian Federation); Chertok, A. V. [Faculty of Computational Mathematics and Cybernetics, Moscow State University, Moscow (Russian Federation); Euphoria Group LLC (Russian Federation); Korchagin, A. Yu. [Faculty of Computational Mathematics and Cybernetics, Moscow State University, Moscow (Russian Federation); Kossova, E. V. [Higher School of Economics National Research University, Moscow (Russian Federation); Zeifman, Alexander I. [Vologda State University, S.Orlova, 6, Vologda (Russian Federation); Institute of Informatics Problems FRC CSC RAS, ISEDT RAS (Russian Federation)

    2016-06-08

    An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.

  3. Some functional limit theorems for compound Cox processes

    Korolev, Victor Yu.; Chertok, A. V.; Korchagin, A. Yu.; Kossova, E. V.; Zeifman, Alexander I.

    2016-01-01

    An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.

  4. Cosmological constant, inflation and no-cloning theorem

    Huang Qingguo, E-mail: huangqg@itp.ac.cn [State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Science, Beijing 100190 (China); Lin Fengli, E-mail: linfengli@phy.ntnu.edu.tw [Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Department of Physics, National Taiwan Normal University, Taipei, 116, Taiwan (China)

    2012-05-30

    From the viewpoint of the no-cloning theorem we postulate a relation between the current accelerated expansion of our universe and the inflationary expansion in the very early universe. It implies that the fate of our universe should be a state with accelerated expansion. Quantitatively, we find that the no-cloning theorem leads to a lower bound on the cosmological constant which is compatible with observations.

  5. The Hellman-Feynman theorem at finite temperature

    Cabrera, A.; Calles, A.

    1990-01-01

    The possibility of a kind of Hellman-Feynman theorem at finite temperature is discussed. Using the canonical ensemble, the derivative of the internal energy is obtained when it depends explicitly on a parameter. It is found that in the low-temperature regime the derivative of the energy can be obtained as the statistical average of the derivative of the Hamiltonian operator. The result allows one to speak of the existence of the Hellman-Feynman theorem at finite temperatures. (Author)
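
    The exact finite-temperature statement usually cited in this context (a standard canonical-ensemble identity, given here as a hedged sketch rather than the paper's own derivation) follows from the free energy F(λ) = −β⁻¹ ln Tr e^{−βH(λ)}:

      \[
        \frac{\partial F}{\partial \lambda}
        = \frac{\mathrm{Tr}\!\left[e^{-\beta H(\lambda)}\, \partial H/\partial \lambda\right]}
               {\mathrm{Tr}\, e^{-\beta H(\lambda)}}
        = \left\langle \frac{\partial H}{\partial \lambda} \right\rangle_{\beta} .
      \]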

  6. Generalized Perron-Frobenius Theorem for Nonsquare Matrices

    Avin, Chen; Borokhovich, Michael; Haddad, Yoram; Kantor, Erez; Lotker, Zvi; Parter, Merav; Peleg, David

    2013-01-01

    The celebrated Perron-Frobenius (PF) theorem is stated for irreducible nonnegative square matrices, and provides a simple characterization of their eigenvectors and eigenvalues. The importance of this theorem stems from the fact that eigenvalue problems on such matrices arise in many fields of science and engineering, including dynamical systems theory, economics, statistics and optimization. However, many real-life scenarios give rise to nonsquare matrices. A natural question is whether the...

  7. Generalized Panofsky-Wenzel theorem and hybrid coupling

    Smirnov, A V

    2001-01-01

    The Panofsky-Wenzel theorem is reformulated for the case in which phase slippage between the wave and beam is not negligible. The extended theorem can be applied in analysis of detuned waveguides, RF injectors, bunchers, some tapered waveguides or high-power sources and multi-cell cavities for dipole and higher order modes. As an example, the relative contribution of the Lorentz' component of the deflecting force is calculated for a conventional circular disk-loaded waveguide.

  8. On the first case of Fermat's theorem for cyclotomic fields

    Kolyvagin, V A

    1999-01-01

    The classical criteria of Kummer, Mirimanov and Vandiver for the validity of the first case of Fermat's theorem for the field Q of rationals and prime exponent l are generalized to the l-th cyclotomic field Q(ζ_l) and exponent l. As a consequence, some simpler criteria are established. For example, the validity of the first case of Fermat's theorem is proved for the field Q(ζ_l) and exponent l on condition that l² does not divide 2^l − 2.

  9. The large deviation principle and steady-state fluctuation theorem for the entropy production rate of a stochastic process in magnetic fields

    Chen, Yong; Ge, Hao; Xiong, Jie; Xu, Lihu

    2016-01-01

    The fluctuation theorem is one of the major achievements in the field of nonequilibrium statistical mechanics during the past two decades. Very few results exist for the steady-state fluctuation theorem of the sample entropy production rate, in terms of a large deviation principle, for diffusion processes, due to technical difficulties. Here we give a proof of the steady-state fluctuation theorem for a diffusion process in magnetic fields, with explicit expressions for the free energy function and rate function. The proof is based on the Karhunen-Loève expansion of a complex-valued Ornstein-Uhlenbeck process.

  10. H–J–B Equations of Optimal Consumption-Investment and Verification Theorems

    Nagai, Hideo, E-mail: nagaih@kansai-u.ac.jp [Kansai University, Department of Mathematics, Faculty of Engineering Science (Japan)

    2015-04-15

    We consider a consumption-investment problem on infinite time horizon maximizing discounted expected HARA utility for a general incomplete market model. Based on dynamic programming approach we derive the relevant H–J–B equation and study the existence and uniqueness of the solution to the nonlinear partial differential equation. By using the smooth solution we construct the optimal consumption rate and portfolio strategy and then prove the verification theorems under certain general settings.

  11. Existence and convergence theorems for evolutionary hemivariational inequalities of second order

    Zijia Peng

    2015-03-01

    This article concerns a class of evolutionary hemivariational inequalities in the framework of an evolution triple. Based on the Rothe method, a monotonicity-compactness technique and the properties of Clarke's generalized derivative and gradient, existence and convergence theorems for these problems are established. The main idea of the proof is to use time differences to construct the approximate problems. The work generalizes existence results on evolution inclusions and hemivariational inequalities of second order.

  12. A note about Norbert Wiener and his contribution to Harmonic Analysis and Tauberian Theorems

    Almira, J. M.; Romero, A. E.

    2009-05-01

    In this note we explain the main motivations Norbert Wiener had for the creation of his Generalized Harmonic Analysis [13] and his Tauberian Theorems [14]. Although these papers belong to the purest mathematical tradition, they were deeply rooted in problems of engineering and physics, and Wiener was able to apply them to areas as diverse as optics, Brownian motion, filter theory, prediction theory and cybernetics.

  13. The Baetylus Theorem: the central disconnect driving consumer behavior and investment returns in Wearable Technologies

    Levine, James A.

    2016-01-01

    The Wearable Technology market may increase fivefold by the end of the decade. There is almost no academic investigation as to what drives the investment hypothesis in wearable technologies. This paper seeks to examine this issue from an evidence-based perspective. There is a fundamental disconnect in how consumers view wearable sensors and how companies market them; this is called The Baetylus Theorem where people believe (falsely) that by buying a wearable sensor they will receive health be...

  14. Positive Solutions for Fractional Differential Equations from Real Estate Asset Securitization via New Fixed Point Theorem

    Hao Tao

    2012-01-01

    analysis of real estate asset securitization by using the generalized fixed point theorem for weakly contractive mappings in partially ordered sets. Based on the analysis for the existence and uniqueness of the solution and scientific numerical calculation of the solution, in further study, some optimization schemes for traditional risk control process will be obtained, and then the main results of this paper can be applied to the forefront of research of real estate asset securitization.

  15. H–J–B Equations of Optimal Consumption-Investment and Verification Theorems

    Nagai, Hideo

    2015-01-01

    We consider a consumption-investment problem on infinite time horizon maximizing discounted expected HARA utility for a general incomplete market model. Based on a dynamic programming approach we derive the relevant H–J–B equation and study the existence and uniqueness of the solution to the nonlinear partial differential equation. By using the smooth solution we construct the optimal consumption rate and portfolio strategy and then prove the verification theorems under certain general settings.

  16. Minimum Error Entropy Classification

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners will also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.

  17. Optimization of memory use of fragment extension-based protein-ligand docking with an original fast minimum cost flow algorithm.

    Yanagisawa, Keisuke; Komine, Shunta; Kubota, Rikuto; Ohue, Masahito; Akiyama, Yutaka

    2018-03-16

    The need to accelerate large-scale protein-ligand docking in virtual screening against a huge compound database led researchers to propose a strategy that entails memorizing the evaluation result of the partial structure of a compound and reusing it to evaluate other compounds. However, the previous method required frequent disk accesses, resulting in insufficient acceleration. More efficient memory usage can therefore be expected to lead to further acceleration, and optimal memory usage can be achieved by solving the minimum cost flow problem. In this research, we propose a fast algorithm for the minimum cost flow problem that utilizes the characteristics of the graph generated for this problem as constraints. The proposed algorithm, which optimizes memory usage, was approximately seven times faster than existing minimum cost flow algorithms. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
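
    To illustrate the underlying optimization problem (not the authors' specialized algorithm), the following Python sketch solves a toy minimum-cost-flow instance with networkx; the fragment placements and storage tiers below are purely hypothetical stand-ins for the paper's memory-reuse decisions:

      import networkx as nx

      # Toy instance: 4 fragment evaluations must be "stored" somewhere; fast
      # storage is cheap to reuse but scarce. Node demand < 0 means supply.
      G = nx.DiGraph()
      G.add_node("s", demand=-4)
      G.add_node("t", demand=4)
      for slot, (cap, cost) in {"ram": (3, 1), "disk": (4, 5)}.items():
          G.add_edge("s", slot, capacity=cap, weight=cost)
          G.add_edge(slot, "t", capacity=cap, weight=0)

      flow = nx.min_cost_flow(G)               # network-simplex-based solver
      print(flow)                              # expects 3 units via ram, 1 via disk
      print("total cost:", nx.cost_of_flow(G, flow))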

  18. Virtual continuity of the measurable functions of several variables, and Sobolev embedding theorems

    Vershik, Anatoly; Zatitskiy, Pavel; Petrov, Fedor

    2013-01-01

    Luzin's classical theorem states that a measurable function of one variable is "almost" continuous. This is no longer so for measurable functions of several variables. The search for the right analogue of the Luzin theorem leads to a notion of virtually continuous functions of several variables. This probably new notion appears implicitly in statements such as embedding theorems and trace theorems for Sobolev spaces; in fact, it reveals their nature as theorems about virtual continuity. This notion is...

  19. Do Minimum Wages Fight Poverty?

    David Neumark; William Wascher

    1997-01-01

    The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of the employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...

  20. Analytical study of bound states in graphene nanoribbons and carbon nanotubes: The variable phase method and the relativistic Levinson theorem

    Miserev, D. S., E-mail: d.miserev@student.unsw.edu.au, E-mail: erazorheader@gmail.com [University of New South Wales, School of Physics (Australia)

    2016-06-15

    The problem of localized states in 1D systems with a relativistic spectrum, namely, graphene stripes and carbon nanotubes, is studied analytically. The bound state as a superposition of two chiral states is completely described by their relative phase, which is the foundation of the variable phase method (VPM) developed herein. Based on our VPM, we formulate and prove the relativistic Levinson theorem. The problem of bound states can be reduced to the analysis of closed trajectories of some vector field. Remarkably, the Levinson theorem appears as the Poincaré index theorem for these closed trajectories. The VPM equation is also reduced to the nonrelativistic and semiclassical limits. The limit of a small momentum p_y of transverse quantization is applicable to an arbitrary integrable potential. In this case, a single confined mode is predicted.
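
    For orientation, the familiar nonrelativistic form of Levinson's theorem (a textbook statement quoted here for context; the paper proves a relativistic analogue) relates the zero-energy phase shift in a fixed channel to the number of bound states n_b:

      \[
        \delta(0) - \delta(\infty) = n_b\, \pi .
      \]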