Elastic energy of liquid crystals in convex polyhedra
International Nuclear Information System (INIS)
Majumdar, A; Robbins, J M; Zyskin, M
2004-01-01
We consider nematic liquid crystals in a bounded, convex polyhedron described by a director field n(r) subject to tangent boundary conditions. We derive lower bounds for the one-constant elastic energy in terms of topological invariants. For a right rectangular prism and a large class of topologies, we derive upper bounds by introducing test configurations constructed from local conformal solutions of the Euler-Lagrange equation. The ratio of the upper and lower bounds depends only on the aspect ratios of the prism. As the aspect ratios are varied, the minimum-energy conformal state undergoes a sharp transition from being smooth to having singularities on the edges. (letter to the editor)
International Nuclear Information System (INIS)
Grigoletto, T.; Lordello, A.R.
1984-01-01
A spectrographic method is described for the quantitative determination of dysprosium in doped crystals of calcium sulphate. The consequences of changes in some parameters of the excitation conditions, such as arc current, electrode type and total or partial burning of the sample, on the analytical results are discussed. Matrix effects are investigated. Variations in the intensity of the spectral lines are verified by recording the spectrum on distinct photographic plates. The role of the internal standard in analytical reproducibility and in counterbalancing variations in the arc current and in the sample weight is studied. Accuracy is estimated by comparative analysis of two calcium sulphate samples by X-Ray Fluorescence, Neutron Activation and Inductively Coupled Plasma Emission Spectroscopy. (M.A.C.)
International Nuclear Information System (INIS)
Sellick, B.O.
1976-01-01
Lithium fluoride as received from the vendor in boule form is 38 × 38 × 13 mm. This block is cleaved to wafers of the desired thickness, x-ray-evaluated for 'd' spacing and greatest intensity, bent to the required radius, and then acid-etched to remove foreign material. The diffraction and dispersion characteristics of a wafer are analyzed using well-collimated tungsten x rays that strike the crystal and are diffracted onto no-screen x-ray film. If the crystal is satisfactory, it is mounted in a spectrogoniometer and rotated through an x-ray beam while a detector is set at the optimized angle for the diffracted x rays. The average intensity across the length of the crystal is recorded by multichannel scaling. Any imperfections appear as peaks or dips compared to the average intensity. The crystal next goes to a 10-channel, filter-fluorescer x-ray unit that compares zero-order intensity to diffracted Kα and Kβ intensity. Counts for 100-s intervals are taken in groups of three and averaged. Correction factors for instrument geometry, air, pinhole diameter at zero order, Kα-Kβ, barometric pressure, temperature, etc., are added to the efficiency calculations to obtain the crystal efficiency (epsilon) vs keV data. The crystal is mounted in the spectrograph or spectrometer and calibrated to either the detector or film plane by using direct radiation with proper x-ray filters or absorbers. The crystal is then ready for use.
DEFF Research Database (Denmark)
M. Gaspar, Raquel; Murgoci, Agatha
2010-01-01
A convexity adjustment (or convexity correction) in fixed income markets arises when one uses prices of standard (plain vanilla) products plus an adjustment to price nonstandard products. We explain the basic and appealing idea behind the use of convexity adjustments and focus on the situations...
DEFF Research Database (Denmark)
Lauritzen, Niels
Based on undergraduate teaching to students in computer science, economics and mathematics at Aarhus University, this is an elementary introduction to convex sets and convex functions with emphasis on concrete computations and examples. Starting from linear inequalities and Fourier-Motzkin elimination, the theory is developed by introducing polyhedra, the double description method and the simplex algorithm, closed convex subsets, convex functions of one and several variables, ending with a chapter on convex optimization with the Karush-Kuhn-Tucker conditions, duality and an interior point algorithm.
An energy-stable convex splitting for the phase-field crystal equation
Vignal, P.; Dalcin, L.; Brown, D. L.; Collier, N.; Calo, V. M.
2015-01-01
Abstract The phase-field crystal equation, a parabolic, sixth-order and nonlinear partial differential equation, has generated considerable interest as a possible solution to problems arising in molecular dynamics. Nonetheless, solving this equation is not a trivial task, as energy dissipation and mass conservation need to be verified for the numerical solution to be valid. This work addresses these issues, and proposes a novel algorithm that guarantees mass conservation, unconditional energy stability and second-order accuracy in time. Numerical results validating our proofs are presented, and two and three dimensional simulations involving crystal growth are shown, highlighting the robustness of the method. © 2015 Elsevier Ltd.
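The paper's scheme targets the sixth-order phase-field crystal equation itself; as a minimal sketch of the convex-splitting idea it builds on, consider the scalar double-well gradient flow below. The particular splitting E = E_c - E_e and the Newton solver are illustrative assumptions of this sketch, not the authors' algorithm.

```python
# Convex-splitting time stepping for the scalar gradient flow
#   du/dt = -E'(u),   E(u) = (u^2 - 1)^2 / 4   (double-well energy).
# Split E = E_c - E_e with E_c(u) = (u^4 + 1)/4 and E_e(u) = u^2/2, both convex;
# treat E_c implicitly and E_e explicitly:
#   (u_new - u_old)/dt = -(u_new**3 - u_old)
# so u_new solves u_new + dt*u_new**3 = u_old + dt*u_old.

def step(u_old, dt):
    """One convex-splitting step; Newton iteration on g(u) = u + dt*u**3 - rhs."""
    rhs = u_old + dt * u_old
    u = u_old
    for _ in range(50):
        g = u + dt * u**3 - rhs
        u -= g / (1.0 + 3.0 * dt * u**2)   # g'(u) = 1 + 3*dt*u**2 > 0
        if abs(g) < 1e-14:
            break
    return u

def energy(u):
    return (u**2 - 1.0)**2 / 4.0

# Even a very large time step remains energy-stable: E never increases.
u, dt = 2.0, 10.0
energies = [energy(u)]
for _ in range(20):
    u = step(u, dt)
    energies.append(energy(u))

assert all(e1 >= e2 - 1e-12 for e1, e2 in zip(energies, energies[1:]))
```

The unconditional stability seen here is the scalar analogue of the property the paper proves for its second-order-in-time scheme.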
Rockafellar, Ralph Tyrell
2015-01-01
Available for the first time in paperback, R. Tyrrell Rockafellar's classic study presents readers with a coherent branch of nonlinear mathematical analysis that is especially suited to the study of optimization problems. Rockafellar's theory differs from classical analysis in that differentiability assumptions are replaced by convexity assumptions. The topics treated in this volume include: systems of inequalities, the minimum or maximum of a convex function over a convex set, Lagrange multipliers, minimax theorems and duality, as well as basic results about the structure of convex sets and
International Nuclear Information System (INIS)
Quinn, C.A.
1983-01-01
The article deals with spectrographic analysis and the analytical methods based on it. The theory of spectrographic analysis is discussed, as well as the layout of a spectrometer system. The infrared absorption spectrum of a compound is probably its most unique property. The absorption of infrared radiation depends on increasing the energy of vibration and rotation associated with a covalent bond. The infrared region is intrinsically low in energy, so the design of infrared spectrometers is always directed toward maximising energy throughput. The article also considers atomic absorption: flame atomizers, non-flame atomizers and the source of radiation. Under the section on emission spectroscopy, non-electrical energy sources, electrical energy sources and electrical flames are discussed. Digital computers form part of the development of spectrographic instrumentation.
Busemann, Herbert
2008-01-01
This exploration of convex surfaces focuses on extrinsic geometry and applications of the Brunn-Minkowski theory. It also examines intrinsic geometry and the realization of intrinsic metrics. 1958 edition.
Curved VPH gratings for novel spectrographs
Clemens, J. Christopher; O'Donoghue, Darragh; Dunlap, Bart H.
2014-07-01
The introduction of volume phase holographic (VPH) gratings into astronomy over a decade ago opened new possibilities for instrument designers. In this paper we describe an extension of VPH grating technology that will have applications in astronomy and beyond: curved VPH gratings. These devices can disperse light while simultaneously correcting aberrations. We have designed and manufactured two different kinds of convex VPH grating prototypes for use in off-axis reflecting spectrographs. One type functions in transmission and the other in reflection, enabling Offner-style spectrographs with the high-efficiency and low-cost advantages of VPH gratings. We will discuss the design process and the tools required for modelling these gratings along with the recording layout and process steps required to fabricate them. We will present performance data for the first convex VPH grating produced for an astronomical spectrograph.
Klee, Victor; Ziegler, Günter
2003-01-01
"The appearance of Grünbaum's book Convex Polytopes in 1967 was a moment of grace to geometers and combinatorialists. The special spirit of the book is very much alive even in those chapters where the book's immense influence made them quickly obsolete. Some other chapters promise beautiful unexplored land for future research. The appearance of the new edition is going to be another moment of grace. Kaibel, Klee and Ziegler were able to update the convex polytope saga in a clear, accurate, lively, and inspired way." (Gil Kalai, The Hebrew University of Jerusalem) "The original book of Grünbaum has provided the central reference for work in this active area of mathematics for the past 35 years...I first consulted this book as a graduate student in 1967; yet, even today, I am surprised again and again by what I find there. It is an amazingly complete reference for work on this subject up to that time and continues to be a major influence on research to this day." (Louis J. Billera, Cornell University) "The or...
International Nuclear Information System (INIS)
Yaakobi, B.; Burek, A.J.
1983-01-01
The report is arranged in five major sections. Section II describes the measurements of mica and lithium fluoride crystal properties before and after the cylindrical bending required for a von Hamos spectrograph. It also describes the property of mosaic focussing and the measurements of the spatial as well as spectral resolutions of bent crystals. Section III describes the imaging calculations which relate the instrument focussing capability to source misalignment. These calculations demonstrate the necessity to maintain fabrication and alignment precision about equal to the radiation source size if the full potential of the instrument is to be realized. Section IV shows x-ray spectra obtained on the OMEGA 24 laser facility at LLE. The targets used were plastic shells coated with copper on either the outside or the inside surface, germania shells, and krypton-filled glass shells. The data indicate deeper heat penetration into the target surface than predicted by a flux-limited heat transport model. In Section V, we list new spectral lines involving multiple electron excitation, which are observed here for the first time and whose wavelengths are calculated using Hartree-Fock methods.
On Convex Quadratic Approximation
den Hertog, D.; de Klerk, E.; Roos, J.
2000-01-01
In this paper we prove the counterintuitive result that the quadratic least squares approximation of a multivariate convex function in a finite set of points is not necessarily convex, even though it is convex for a univariate convex function. This result has many consequences both for the field of
Scott, Paul
2006-01-01
A "convex" polygon is one with no re-entrant angles. Alternatively one can use the standard convexity definition, asserting that for any two points of the convex polygon, the line segment joining them is contained completely within the polygon. In this article, the author provides a solution to a problem involving convex lattice polygons.
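The two definitions quoted in this record can be tested computationally. The sketch below uses the equivalent same-turning-direction criterion (an assumption of this sketch; for simple polygons it coincides with the segment-containment definition):

```python
def is_convex_polygon(pts):
    """True if the polygon (vertices given in order) has no re-entrant angles.

    Checks that every cross product of consecutive edge vectors has the
    same sign, i.e. the boundary always turns the same way, so no
    interior angle exceeds 180 degrees.
    """
    n = len(pts)
    signs = set()
    for i in range(n):
        ax, ay = pts[i]
        bx, by = pts[(i + 1) % n]
        cx, cy = pts[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) <= 1

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
arrow  = [(0, 0), (2, 1), (0, 2), (1, 1)]   # re-entrant angle at (1, 1)
assert is_convex_polygon(square)
assert not is_convex_polygon(arrow)
```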
Aichholzer, Oswin; Aurenhammer, Franz; Hurtado Díaz, Fernando Alfredo; Ramos, Pedro A.; Urrutia, J.
2009-01-01
We introduce a notion of k-convexity and explore some properties of polygons that have this property. In particular, 2-convex polygons can be recognized in O(n log n) time, and k-convex polygons can be triangulated in O(kn) time.
Colesanti, Andrea; Gronchi, Paolo
2018-01-01
This book presents the proceedings of the international conference Analytic Aspects in Convexity, which was held in Rome in October 2016. It offers a collection of selected articles, written by some of the world's leading experts in the field of Convex Geometry, on recent developments in this area: theory of valuations; geometric inequalities; affine geometry; and curvature measures. The book will be of interest to a broad readership, from those involved in Convex Geometry, to those focusing on Functional Analysis, Harmonic Analysis, Differential Geometry, or PDEs. The book is addressed to PhD students and researchers interested in Convex Geometry and its links to analysis.
van de Vel, MLJ
1993-01-01
Presented in this monograph is the current state-of-the-art in the theory of convex structures. The notion of convexity covered here is considerably broader than the classic one; specifically, it is not restricted to the context of vector spaces. Classical concepts of order-convex sets (Birkhoff) and of geodesically convex sets (Menger) are directly inspired by intuition; they go back to the first half of this century. An axiomatic approach started to develop in the early Fifties. The author became attracted to it in the mid-Seventies, resulting in the present volume, in which graphs appear si
Convexity and Marginal Vectors
van Velzen, S.; Hamers, H.J.M.; Norde, H.W.
2002-01-01
In this paper we construct sets of marginal vectors of a TU game with the property that if the marginal vectors from these sets are core elements, then the game is convex. This approach leads to new upper bounds on the number of marginal vectors needed to characterize convexity. Another result is that
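A TU game is convex exactly when its characteristic function is supermodular, and the marginal vectors mentioned above assign each player its marginal contribution under some entry order. A small sketch (the bitmask coalition encoding and the cardinality-squared example game are assumptions of this sketch):

```python
from itertools import permutations

def is_convex(v, n):
    """Supermodularity check: v(S|T) + v(S&T) >= v(S) + v(T) for all coalitions.
    Coalitions are bitmasks over players 0..n-1; v maps mask -> worth."""
    full = 1 << n
    return all(v[S | T] + v[S & T] >= v[S] + v[T]
               for S in range(full) for T in range(full))

def marginal_vectors(v, n):
    """One marginal vector per entry order: player i receives its marginal
    contribution to the coalition of players that entered before it."""
    vectors = []
    for order in permutations(range(n)):
        S, m = 0, [0.0] * n
        for i in order:
            m[i] = v[S | (1 << i)] - v[S]
            S |= 1 << i
        vectors.append(tuple(m))
    return vectors

# A 3-player convex game (illustrative): v(S) = |S|^2.
n = 3
v = [bin(S).count("1") ** 2 for S in range(1 << n)]
assert is_convex(v, n)
# Every marginal vector is efficient: its coordinates sum to v(N).
assert all(abs(sum(m) - v[(1 << n) - 1]) < 1e-9 for m in marginal_vectors(v, n))
```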
Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.
2008-01-01
In this paper, convex interval games are introduced and some characterizations are given. Some economic situations leading to convex interval games are discussed. The Weber set and the Shapley value are defined for a suitable class of interval games and their relations with the interval core for
DEFF Research Database (Denmark)
Jacob, Riko
We determine the computational complexity of the dynamic convex hull problem in the planar case. We present a data structure that maintains a finite set of n points in the plane under insertion and deletion of points in amortized O(log n) time per operation. The space usage of the data structure is O(n). The data structure supports extreme point queries in a given direction, tangent queries through a given point, and queries for the neighboring points on the convex hull in O(log n) time. The extreme point queries can be used to decide whether or not a given line intersects the convex hull, and the tangent queries to determine whether a given point is inside the convex hull. We give a lower bound on the amortized asymptotic time complexity that matches the performance of this data structure.
Stereotype locally convex spaces
International Nuclear Information System (INIS)
Akbarov, S S
2000-01-01
We give complete proofs of some previously announced results in the theory of stereotype (that is, reflexive in the sense of Pontryagin duality) locally convex spaces. These spaces have important applications in topological algebra and functional analysis
Generalized Convexity and Inequalities
Anderson, G. D.; Vamanamurthy, M. K.; Vuorinen, M.
2007-01-01
Let R+ = (0, ∞) and let M be the family of all mean values of two numbers in R+ (some examples are the arithmetic, geometric, and harmonic means). Given m1, m2 in M, we say that a function f : R+ → R+ is (m1,m2)-convex if f(m1(x,y)) ≤ m2(f(x),f(y)) for all x, y in R+. The usual convexity is the special case when both mean values are arithmetic means. We study the dependence of (m1,m2)-convexity on m1 and m2 and give sufficient conditions for (m1,m2)-convexity of functions defined...
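The defining inequality f(m1(x,y)) ≤ m2(f(x),f(y)) can be probed numerically. A sketch (the sample grid and tolerance are assumptions of this sketch):

```python
import math

def am(x, y): return (x + y) / 2        # arithmetic mean
def gm(x, y): return math.sqrt(x * y)   # geometric mean

def is_m1_m2_convex(f, m1, m2, samples):
    """Check f(m1(x, y)) <= m2(f(x), f(y)) on a grid of sample pairs in R+."""
    return all(f(m1(x, y)) <= m2(f(x), f(y)) + 1e-12
               for x in samples for y in samples)

samples = [0.1 * k for k in range(1, 40)]

# Ordinary convexity is (AM, AM)-convexity:
assert is_m1_m2_convex(math.exp, am, am, samples)
# x -> x**2 is (GM, GM)-convex (with equality): (sqrt(xy))**2 == sqrt(x**2 * y**2)
assert is_m1_m2_convex(lambda x: x * x, gm, gm, samples)
# x -> sqrt(x) is concave, so it fails (AM, AM)-convexity:
assert not is_m1_m2_convex(math.sqrt, am, am, samples)
```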
Improved Emission Spectrographic Facility
International Nuclear Information System (INIS)
Goergen, C.R.; Lethco, A.J.; Hosken, G.B.; Geckeler, D.R.
1980-10-01
The Savannah River Plant's original Emission Spectrographic Laboratory for radioactive samples had been in operation for 25 years. Due to the deteriorated condition and the fire hazard posed by the wooden glove box trains, a project to update the facility was funded. The new laboratory improved efficiency of operation and incorporated numerous safety and contamination control features
DEFF Research Database (Denmark)
Brodal, Gerth Stølfting; Jacob, Rico
2002-01-01
In this paper we determine the computational complexity of the dynamic convex hull problem in the planar case. We present a data structure that maintains a finite set of n points in the plane under insertion and deletion of points in amortized O(log n) time per operation. The space usage of the data structure is O(n). The data structure supports extreme point queries in a given direction, tangent queries through a given point, and queries for the neighboring points on the convex hull in O(log n) time. The extreme point queries can be used to decide whether or not a given line intersects the convex hull, and the tangent queries to determine whether a given point is inside the convex hull. We give a lower bound on the amortized asymptotic time complexity that matches the performance of this data structure.
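The amortized O(log n) structure itself is intricate; as a minimal illustration of the query types it supports, here is a static sketch (Andrew's monotone chain for the hull and a linear scan for the extreme-point query are simplifications of this sketch, not the paper's data structure):

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def extreme_point(hull, direction):
    """Extreme-point query: hull vertex maximizing <vertex, direction>.
    (Linear scan here; the dynamic structure answers this in O(log n).)"""
    dx, dy = direction
    return max(hull, key=lambda p: p[0]*dx + p[1]*dy)

pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (1, 0)]
hull = convex_hull(pts)
assert (1, 1) not in hull                        # interior point removed
assert extreme_point(hull, (1, 1)) == (2, 2)
```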
Hörmander, Lars
1994-01-01
The first two chapters of this book are devoted to convexity in the classical sense, for functions of one and several real variables respectively. This gives a background for the study in the following chapters of related notions which occur in the theory of linear partial differential equations and complex analysis such as (pluri-)subharmonic functions, pseudoconvex sets, and sets which are convex for supports or singular supports with respect to a differential operator. In addition, the convexity conditions which are relevant for local or global existence of holomorphic differential equations are discussed, leading up to Trépreau's theorem on sufficiency of condition (Ψ) for microlocal solvability in the analytic category. At the beginning of the book, no prerequisites are assumed beyond calculus and linear algebra. Later on, basic facts from distribution theory and functional analysis are needed. In a few places, a more extensive background in differential geometry or pseudodiffer...
Indian Academy of Sciences (India)
for all t ∈ [0,1] and all x, y (in the domain of definition of f). ... Proof: (a) is a consequence of the definition. (b) Define conv(S) ... More generally, a set F is said to be a face of the convex ... and bounded, and assume the validity (for a proof, see.
Directory of Open Access Journals (Sweden)
Roger Koenker
2014-09-01
Convex optimization now plays an essential role in many facets of statistics. We briefly survey some recent developments and describe some implementations of these methods in R. Applications of linear and quadratic programming are introduced, including quantile regression, the Huber M-estimator and various penalized regression methods. Applications to additively separable convex problems subject to linear equality and inequality constraints, such as nonparametric density estimation and maximum likelihood estimation of general nonparametric mixture models, are described, as are several cone programming problems. We focus throughout primarily on implementations in the R environment that rely on solution methods linked to R, like MOSEK by the package Rmosek. Code is provided in R to illustrate several of these problems. Other applications are available in the R package REBayes, dealing with empirical Bayes estimation of nonparametric mixture models.
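The survey's implementations are in R; as a language-neutral sketch of one of the listed applications, the block below poses quantile regression as a linear program (the use of scipy.optimize.linprog and the simulated data are assumptions of this sketch, not part of the survey):

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau=0.5):
    """Quantile regression as a linear program:
        minimize  tau * 1'u + (1 - tau) * 1'v
        subject to X b + u - v = y,  u, v >= 0,  b free,
    where u and v are the positive and negative parts of the residuals."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

# Simulated data: y = 2 + 0.5 x + noise; the median fit recovers the line.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 + 0.5 * x + rng.standard_normal(200)
X = np.column_stack([np.ones_like(x), x])
beta = quantile_regression(X, y, tau=0.5)
```

Setting tau to 0.1 or 0.9 in the same program yields the corresponding conditional quantile fits.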
Czech Academy of Sciences Publication Activity Database
Hrubeš, P.; Jukna, S.; Kulikov, A.; Pudlák, Pavel
2010-01-01
Roč. 411, 16-18 (2010), s. 1842-1854 ISSN 0304-3975 R&D Projects: GA AV ČR IAA1019401 Institutional research plan: CEZ:AV0Z10190503 Keywords : boolean formula * complexity measure * combinatorial rectangle * convexity Subject RIV: BA - General Mathematics Impact factor: 0.838, year: 2010 http://www.sciencedirect.com/science/article/pii/S0304397510000885
Subordination by convex functions
Directory of Open Access Journals (Sweden)
Rosihan M. Ali
2006-01-01
For a fixed analytic function g(z) = z + ∑_{n=2}^{∞} g_n z^n defined on the open unit disk and γ < 1, let T_g(γ) denote the class of all analytic functions f(z) = z + ∑_{n=2}^{∞} a_n z^n satisfying ∑_{n=2}^{∞} |a_n g_n| ≤ 1 − γ. For functions in T_g(γ), a subordination result is derived involving the convolution with a normalized convex function. Our result includes as special cases several earlier works.
Convex games versus clan games
Brânzei, R.; Dimitrov, D.A.; Tijs, S.H.
2008-01-01
In this paper we provide characterizations of convex games and total clan games by using properties of their corresponding marginal games. We show that a "dualize and restrict" procedure transforms total clan games with zero worth for the clan into monotonic convex games. Furthermore, each monotonic
Convex Games versus Clan Games
Brânzei, R.; Dimitrov, D.A.; Tijs, S.H.
2006-01-01
In this paper we provide characterizations of convex games and total clan games by using properties of their corresponding marginal games. We show that a "dualize and restrict" procedure transforms total clan games with zero worth for the clan into monotonic convex games. Furthermore, each monotonic
Convexity Adjustments for ATS Models
DEFF Research Database (Denmark)
Murgoci, Agatha; Gaspar, Raquel M.
As a result we classify convexity adjustments into forward adjustments and swaps adjustments. We then focus on affine term structure (ATS) models and, in this context, conjecture that convexity adjustments should be related to affine functionals. In the case of forward adjustments, we show how to obtain exact...
Nested convex bodies are chaseable
N. Bansal (Nikhil); M. Böhm (Martin); M. Eliáš (Marek); G. Koumoutsos (Grigorios); S.W. Umboh (Seeun William)
2018-01-01
In the Convex Body Chasing problem, we are given an initial point v_0 ∈ R^d and an online sequence of n convex bodies F_1, …, F_n. When we receive F_i, we are required to move inside F_i. Our goal is to minimize the total distance traveled. This fundamental online problem was first
Using commercial amateur astronomical spectrographs
Hopkins, Jeffrey L
2014-01-01
Amateur astronomers interested in learning more about astronomical spectroscopy now have the guide they need. It provides detailed information about how to get started inexpensively with low-resolution spectroscopy, and then how to move on to more advanced high-resolution spectroscopy. Uniquely, the instructions concentrate very much on the practical aspects of using commercially-available spectroscopes, rather than simply explaining how spectroscopes work. The book includes a clear explanation of the laboratory theory behind astronomical spectrographs, and goes on to extensively cover the practical application of astronomical spectroscopy in detail. Four popular and reasonably-priced commercially available diffraction grating spectrographs are used as examples. The first is a low-resolution transmission diffraction grating, the Star Analyser spectrograph. The second is an inexpensive fiber optic coupled bench spectrograph that can be used to learn more about spectroscopy. The third is a newcomer, the ALPY ...
θ-convex nonlinear programming problems
International Nuclear Information System (INIS)
Emam, T.
2008-01-01
A class of sets and a class of functions, called θ-convex sets and θ-convex functions, are introduced by relaxing the definitions of convex sets and convex functions through an operator θ acting on the sets and on the domains of definition of the functions. The optimality results for θ-convex programming problems are established.
A class of free locally convex spaces
International Nuclear Information System (INIS)
Sipacheva, O V
2003-01-01
Stratifiable spaces are a natural generalization of metrizable spaces for which Dugundji's theorem holds. It is proved that the free locally convex space of a stratifiable space is stratifiable. This means, in particular, that the space of finitely supported probability measures on a stratifiable space is a retract of a locally convex space, and that each stratifiable convex subset of a locally convex space is a retract of a locally convex space
Princeton Cyclotron QDDD spectrograph system
International Nuclear Information System (INIS)
Kouzes, R.T.
1985-01-01
A review of experiments involving the Princeton Quadrupole-Dipole-Dipole-Dipole (QDDD) spectrograph is given. The QDDD is a high resolution, large solid angle device which is combined with the azimuthally varying field (AVF) cyclotron. Some reactions involving 3He beams are discussed
Spectrographic analysis of plutonium (1960)
International Nuclear Information System (INIS)
Artaud, J.; Chaput, M.; Robichet, J.
1960-01-01
Various possibilities for the spectrographic determination of impurities in plutonium are considered. The application of the 'copper spark' method, of sparking on graphite and of fractional distillation in the arc are described and discussed in some detail (apparatus, accessories, results obtained). (author)
Geometry of isotropic convex bodies
Brazitikos, Silouanos; Valettas, Petros; Vritsiou, Beatrice-Helen
2014-01-01
The study of high-dimensional convex bodies from a geometric and analytic point of view, with an emphasis on the dependence of various parameters on the dimension stands at the intersection of classical convex geometry and the local theory of Banach spaces. It is also closely linked to many other fields, such as probability theory, partial differential equations, Riemannian geometry, harmonic analysis and combinatorics. It is now understood that the convexity assumption forces most of the volume of a high-dimensional convex body to be concentrated in some canonical way and the main question is whether, under some natural normalization, the answer to many fundamental questions should be independent of the dimension. The aim of this book is to introduce a number of well-known questions regarding the distribution of volume in high-dimensional convex bodies, which are exactly of this nature: among them are the slicing problem, the thin shell conjecture and the Kannan-Lovász-Simonovits conjecture. This book prov...
A new convexity measure for polygons.
Zunic, Jovisa; Rosin, Paul L
2004-07-01
Convexity estimators are commonly used in the analysis of shape. In this paper, we define and evaluate a new convexity measure for planar regions bounded by polygons. The new convexity measure can be understood as a "boundary-based" measure and in accordance with this it is more sensitive to measured boundary defects than the so-called "area-based" convexity measures. When compared with the convexity measure defined as the ratio between the Euclidean perimeter of the convex hull of the measured shape and the Euclidean perimeter of the measured shape, the new convexity measure also shows some advantages, particularly for shapes with holes. The new convexity measure has the following desirable properties: 1) the estimated convexity is always a number from (0, 1], 2) the estimated convexity is 1 if and only if the measured shape is convex, 3) there are shapes whose estimated convexity is arbitrarily close to 0, 4) the new convexity measure is invariant under similarity transformations, and 5) there is a simple and fast procedure for computing the new convexity measure.
NP-completeness of weakly convex and convex dominating set decision problems
Directory of Open Access Journals (Sweden)
Joanna Raczek
2004-01-01
The convex domination number and the weakly convex domination number are new domination parameters. In this paper we show that the decision problems of convex and weakly convex dominating sets are NP-complete for bipartite and split graphs. Using a modified version of Warshall algorithm we can verify in polynomial time whether a given subset of vertices of a graph is convex or weakly convex.
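The Warshall-based verification can be sketched as follows: a vertex set is convex when every vertex lying on any shortest path between two of its members also belongs to the set. Unit edge weights and the dictionary graph encoding are assumptions of this sketch, and the paper's modified algorithm may differ in detail.

```python
def floyd_warshall(adj, n):
    """All-pairs shortest-path distances by the Floyd-Warshall recurrence."""
    INF = float("inf")
    d = [[0 if i == j else (1 if j in adj[i] else INF) for j in range(n)]
         for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def is_convex_in_graph(adj, n, S):
    """S is convex iff every vertex w on some shortest u-v path with u, v in S
    belongs to S; w lies on such a path exactly when
    d[u][w] + d[w][v] == d[u][v]."""
    d = floyd_warshall(adj, n)
    return all(w in S
               for u in S for v in S for w in range(n)
               if d[u][w] + d[w][v] == d[u][v])

# Path graph 0-1-2-3: {0, 2} is not convex (vertex 1 lies between them),
# while {1, 2} is.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
assert not is_convex_in_graph(adj, 4, {0, 2})
assert is_convex_in_graph(adj, 4, {1, 2})
```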
Nonsmooth Mechanics and Convex Optimization
Kanno, Yoshihiro
2011-01-01
"This book concerns matter that is intrinsically difficult: convex optimization, complementarity and duality, nonsmooth analysis, linear and nonlinear programming, etc. The author has skillfully introduced these and many more concepts, and woven them into a seamless whole by retaining an easy and consistent style throughout. The book is not all theory: There are many real-life applications in structural engineering, cable networks, frictional contact problems, and plasticity! I recommend it to any reader who desires a modern, authoritative account of nonsmooth mechanics and convex optimiz
Quantum information and convex optimization
International Nuclear Information System (INIS)
Reimpell, Michael
2008-01-01
This thesis is concerned with convex optimization problems in quantum information theory. It features an iterative algorithm for optimal quantum error correcting codes, a postprocessing method for incomplete tomography data, a method to estimate the amount of entanglement in witness experiments, and it gives necessary and sufficient criteria for the existence of retrodiction strategies for a generalized mean king problem. (orig.)
Czech Academy of Sciences Publication Activity Database
Guirao, A. J.; Hájek, Petr Pavel
2007-01-01
Roč. 135, č. 10 (2007), s. 3233-3240 ISSN 0002-9939 R&D Projects: GA AV ČR IAA100190502 Institutional research plan: CEZ:AV0Z10190503 Keywords : Banach spaces * moduli of convexity * uniformly rotund norms Subject RIV: BA - General Mathematics Impact factor: 0.520, year: 2007
Quantum information and convex optimization
Energy Technology Data Exchange (ETDEWEB)
Reimpell, Michael
2008-07-01
This thesis is concerned with convex optimization problems in quantum information theory. It features an iterative algorithm for optimal quantum error correcting codes, a postprocessing method for incomplete tomography data, a method to estimate the amount of entanglement in witness experiments, and it gives necessary and sufficient criteria for the existence of retrodiction strategies for a generalized mean king problem. (orig.)
Convexity of the effective potential
International Nuclear Information System (INIS)
Haymaker, R.W.; Perez-Mercader, J.
1978-01-01
The effective potential V(φ) in field theories is a convex function of φ: V(λφ₁ + (1 − λ)φ₂) ≤ λV(φ₁) + (1 − λ)V(φ₂) for 0 ≤ λ ≤ 1 and all φ₁, φ₂. Equivalently, a linear interpolation of V(φ) is always larger than or equal to V(φ). There are numerous examples in the tree approximation and in perturbation theory for which this is not the case, the most notorious example being the double dip potential. More complete solutions may or may not show this property automatically. However, a non-convex V(φ) simply indicates that an unstable vacuum state was used in implementing the definition of V(φ). A strict definition will instruct one to replace V(φ) with its linear interpolation in such a way as to make it convex. (Alternatively one can just as well take the view that V(φ) is undefined in these domains.) In this note, attention is called to a very simple argument for convexity based on a construction described by H. Callen in his classic book Thermodynamics
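The convexification by linear interpolation described above can be illustrated numerically. The sketch below (an independent illustration, not taken from the letter) computes the lower convex envelope of a sampled double-dip potential V(φ) = (φ² − 1)², replacing the non-convex stretch between the two minima by a chord:

```python
import numpy as np

def convex_envelope(xs, ys):
    """Lower convex envelope of sampled points (xs strictly ascending): replaces
    non-convex stretches of the curve by chords (Maxwell-type construction)."""
    hull = []  # indices of lower-hull vertices
    for i in range(len(xs)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # pop i1 if it lies on or above the chord from i0 to i
            cross = (xs[i1]-xs[i0])*(ys[i]-ys[i0]) - (ys[i1]-ys[i0])*(xs[i]-xs[i0])
            if cross <= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(xs, xs[hull], ys[hull])

phi = np.linspace(-2, 2, 401)
V = (phi**2 - 1)**2            # double-dip ("Mexican hat") potential
Venv = convex_envelope(phi, V)
print(Venv[200])  # at phi = 0 the envelope is ~0, not V(0) = 1
```

Between the two degenerate minima at φ = ±1 the envelope is flat at 0, exactly the behavior the letter attributes to a strict definition of V(φ).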
Computing farthest neighbors on a convex polytope
Cheong, O.; Shin, C.S.; Vigneron, A.
2002-01-01
Let N be a set of n points in convex position in R³. The farthest-point Voronoi diagram of N partitions R³ into n convex cells. We consider the intersection G(N) of the diagram with the boundary of the convex hull of N. We give an algorithm that computes an implicit representation of G(N) in
Spectrographic analysis of stainless steels
International Nuclear Information System (INIS)
Sabato, S.F.; Lordello, A.R.
1984-01-01
Two spectrographic solution techniques, 'Porous Cup' and 'Vacuum Cup', were investigated in order to determine the minor constituents (Cr, Ni, Mo, Mn, Cu and V) of stainless steels. Iron and cobalt were tested as internal standards. The precision varied from 4 to 11% for both spectrographic techniques when cobalt was used as the internal standard. Certified standards from the National Bureau of Standards and the Instituto de Pesquisas Tecnologicas were analysed to verify the accuracy of both techniques. The best accuracy was obtained with the Vacuum Cup technique. (Author) [pt
A noncommutative convexity in C*-bimodules
Directory of Open Access Journals (Sweden)
Mohsen Kian
2017-02-01
Full Text Available Let A and B be C*-algebras. We consider a noncommutative convexity in Hilbert A-B-bimodules, called A-B-convexity, as a generalization of C*-convexity in C*-algebras. We show that if X is a Hilbert A-B-bimodule, then Mn(X) is a Hilbert Mn(A)-Mn(B)-bimodule, and apply this to show that the closed unit ball of every Hilbert A-B-bimodule is A-B-convex. Some properties of this kind of convexity and various examples are given.
Quantum logics and convex geometry
International Nuclear Information System (INIS)
Bunce, L.J.; Wright, J.D.M.
1985-01-01
The main result is a representation theorem which shows that, for a large class of quantum logics, a quantum logic, Q, is isomorphic to the lattice of projective faces in a suitable convex set K. As an application we extend our earlier results, which, subject to countability conditions, gave a geometric characterization of those quantum logics which are isomorphic to the projection lattice of a von Neumann algebra or a JBW-algebra. (orig.)
Learning Convex Inference of Marginals
Domke, Justin
2012-01-01
Graphical models trained using maximum likelihood are a common tool for probabilistic inference of marginal distributions. However, this approach suffers difficulties when either the inference process or the model is approximate. In this paper, the inference process is first defined to be the minimization of a convex function, inspired by free energy approximations. Learning is then done directly in terms of the performance of the inference process at univariate marginal prediction. The main ...
Diameter 2 properties and convexity
Czech Academy of Sciences Publication Activity Database
Abrahamsen, T. A.; Hájek, Petr Pavel; Nygaard, O.; Talponen, J.; Troyanski, S.
2016-01-01
Roč. 232, č. 3 (2016), s. 227-242 ISSN 0039-3223 R&D Projects: GA ČR GA16-07378S Institutional support: RVO:67985840 Keywords : diameter 2 property * midpoint locally uniformly rotund * Daugavet property Subject RIV: BA - General Mathematics Impact factor: 0.535, year: 2016 https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/studia-mathematica/all/232/3/91534/diameter-2-properties-and-convexity
Spectrographic analysis of waste waters
International Nuclear Information System (INIS)
Alvarez Alduan, F.; Capdevila, C.
1979-01-01
The influence of sodium and calcium, up to a maximum concentration of 1000 mg/l Na and 300 mg/l Ca, on the spectrographic determination of Cr, Cu, Fe, Mn and Pb in waste waters using graphite spark excitation has been studied. In order to eliminate this influence, each of the elements Ba, Cs, In, La, Li, Sr and Ti, as well as a mixture containing 5% Li-50% Ti, has been tested as a spectrochemical buffer. This mixture makes it possible to obtain an accuracy better than 25%. Sodium and calcium enhance the line intensities of impurities when graphite or gold electrodes are used, but they produce the opposite effect if copper or silver electrodes are used. (Author) 1 refs
Finite dimensional convexity and optimization
Florenzano, Monique
2001-01-01
The primary aim of this book is to present notions of convex analysis which constitute the basic underlying structure of argumentation in economic theory and which are common to optimization problems encountered in many applications. The intended readers are graduate students, and specialists of mathematical programming whose research fields are applied mathematics and economics. The text consists of a systematic development in eight chapters, with guided exercises containing sometimes significant and useful additional results. The book is appropriate as a class text, or for self-study.
Use of Convexity in Ostomy Care
Salvadalena, Ginger; Pridham, Sue; Droste, Werner; McNichol, Laurie; Gray, Mikel
2017-01-01
Ostomy skin barriers that incorporate a convexity feature have been available in the marketplace for decades, but limited resources are available to guide clinicians in selection and use of convex products. Given the widespread use of convexity, and the need to provide practical guidelines for appropriate use of pouching systems with convex features, an international consensus panel was convened to provide consensus-based guidance for this aspect of ostomy practice. Panelists were provided with a summary of relevant literature in advance of the meeting; these articles were used to generate and reach consensus on 26 statements during a 1-day meeting. Consensus was achieved when 80% of panelists agreed on a statement using an anonymous electronic response system. The 26 statements provide guidance for convex product characteristics, patient assessment, convexity use, and outcomes. PMID:28002174
Reconstruction of convex bodies from moments
DEFF Research Database (Denmark)
Hörrmann, Julia; Kousholt, Astrid
We investigate how much information about a convex body can be retrieved from a finite number of its geometric moments. We give a sufficient condition for a convex body to be uniquely determined by a finite number of its geometric moments, and we show that among all convex bodies, those which … algorithm that approximates a convex body using a finite number of its Legendre moments. The consistency of the algorithm is established using the stability result for Legendre moments. When only noisy measurements of Legendre moments are available, the consistency of the algorithm is established under…
Entropy coherent and entropy convex measures of risk
Laeven, R.J.A.; Stadje, M.
2013-01-01
We introduce two subclasses of convex measures of risk, referred to as entropy coherent and entropy convex measures of risk. Entropy coherent and entropy convex measures of risk are special cases of φ-coherent and φ-convex measures of risk. Contrary to the classical use of coherent and convex
Pluripotential theory and convex bodies
Bayraktar, T.; Bloom, T.; Levenberg, N.
2018-03-01
A seminal paper by Berman and Boucksom exploited ideas from complex geometry to analyze the asymptotics of spaces of holomorphic sections of tensor powers of certain line bundles L over compact, complex manifolds as the power grows. This yielded results on weighted polynomial spaces in weighted pluripotential theory in C^d. Here, motivated by a recent paper by the first author on random sparse polynomials, we work in the setting of weighted pluripotential theory arising from polynomials associated to a convex body in (R^+)^d. These classes of polynomials need not occur as sections of tensor powers of a line bundle L over a compact, complex manifold. We follow the approach of Berman and Boucksom to obtain analogous results. Bibliography: 16 titles.
Convex analysis and global optimization
Tuy, Hoang
2016-01-01
This book presents state-of-the-art results and methodologies in modern global optimization, and has been a staple reference for researchers, engineers, advanced students (also in applied mathematics), and practitioners in various fields of engineering. The second edition has been brought up to date and continues to develop a coherent and rigorous theory of deterministic global optimization, highlighting the essential role of convex analysis. The text has been revised and expanded to meet the needs of research, education, and applications for many years to come. Updates for this new edition include: · Discussion of modern approaches to minimax, fixed point, and equilibrium theorems, and to nonconvex optimization; · Increased focus on dealing more efficiently with ill-posed problems of global optimization, particularly those with hard constraints;
Characterizing Convexity of Games using Marginal Vectors
van Velzen, S.; Hamers, H.J.M.; Norde, H.W.
2003-01-01
In this paper we study the relation between convexity of TU games and marginal vectors. We show that if specific marginal vectors are core elements, then the game is convex. We characterize sets of marginal vectors satisfying this property, and we derive the formula for the minimum number of marginal vectors
Convex trace functions of several variables
DEFF Research Database (Denmark)
Hansen, Frank
2002-01-01
We prove that the function (x1,...,xk) ↦ Tr(f(x1,...,xk)), defined on k-tuples of symmetric matrices of order (n1,...,nk) in the domain of f, is convex for any convex function f of k variables. The matrix f(x1,...,xk) is defined by the functional calculus for functions of several variables, and it ...
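For the one-variable case (k = 1) the statement is easy to check numerically: the sketch below (an illustration of the special case only, not of the several-variable functional calculus) verifies Jensen's inequality for A ↦ Tr f(A) on random symmetric matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_f(A, f):
    """Tr f(A) for symmetric A via the functional calculus (eigenvalue decomposition)."""
    w = np.linalg.eigvalsh(A)
    return float(np.sum(f(w)))

def rand_sym(n):
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2

f = lambda t: t**4  # a convex scalar function
A, B = rand_sym(5), rand_sym(5)
for lam in np.linspace(0, 1, 11):
    lhs = trace_f(lam*A + (1-lam)*B, f)
    rhs = lam*trace_f(A, f) + (1-lam)*trace_f(B, f)
    assert lhs <= rhs + 1e-9  # convexity of the trace function
print("trace convexity verified on random symmetric matrices")
```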
Differential analysis of matrix convex functions II
DEFF Research Database (Denmark)
Hansen, Frank; Tomiyama, Jun
2009-01-01
We continue the analysis in [F. Hansen, and J. Tomiyama, Differential analysis of matrix convex functions. Linear Algebra Appl., 420:102--116, 2007] of matrix convex functions of a fixed order defined in a real interval by differential methods as opposed to the characterization in terms of divided...
Strictly convex functions on complete Finsler manifolds
Indian Academy of Sciences (India)
convex functions on the metric structures of complete Finsler manifolds. More precisely we discuss ... map exp_p at some point p ∈ M (and hence at every point on M) is defined on the whole tangent space M_p to M at ... The influence of the existence of convex functions on the metric and topology of underlying manifolds has ...
Introduction to Convex and Quasiconvex Analysis
J.B.G. Frenk (Hans); G. Kassay
2004-01-01
In the first chapter of this book the basic results within convex and quasiconvex analysis are presented. In Section 2 we consider in detail the algebraic and topological properties of convex sets within Rn together with their primal and dual representations. In Section 3 we apply the
Convexity of oligopoly games without transferable technologies
Driessen, Theo; Meinhardt, Holger I.
2005-01-01
We present sufficient conditions involving the inverse demand function and the cost functions to establish the convexity of oligopoly TU-games without transferable technologies. For convex TU-games it is well known that the core is relatively large and that it is generically nonempty. The former
Convex bodies with many elliptic sections
Arelio, Isaac; Montejano, Luis
2014-01-01
We show in this paper that two normal elliptic sections through every point of the boundary of a smooth convex body essentially characterize an ellipsoid, and furthermore, that four different pairwise non-tangent elliptic sections through every point of the $C^2$-differentiable boundary of a convex body also essentially characterize an ellipsoid.
Generalized convexity, generalized monotonicity recent results
Martinez-Legaz, Juan-Enrique; Volle, Michel
1998-01-01
A function is convex if its epigraph is convex. This geometrical structure has very strong implications in terms of continuity and differentiability. Separation theorems lead to optimality conditions and duality for convex problems. A function is quasiconvex if its lower level sets are convex. Here again, the geometrical structure of the level sets implies some continuity and differentiability properties for quasiconvex functions. Optimality conditions and duality can be derived for optimization problems involving such functions as well. Over a period of about fifty years, quasiconvex and other generalized convex functions have been considered in a variety of fields including economics, management science, engineering, probability and applied sciences in accordance with the needs of particular applications. During the last twenty-five years, an increase of research activities in this field has been witnessed. More recently generalized monotonicity of maps has been studied. It relates to generalized conve...
Two generalizations of column-convex polygons
International Nuclear Information System (INIS)
Feretic, Svjetlan; Guttmann, Anthony J
2009-01-01
Column-convex polygons were first counted by area several decades ago, and the result was found to be a simple, rational, generating function. In this work we generalize that result. Let a p-column polyomino be a polyomino whose columns can have 1, 2, ..., p connected components. Then column-convex polygons are equivalent to 1-convex polyominoes. The area generating function of even the simplest generalization, namely 2-column polyominoes, is unlikely to be solvable. We therefore define two classes of polyominoes which interpolate between column-convex polygons and 2-column polyominoes. We derive the area generating functions of those two classes, using extensions of existing algorithms. The growth constants of both classes are greater than the growth constant of column-convex polyominoes. Rather tight lower bounds on the growth constants complement a comprehensive asymptotic analysis.
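The defining property of a p-column polyomino (each column having at most p connected components of cells) is straightforward to test on a cell set. A small sketch, independent of the paper's generating-function machinery:

```python
def column_components(cells, x):
    """Number of connected vertical runs of cells in column x of a polyomino."""
    ys = sorted(y for (cx, y) in cells if cx == x)
    return sum(1 for i, y in enumerate(ys) if i == 0 or y != ys[i-1] + 1)

def max_column_components(cells):
    """Smallest p such that the polyomino is a p-column polyomino (1 = column-convex)."""
    return max(column_components(cells, x) for x in {c[0] for c in cells})

L_shape = {(0, 0), (0, 1), (0, 2), (1, 0)}        # every column is a single run
U_shape = {(0, 0), (0, 1), (1, 0), (2, 0), (2, 1)}  # still one run per column
split   = {(0, 0), (0, 2), (1, 0), (1, 1), (1, 2)}  # column 0 has two runs
print(max_column_components(L_shape))  # 1 -> column-convex
print(max_column_components(split))    # 2 -> a 2-column polyomino
```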
NRES: The Network of Robotic Echelle Spectrographs
Siverd, Robert; Brown, Tim; Henderson, Todd; Hygelund, John; Barnes, Stuart; de Vera, Jon; Eastman, Jason; Kirby, Annie; Smith, Cary; Taylor, Brook; Tufts, Joseph; van Eyken, Julian
2018-01-01
Las Cumbres Observatory (LCO) is building the Network of Robotic Echelle Spectrographs (NRES), which will consist of four (up to six in the future) identical, optical (390 - 860 nm) high-precision spectrographs, each fiber-fed simultaneously by up to two 1-meter telescopes and a Thorium-Argon calibration source. We plan to install one at up to 6 observatory sites in the Northern and Southern hemispheres, creating a single, globally-distributed, autonomous spectrograph facility using up to ten 1-m telescopes. Simulations suggest we will achieve long-term radial velocity precision of 3 m/s in less than an hour for stars brighter than V = 11 or 12 once the system reaches full capability. Acting in concert, these four spectrographs will provide a new, unique facility for stellar characterization and precise radial velocities.Following a few months of on-sky evaluation at our BPL test facility, the first spectrograph unit was shipped to CTIO in late 2016 and installed in March 2017. After several more months of additional testing and commissioning, regular science operations began with this node in September 2017. The second NRES spectrograph was installed at McDonald Observatory in September 2017 and released to the network after its own brief commissioning period, extending spectroscopic capability to the Northern hemisphere. The third NRES spectrograph was installed at SAAO in November 2017 and released to our science community just before year's end. The fourth NRES unit shipped in October and is currently en route to Wise Observatory in Israel with an expected release to the science community in early 2018.We will briefly overview the LCO telescope network, the NRES spectrograph design, the advantages it provides, and development challenges we encountered along the way. We will further discuss real-world performance from our first three units, initial science results, and the ongoing software development effort needed to automate such a facility for a wide array of
Alpha-Concave Hull, a Generalization of Convex Hull
Asaeedi, Saeed; Didehvar, Farzad; Mohades, Ali
2013-01-01
Bounding hull, such as convex hull, concave hull, alpha shapes etc. has vast applications in different areas especially in computational geometry. Alpha shape and concave hull are generalizations of convex hull. Unlike the convex hull, they construct non-convex enclosure on a set of points. In this paper, we introduce another generalization of convex hull, named alpha-concave hull, and compare this concept with convex hull and alpha shape. We show that the alpha-concave hull is also a general...
Duality and calculus of convex objects (theory and applications)
International Nuclear Information System (INIS)
Brinkhuis, Ya; Tikhomirov, V M
2007-01-01
A new approach to convex calculus is presented, which allows one to treat from a single point of view duality and calculus for various convex objects. This approach is based on the possibility of associating with each convex object (a convex set or a convex function) a certain convex cone without loss of information about the object. From the duality theorem for cones duality theorems for other convex objects are deduced as consequences. The theme 'Duality formulae and the calculus of convex objects' is exhausted (from a certain precisely formulated point of view). Bibliography: 5 titles.
Convex sets in probabilistic normed spaces
International Nuclear Information System (INIS)
Aghajani, Asadollah; Nourouzi, Kourosh
2008-01-01
In this paper we obtain some results on convexity in a probabilistic normed space. We also investigate the concept of CSN-closedness and CSN-compactness in a probabilistic normed space and generalize the corresponding results of normed spaces
Designing Camera Networks by Convex Quadratic Programming
Ghanem, Bernard; Wonka, Peter; Cao, Yuanhao
2015-01-01
be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution
ON THE GENERALIZED CONVEXITY AND CONCAVITY
Directory of Open Access Journals (Sweden)
Bhayo B.
2015-11-01
Full Text Available A function ƒ : R⁺ → R⁺ is (m1, m2)-convex (concave) if ƒ(m1(x, y)) ≤ (≥) m2(ƒ(x), ƒ(y)) for all x, y ∈ R⁺ = (0, ∞), where m1 and m2 are two mean functions. Anderson et al. [1] studied the dependence of (m1, m2)-convexity (concavity) on m1 and m2 and gave sufficient conditions for the (m1, m2)-convexity and concavity of a function defined by a Maclaurin series. In this paper, we make a contribution to the topic and study the (m1, m2)-convexity and concavity of a function where m1 and m2 are the identric mean and the Alzer mean. As well, we prove a conjecture posed by Bruce Ebanks in [2].
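The (m1, m2)-convexity condition in the definition above can be probed numerically for concrete mean pairs. The sketch below uses the arithmetic and geometric means as an illustration (the paper itself treats the identric and Alzer means, which are not implemented here):

```python
import math

A = lambda x, y: (x + y) / 2        # arithmetic mean
G = lambda x, y: math.sqrt(x * y)   # geometric mean

def is_m1_m2_convex(f, m1, m2, samples):
    """Numerically test f(m1(x, y)) <= m2(f(x), f(y)) on a grid of sample pairs."""
    return all(f(m1(x, y)) <= m2(f(x), f(y)) + 1e-12
               for x in samples for y in samples)

samples = [0.1 * k for k in range(1, 50)]
print(is_m1_m2_convex(math.cosh, A, G, samples))  # True: cosh is (A, G)-convex
print(is_m1_m2_convex(math.sqrt, A, G, samples))  # False: sqrt is (A, G)-concave
```

The first result follows from cosh x cosh y = [cosh(x+y) + cosh(x−y)]/2 ≥ cosh²((x+y)/2); the second from the AM-GM inequality.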
On convexity and Schoenberg's variation diminishing splines
International Nuclear Information System (INIS)
Feng, Yuyu; Kozak, J.
1992-11-01
In the paper we characterize a convex function by the monotonicity of a particular variation diminishing spline sequence. The result extends the property known for the Bernstein polynomial sequence. (author). 4 refs
Recent characterizations of generalized convexity in cooperative game theory
Energy Technology Data Exchange (ETDEWEB)
Driessen, T.
1994-12-31
The notion of convexity for a real-valued function on the power set of the finite set N (the so-called cooperative game with player set N) is defined as in other mathematical fields. The study of convexity plays an important role within the field of cooperative game theory because the application of the solution part of game theory to convex games provides elegant results for the solution concepts involved. Especially, the well-known solution concept called the core is, for convex games, very well characterized. The current paper focuses on a notion of generalized convexity, called k-convexity, for cooperative n-person games. Due to very recent characterizations of convexity for cooperative games, the goal is to provide similar new characterizations of k-convexity. The main characterization states that for the k-convexity of an n-person game it is both necessary and sufficient that half of all the so-called marginal worth vectors belong to the core of the game. Here it is taken into account whether a marginal worth vector corresponds to an even or odd ordering of k elements of the n-person player set N. Another characterization of k-convexity is presented in terms of a so-called finite min-modular decomposition. That is, some specific cover game of a k-convex game can be decomposed as the minimum of a finite number of modular (or additive) games. Finally it is established that the k-convexity of a game can be characterized in terms of the second-order partial derivatives of the so-called multilinear extension of the game.
Hermitian harmonic maps into convex balls
International Nuclear Information System (INIS)
Li Zhenyang; Xi Zhang
2004-07-01
In this paper, we consider Hermitian harmonic maps from Hermitian manifolds into convex balls. We prove that there exist no non-trivial Hermitian harmonic maps from closed Hermitian manifolds into convex balls, and we use the heat flow method to solve the Dirichlet problem for Hermitian harmonic maps when the domain is a compact Hermitian manifold with non-empty boundary. The case where the domain manifold is complete (noncompact) is also studied. (author)
Counting convex polygons in planar point sets
Mitchell, J.S.B.; Rote, G.; Sundaram, Gopalakrishnan; Woeginger, G.J.
1995-01-01
Given a set S of n points in the plane, we compute in time O(n³) the total number of convex polygons whose vertices are a subset of S. We give an O(m · n³) algorithm for computing the number of convex k-gons with vertices in S, for all values k = 3,…, m; previously known bounds were exponential
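The quantity being counted is easy to pin down by brute force on tiny instances (exponential enumeration over subsets, not the paper's O(n³) algorithm): a subset forms a convex polygon exactly when all of its points are vertices of its convex hull.

```python
from itertools import combinations

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def hull(points):
    """Monotone-chain convex hull; collinear points are dropped."""
    pts = sorted(points)
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def count_convex_polygons(S):
    """Brute-force count of subsets of S in (strictly) convex position, size >= 3."""
    return sum(1 for k in range(3, len(S) + 1)
                 for sub in combinations(S, k)
                 if len(hull(sub)) == k)

S = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]  # unit square plus its centre
print(count_convex_polygons(S))  # 9
```

For this instance: 4 corner triangles, the square itself, and 4 triangles pairing the centre with two adjacent corners (the two diagonal pairs are collinear with the centre and are excluded).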
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
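The under-estimation that motivates the thesis is visible already in the scalar ℓ1 proximal problem, whose closed-form solution is the soft-threshold. A minimal sketch (standard ℓ1 denoising with assumed toy data, not the thesis's parameterized non-convex regularizer):

```python
import numpy as np

def soft_threshold(y, lam):
    """Closed-form minimizer of (1/2)*(y - x)**2 + lam*|x| (the l1 proximal operator)."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

rng = np.random.default_rng(1)
x_true = np.zeros(50)
x_true[[5, 17, 33]] = [4.0, -3.0, 5.0]        # sparse ground truth
y = x_true + 0.3 * rng.standard_normal(50)    # noisy observation
x_hat = soft_threshold(y, lam=1.0)

print(np.count_nonzero(x_hat))  # only the few large entries survive
# Note the systematic bias: each surviving amplitude is shrunk by about lam,
# which is exactly the under-estimation that motivates non-convex regularizers.
```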
Application of charge coupled devices as spatially-resolved detectors for X-ray spectrograph
Energy Technology Data Exchange (ETDEWEB)
Attelan-Langlet, S; Etlicher, B [Ecole Polytechnique, Palaiseau (France); Mishenskij, V O; Papazyan, Yu V; Smirnov, V P; Volkov, G S; Zajtsev, V I [Inst. for Thermonuclear and Innovation Investigations, Troitsk (Russian Federation)
1997-12-31
An X-ray crystal spectrograph which contains a CCD linear array as the position-sensitive detector is described. Radiation detection is performed directly on the CCD. The spectrograph has a sensitivity limit of about 2 J/(A.ster), a spectral resolution of about 1000 and a dynamic range of 100-120. The device operates on-line with an IBM-PC based control system. Software provides all data acquisition and treatment. Output spectra are presented in absolute units. The device was used during composite Z-pinch experiments at the pulse-power installations "Angara-5-1" (TRINITI, Troitsk, Russia) and "GAEL" (Ecole Polytechnique, Palaiseau, France). Currently the spectrograph is included in the set of diagnostics of the "Angara-5-1" facility. Some of the spectra obtained are presented and discussed. (author). 4 figs., 9 refs.
Entropy Coherent and Entropy Convex Measures of Risk
Laeven, R.J.A.; Stadje, M.A.
2011-01-01
We introduce two subclasses of convex measures of risk, referred to as entropy coherent and entropy convex measures of risk. We prove that convex, entropy convex and entropy coherent measures of risk emerge as certainty equivalents under variational, homothetic and multiple priors preferences,
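A canonical example of an entropy convex risk measure is the entropic risk measure itself, which interpolates between the expected loss and the worst case. A numerical sketch (standard definition, with an assumed uniform distribution over finitely many scenarios):

```python
import numpy as np

def entropic_risk(X, gamma):
    """Entropic risk rho(X) = (1/gamma) * log E[exp(-gamma * X)], computed with a
    log-sum-exp for numerical stability; scenarios in X are equally likely."""
    z = -gamma * np.asarray(X, dtype=float)
    m = z.max()
    return (m + np.log(np.mean(np.exp(z - m)))) / gamma

X = np.array([1.0, 2.0, -0.5, 3.0])  # scenario payoffs (negative value = loss)
for g in (0.01, 1.0, 10.0):
    print(g, entropic_risk(X, g))
# As gamma -> 0 the value approaches -E[X]; as gamma grows it approaches the
# worst-case loss max(-X) = 0.5, and it is nondecreasing in gamma throughout.
```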
On Hadamard-Type Inequalities Involving Several Kinds of Convexity
Directory of Open Access Journals (Sweden)
Dragomir SeverS
2010-01-01
Full Text Available We not only give extensions of the results of Gill et al. (1997) for log-convex functions but also obtain some new Hadamard-type inequalities for log-convex, -convex, and -convex functions.
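The Hermite-Hadamard inequality underlying these results states that for convex f on [a, b], f((a+b)/2) ≤ (1/(b−a))∫ₐᵇ f ≤ (f(a)+f(b))/2. A quick numerical check (an illustration of the classical inequality only, not of the paper's extensions):

```python
import math

def hermite_hadamard_check(f, a, b, n=20000):
    """Check f((a+b)/2) <= (1/(b-a)) * integral_a^b f <= (f(a)+f(b))/2 for a
    convex f, approximating the integral by the midpoint rule."""
    h = (b - a) / n
    avg = sum(f(a + (i + 0.5) * h) for i in range(n)) * h / (b - a)
    return f((a + b) / 2) <= avg + 1e-9 and avg <= (f(a) + f(b)) / 2 + 1e-9

print(hermite_hadamard_check(math.exp, 0.0, 2.0))          # True
print(hermite_hadamard_check(lambda x: x * x, -1.0, 3.0))  # True
```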
Sky Subtraction with Fiber-Fed Spectrograph
Rodrigues, Myriam
2017-09-01
"Historically, fiber-fed spectrographs had been deemed inadequate for the observation of faint targets, mainly because of the difficulty of achieving high accuracy in the sky subtraction. The impossibility of sampling the sky in the immediate vicinity of the target in fiber instruments has led to a commonly held view that a multi-object fibre spectrograph cannot achieve sky subtraction accurate to better than 1%, contrary to its slit counterparts. The next generation multi-object spectrograph at the VLT (MOONS) and the planned MOS for the E-ELT (MOSAIC) are fiber-fed instruments, and are intended to observe targets fainter than the sky continuum level. In this talk, I will present the state of the art in sky subtraction strategies and data reduction algorithms specifically developed for fiber-fed spectrographs. I will also present the main results of an observational campaign to better characterise the sky's spatial and temporal variations (in particular the continuum and faint sky lines)."
Tomographic extreme-ultraviolet spectrographs: TESS.
Cotton, D M; Stephan, A; Cook, T; Vickers, J; Taylor, V; Chakrabarti, S
2000-08-01
We describe the system of Tomographic Extreme Ultraviolet (EUV) SpectrographS (TESS) that are the primary instruments for the Tomographic Experiment using Radiative Recombinative Ionospheric EUV and Radio Sources (TERRIERS) satellite. The spectrographs were designed to make high-sensitivity [(80 counts/s)/Rayleigh; one Rayleigh is equivalent to 10⁶ photons/(4π sr cm² s)], line-of-sight measurements of the O I 135.6- and 91.1-nm emissions suitable for tomographic inversion. The system consists of five spectrographs: four identical nightglow instruments (for redundancy and added sensitivity), and one instrument with a smaller aperture to reduce sensitivity and increase spectral resolution for daytime operation. Each instrument has a bandpass of 80-140 nm with approximately 2- and 1-nm resolution for the night and day instruments, respectively. They utilize microchannel-plate-based two-dimensional imaging detectors with wedge-and-strip anode readouts. The instruments were designed, fabricated, and calibrated at Boston University, and the TERRIERS satellite was launched on 18 May 1999 from Vandenberg Air Force Base, California.
Convex Banding of the Covariance Matrix.
Bien, Jacob; Bunea, Florentina; Xiao, Luo
2016-01-01
We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.
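When the true covariance really is banded, even a crude hard-banding of the sample covariance already improves on it; the convex banding estimator refines this idea with a data-adaptive Toeplitz taper. The sketch below shows only the crude non-adaptive version (an illustration with assumed synthetic data, not the authors' estimator):

```python
import numpy as np

def banded_taper(S, bandwidth):
    """Zero all entries of a sample covariance more than `bandwidth` off the
    diagonal: a simplified, non-adaptive stand-in for convex banding."""
    p = S.shape[0]
    mask = np.abs(np.subtract.outer(np.arange(p), np.arange(p))) <= bandwidth
    return S * mask

rng = np.random.default_rng(2)
p, n = 8, 500
# true covariance: tridiagonal (bandwidth-1) Toeplitz structure
Sigma = np.eye(p) + np.diag(np.full(p-1, 0.4), 1) + np.diag(np.full(p-1, 0.4), -1)
Xs = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(Xs, rowvar=False)

S_banded = banded_taper(S, bandwidth=1)
err_raw = np.linalg.norm(S - Sigma)
err_band = np.linalg.norm(S_banded - Sigma)
print(err_band < err_raw)  # banding removes pure-noise entries, reducing error
```

Zeroing off-band entries whose true value is 0 can only decrease the Frobenius error here; the estimator in the abstract additionally chooses the taper adaptively and retains minimax optimality guarantees.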
Yu, Yi; Huang, Yisheng; Zhang, Lizhen; Lin, Zhoubin; Sun, Shijia; Wang, Guofu
2014-07-01
A Nd³⁺:Na₂La₄(WO₄)₇ crystal with dimensions of ϕ17 × 30 mm³ was grown by the Czochralski method. The thermal expansion coefficients of the Nd³⁺:Na₂La₄(WO₄)₇ crystal are 1.32 × 10⁻⁵ K⁻¹ along the c-axis and 1.23 × 10⁻⁵ K⁻¹ along the a-axis, respectively. The spectroscopic characteristics of the Nd³⁺:Na₂La₄(WO₄)₇ crystal were investigated, and the Judd-Ofelt theory was applied to calculate the spectral parameters. The absorption cross sections at 805 nm are 2.17 × 10⁻²⁰ cm² with a full width at half maximum (FWHM) of 15 nm for π-polarization, and 2.29 × 10⁻²⁰ cm² with a FWHM of 14 nm for σ-polarization. The emission cross sections at 1064 nm are 3.19 × 10⁻²⁰ cm² for σ-polarization and 2.67 × 10⁻²⁰ cm² for π-polarization. The fluorescence quantum efficiency is 67%. Quasi-cw laser operation of the Nd³⁺:Na₂La₄(WO₄)₇ crystal was demonstrated, with a maximum output power of 80 mW and a slope efficiency of 7.12%. The results suggest the Nd³⁺:Na₂La₄(WO₄)₇ crystal is a promising laser crystal for laser-diode pumping.
Reconstruction of convex bodies from surface tensors
DEFF Research Database (Denmark)
Kousholt, Astrid; Kiderlen, Markus
Surface tensors determine a convex body only up to translation, so bodies that are translates of each other cannot be distinguished. An algorithm for reconstructing an unknown convex body K in R2 from its surface tensors up to a certain rank is presented. The output of the reconstruction algorithm is a polytope P whose surface tensors coincide with those of K up to rank s. Using the reconstruction algorithm, the shape of an unknown convex body can be approximated when only a finite number s of surface tensors are available. We establish a stability result, based on a generalization of Wirtinger's inequality, showing that two convex bodies are close in shape when they have identical surface tensors up to a large rank s. This is used to establish consistency of the developed reconstruction algorithm.
Non-convex multi-objective optimization
Pardalos, Panos M; Žilinskas, Julius
2017-01-01
Recent results on non-convex multi-objective optimization problems and methods are presented in this book, with particular attention to expensive black-box objective functions. Multi-objective optimization methods help designers, engineers, and researchers make decisions on appropriate trade-offs between conflicting goals. A variety of deterministic and stochastic multi-objective optimization methods are developed. Beginning with basic concepts and a review of non-convex single-objective optimization problems, the book moves on to cover multi-objective branch-and-bound algorithms, worst-case optimal algorithms (for Lipschitz functions and bi-objective problems), statistical-model-based algorithms, and a probabilistic branch-and-bound approach. Detailed descriptions of new algorithms for non-convex multi-objective optimization, their theoretical substantiation, and examples of practical applications to the cell formation problem in manufacturing engineering, the process design in...
A generalization of the convex Kakeya problem
Ahn, Heekap
2012-01-01
We consider the following geometric alignment problem: Given a set of line segments in the plane, find a convex region of smallest area that contains a translate of each input segment. This can be seen as a generalization of Kakeya's problem of finding a convex region of smallest area such that a needle can be turned through 360 degrees within this region. Our main result is an optimal Θ(n log n)-time algorithm for our geometric alignment problem, when the input is a set of n line segments. We also show that, if the goal is to minimize the perimeter of the region instead of its area, then the optimum placement is when the midpoints of the segments coincide. Finally, we show that for any compact convex figure G, the smallest enclosing disk of G is a smallest-perimeter region containing a translate of any rotated copy of G. © 2012 Springer-Verlag Berlin Heidelberg.
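The perimeter-minimizing placement described above (midpoints coinciding) is directly constructive. Below is a hedged pure-Python sketch, using Andrew's monotone-chain convex hull; the function names are illustrative, not the authors' code:

```python
def convex_hull(pts):
    """Andrew's monotone-chain convex hull; returns vertices in CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def perimeter_optimal_region(segments):
    """Translate every segment so its midpoint is at the origin, then take
    the convex hull of all endpoints: the perimeter-minimizing placement
    described in the abstract."""
    pts = []
    for (x1, y1), (x2, y2) in segments:
        mx, my = (x1 + x2) / 2, (y1 + y2) / 2
        pts += [(x1 - mx, y1 - my), (x2 - mx, y2 - my)]
    return convex_hull(pts)
```

For two perpendicular segments of length 2, the resulting region is the diamond with vertices at (±1, 0) and (0, ±1).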
Reconstruction of convex bodies from surface tensors
DEFF Research Database (Denmark)
Kousholt, Astrid; Kiderlen, Markus
We present two algorithms for reconstruction of the shape of convex bodies in the two-dimensional Euclidean space. The first reconstruction algorithm requires knowledge of the exact surface tensors of a convex body up to rank s for some natural number s. The second algorithm uses harmonic intrinsic volumes, which are certain values of the surface tensors, and allows for noisy measurements. From a generalized version of Wirtinger's inequality, we derive stability results that are utilized to ensure consistency of both reconstruction procedures. Consistency of the reconstruction procedure based...
Probing convex polygons with X-rays
International Nuclear Information System (INIS)
Edelsbrunner, H.; Skiena, S.S.
1988-01-01
An X-ray probe through a polygon measures the length of intersection between a line and the polygon. This paper considers the properties of various classes of X-ray probes, and shows how they interact to give finite strategies for completely describing convex n-gons. It is shown that (3n/2)+6 probes are sufficient to verify a specified n-gon, while for determining convex polygons (3n-1)/2 X-ray probes are necessary and 5n+O(1) sufficient, with 3n+O(1) sufficient given that a lower bound on the size of the smallest edge of P is known
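The probe model above is easy to simulate: one X-ray measurement is the chord length cut from the polygon by a line. The following sketch (a hypothetical helper, not from the paper) intersects the line with each edge of a convex polygon and returns the chord length:

```python
import math

def xray_probe(poly, p, d):
    """Length of the intersection of the line {p + t*d} with a convex
    polygon given as a CCW vertex list: one idealized X-ray probe."""
    ts = []
    n = len(poly)
    for i in range(n):
        (ax, ay), (bx, by) = poly[i], poly[(i + 1) % n]
        ex, ey = bx - ax, by - ay
        det = ex * d[1] - ey * d[0]      # zero when line and edge are parallel
        if abs(det) < 1e-12:
            continue
        # Solve p + t*d = a + s*e for edge parameter s and line parameter t.
        s = (d[0] * (ay - p[1]) - d[1] * (ax - p[0])) / det
        t = (ex * (ay - p[1]) - ey * (ax - p[0])) / det
        if -1e-12 <= s <= 1 + 1e-12:
            ts.append(t)
    if len(ts) < 2:
        return 0.0
    return (max(ts) - min(ts)) * math.hypot(*d)
```

For the unit square, a vertical probe through x = 0.5 measures 1, and a probe along the main diagonal measures sqrt(2).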
Recovering convexity in non-associated plasticity
Francfort, Gilles A.
2018-03-01
We briefly review two main non-associated plasticity models: the Armstrong-Frederick model of nonlinear kinematic hardening and the Drucker-Prager cap model. Non-associativity is commonly thought to preclude any kind of variational formulation, be it in a Hencky-type (static) setting or in a quasi-static evolution, because non-associativity destroys convexity. We demonstrate that this view is misguided: associativity (and convexity) can be restored at the expense of introducing state-variable-dependent dissipation potentials.
Bayoumi, A
2003-01-01
All existing books on infinite-dimensional complex analysis focus on problems in locally convex spaces. The theory without the convexity condition is covered for the first time in this book, showing that this is a genuinely new, important and interesting field. Problems in the theory of functions and in nonlinear analysis are widespread in the mathematical modeling of real-world systems across a very broad range of applications. During the past three decades many new results from the author have helped to solve multiextreme problems arising from important situations, non-convex and
General method of quantitative spectrographic analysis
International Nuclear Information System (INIS)
Capdevila, C.; Roca, M.
1966-01-01
A spectrographic method was developed to determine 23 elements over a wide range of concentrations; the method can be applied to metallic or refractory samples. Prior fusion with lithium tetraborate and germanium oxide is carried out in order to avoid the influence of matrix composition and crystalline structure. Germanium oxide is also employed as internal standard. The resulting beads are mixed with graphite powder (1:1) and excited in a 10 A direct-current arc. (Author) 12 refs
Differential analysis of matrix convex functions
DEFF Research Database (Denmark)
Hansen, Frank; Tomiyama, Jun
2007-01-01
We analyze matrix convex functions of a fixed order defined on a real interval by differential methods, as opposed to the characterization in terms of divided differences given by Kraus [F. Kraus, Über konvexe Matrixfunktionen, Math. Z. 41 (1936) 18-42]. We obtain for each order conditions for ma...
Conference on Convex Analysis and Global Optimization
Pardalos, Panos
2001-01-01
There has been much recent progress in global optimization algorithms for nonconvex continuous and discrete problems, from both a theoretical and a practical perspective. Convex analysis plays a fundamental role in the analysis and development of global optimization algorithms. This is due essentially to the fact that virtually all nonconvex optimization problems can be described using differences of convex functions and differences of convex sets. A conference on Convex Analysis and Global Optimization was held during June 5-9, 2000 at Pythagorion, Samos, Greece. The conference honored the memory of C. Caratheodory (1873-1950) and was endorsed by the Mathematical Programming Society (MPS) and by the Society for Industrial and Applied Mathematics (SIAM) Activity Group in Optimization. The conference was sponsored by the European Union (through the EPEAEK program), the Department of Mathematics of the Aegean University and the Center for Applied Optimization of the University of Florida, by th...
Robust Utility Maximization Under Convex Portfolio Constraints
International Nuclear Information System (INIS)
Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed
2015-01-01
We study a robust maximization problem of terminal wealth and consumption under convex constraints on the portfolio. We establish the existence and uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation, and we characterize the optimal control by using the duality method and deriving a dynamic maximum principle.
Localized Multiple Kernel Learning A Convex Approach
2016-11-22
...data. All the aforementioned approaches to localized MKL are formulated in terms of non-convex optimization problems, and deep theoretical...
A generalization of the convex Kakeya problem
Ahn, Heekap; Bae, Sangwon; Cheong, Otfried; Gudmundsson, Joachim; Tokuyama, Takeshi; Vigneron, Antoine E.
2013-01-01
segments. We also show that, if the goal is to minimize the perimeter of the region instead of its area, then placing the segments with their midpoint at the origin and taking their convex hull results in an optimal solution. Finally, we show that for any
Minimizing convex functions by continuous descent methods
Directory of Open Access Journals (Sweden)
Sergiu Aizicovici
2010-01-01
We study continuous descent methods for minimizing convex functions, defined on general Banach spaces, which are associated with an appropriate complete metric space of vector fields. We show that there exists an everywhere dense open set in this space of vector fields such that each of its elements generates strongly convergent trajectories.
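A continuous descent trajectory follows the flow x'(t) = -grad f(x(t)). A minimal finite-dimensional sketch, assuming a smooth convex function and a forward-Euler discretization (a toy version of the Banach-space setting of the abstract, with hypothetical names):

```python
def euler_descent(grad, x0, step=0.01, n_steps=2000):
    """Forward-Euler discretization of the gradient-flow trajectory
    x'(t) = -grad f(x(t)). For a smooth convex f with a small enough
    step, the iterates approach a minimizer."""
    x = list(x0)
    for _ in range(n_steps):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x
```

For f(x, y) = (x-1)^2 + (y+2)^2 the trajectory converges to the unique minimizer (1, -2).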
Directional Convexity and Finite Optimality Conditions.
1984-03-01
...system, necessary conditions for optimality. Work Unit Number 5 (Optimization and Large Scale Systems). ...that R(T) is convex would then imply x(u,T) ∈ int R(T). (*Istituto di Matematica Applicata, Università di Padova, 35100 Italy. Sponsored by the United...)
Convexity properties of Hamiltonian group actions
Guillemin, Victor
2005-01-01
This is a monograph on convexity properties of moment mappings in symplectic geometry. The fundamental result in this subject is the Kirwan convexity theorem, which describes the image of a moment map in terms of linear inequalities. This theorem bears a close relationship to perplexing old puzzles from linear algebra, such as the Horn problem on sums of Hermitian matrices, on which considerable progress has been made in recent years following a breakthrough by Klyachko. The book presents a simple local model for the moment polytope, valid in the "generic" case, and an elementary Morse-theoretic argument deriving the Klyachko inequalities and some of their generalizations. It reviews various infinite-dimensional manifestations of moment convexity, such as the Kostant type theorems for orbits of a loop group (due to Atiyah and Pressley) or a symplectomorphism group (due to Bloch, Flaschka and Ratiu). Finally, it gives an account of a new convexity theorem for moment map images of orbits of a Borel sub...
Some Characterizations of Convex Interval Games
Brânzei, R.; Tijs, S.H.; Alparslan-Gok, S.Z.
2008-01-01
This paper focuses on new characterizations of convex interval games using the notions of exactness and superadditivity. We also relate big boss interval games with concave interval games and obtain characterizations of big boss interval games in terms of exactness and subadditivity.
A generalization of the convex Kakeya problem
Ahn, Heekap; Bae, Sangwon; Cheong, Otfried; Gudmundsson, Joachim; Tokuyama, Takeshi; Vigneron, Antoine E.
2012-01-01
We consider the following geometric alignment problem: Given a set of line segments in the plane, find a convex region of smallest area that contains a translate of each input segment. This can be seen as a generalization of Kakeya's problem
A generalization of the convex Kakeya problem
Ahn, Heekap
2013-09-19
Given a set of line segments in the plane, not necessarily finite, what is a convex region of smallest area that contains a translate of each input segment? This question can be seen as a generalization of Kakeya's problem of finding a convex region of smallest area such that a needle can be rotated through 360 degrees within this region. We show that there is always an optimal region that is a triangle, and we give an optimal Θ(nlogn)-time algorithm to compute such a triangle for a given set of n segments. We also show that, if the goal is to minimize the perimeter of the region instead of its area, then placing the segments with their midpoint at the origin and taking their convex hull results in an optimal solution. Finally, we show that for any compact convex figure G, the smallest enclosing disk of G is a smallest-perimeter region containing a translate of every rotated copy of G. © 2013 Springer Science+Business Media New York.
Dynamic Matchings in Convex Bipartite Graphs
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Georgiadis, Loukas; Hansen, Kristoffer Arnsfelt
2007-01-01
We consider the problem of maintaining a maximum matching in a convex bipartite graph G = (V,E) under a set of update operations which includes insertions and deletions of vertices and edges. It is not hard to show that it is impossible to maintain an explicit representation of a maximum matching...
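In a convex bipartite graph, each left vertex is adjacent to a contiguous interval of right vertices. The paper concerns dynamic maintenance; for the static problem, a classical greedy scheme in the spirit of Glover's algorithm (a baseline sketch, not the paper's data structure) computes a maximum matching by scanning right vertices in order and always matching the available interval that expires soonest:

```python
import heapq

def convex_bipartite_matching(intervals, m):
    """Greedy maximum matching in a convex bipartite graph.
    intervals: dict mapping each left vertex u to (l, r), meaning u is
    adjacent to right vertices l..r (1-based, out of m in total)."""
    events = {}                       # right vertex v -> intervals opening at v
    for u, (l, r) in intervals.items():
        events.setdefault(l, []).append((r, u))
    heap, match = [], {}
    for v in range(1, m + 1):
        for item in events.get(v, []):
            heapq.heappush(heap, item)
        while heap and heap[0][0] < v:        # interval expired unmatched
            heapq.heappop(heap)
        if heap:                              # match the tightest deadline
            r, u = heapq.heappop(heap)
            match[u] = v
    return match
```

The earliest-deadline rule is what makes the greedy choice safe: matching the interval with the smallest right endpoint never blocks a later, more flexible interval.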
Cost Allocation and Convex Data Envelopment
DEFF Research Database (Denmark)
Hougaard, Jens Leth; Tind, Jørgen
such as Data Envelopment Analysis (DEA). The convexity constraint of the BCC model introduces a non-zero slack in the objective function of the multiplier problem and we show that the cost allocation rules discussed in this paper can be used as candidates to allocate this slack value on to the input (or output...
Tropicalized Lambda Lengths, Measured Laminations and Convexity
DEFF Research Database (Denmark)
C. Penner, R.
This work uncovers the tropical analogue for measured laminations of the convex hull construction of decorated Teichmueller theory, namely, it is a study in coordinates of geometric degeneration to a point of Thurston's boundary for Teichmueller space. This may offer a paradigm for the extension ...
Fast approximate convex decomposition using relative concavity
Ghosh, Mukulika; Amato, Nancy M.; Lu, Yanyan; Lien, Jyh-Ming
2013-01-01
Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.
Schur Convexity of Generalized Heronian Means Involving Two Parameters
Directory of Open Access Journals (Sweden)
Bencze Mihály
2008-01-01
The Schur convexity and Schur-geometric convexity of generalized Heronian means involving two parameters are studied; the main result is then used to obtain several interesting and significant inequalities for generalized Heronian means.
A STRONG OPTIMIZATION THEOREM IN LOCALLY CONVEX SPACES
Institute of Scientific and Technical Information of China (English)
程立新; 腾岩梅
2003-01-01
This paper presents a geometric characterization of convex sets in locally convex spaces on which a strong optimization theorem of the Stegall type holds, and gives Collier's theorem on w*-Asplund spaces a localized setting.
Displacement Convexity for First-Order Mean-Field Games
Seneci, Tommaso
2018-01-01
Finally, we identify a large class of functions, that depend on solutions of MFGs, which are convex in time. Among these, we find several norms. This convexity gives bounds for the density of solutions of the planning problem.
Convex stoma appliances: an audit of stoma care nurses.
Perrin, Angie
2016-12-08
This article examines the complexities surrounding the use of convex appliances within the specialist sphere of stoma care. It highlights some of the results of a small audit of 24 stoma care nurses, examining the general use of convex appliances and how usage of convex products has evolved, along with specialist stoma care practice.
Convexity, gauge-dependence and tunneling rates
Energy Technology Data Exchange (ETDEWEB)
Plascencia, Alexis D.; Tamarit, Carlos [Institute for Particle Physics Phenomenology, Durham University,South Road, DH1 3LE (United Kingdom)
2016-10-19
We clarify issues of convexity, gauge-dependence and radiative corrections in relation to tunneling rates. Despite the gauge dependence of the effective action at zero and finite temperature, it is shown that tunneling and nucleation rates remain independent of the choice of gauge-fixing. Taking as a starting point the functional that defines the transition amplitude from a false vacuum onto itself, it is shown that decay rates are exactly determined by a non-convex, false vacuum effective action evaluated at an extremum. The latter can be viewed as a generalized bounce configuration, and gauge-independence follows from the appropriate Nielsen identities. This holds for any choice of gauge-fixing that leads to an invertible Faddeev-Popov matrix.
Reconstruction of convex bodies from surface tensors
DEFF Research Database (Denmark)
Kousholt, Astrid; Kiderlen, Markus
2016-01-01
We present two algorithms for reconstruction of the shape of convex bodies in the two-dimensional Euclidean space. The first reconstruction algorithm requires knowledge of the exact surface tensors of a convex body up to rank s for some natural number s. When only measurements of surface tensors subject to noise are available for reconstruction, we recommend using certain values of the surface tensors, namely harmonic intrinsic volumes, instead of the surface tensors evaluated at the standard basis. The second algorithm we present is based on harmonic intrinsic volumes and allows for noisy measurements. From a generalized version of Wirtinger's inequality, we derive stability results that are utilized to ensure consistency of both reconstruction procedures. Consistency of the reconstruction procedure based on measurements subject to noise is established under certain assumptions on the noise...
Exact generating function for 2-convex polygons
International Nuclear Information System (INIS)
James, W R G; Jensen, I; Guttmann, A J
2008-01-01
Polygons are described as almost-convex if their perimeter differs from the perimeter of their minimum bounding rectangle by twice their 'concavity index', m. Such polygons are called m-convex polygons and are characterized by having up to m indentations in their perimeter. We first describe how we conjectured the (isotropic) generating function for the case m = 2 using a numerical procedure based on series expansions. We then proceed to prove this result for the more general case of the full anisotropic generating function, in which steps in the x and y directions are distinguished. In doing so, we develop tools that would allow for the case m > 2 to be studied
Solving ptychography with a convex relaxation
Horstmeyer, Roarke; Chen, Richard Y.; Ou, Xiaoze; Ames, Brendan; Tropp, Joel A.; Yang, Changhuei
2015-05-01
Ptychography is a powerful computational imaging technique that transforms a collection of low-resolution images into a high-resolution sample reconstruction. Unfortunately, algorithms that currently solve this reconstruction problem lack stability, robustness, and theoretical guarantees. Recently, convex optimization algorithms have improved the accuracy and reliability of several related reconstruction efforts. This paper proposes a convex formulation of the ptychography problem. This formulation has no local minima, it can be solved using a wide range of algorithms, it can incorporate appropriate noise models, and it can include multiple a priori constraints. The paper considers a specific algorithm, based on low-rank factorization, whose runtime and memory usage are near-linear in the size of the output image. Experiments demonstrate that this approach offers a 25% lower background variance on average than alternating projections, the ptychographic reconstruction algorithm that is currently in widespread use.
Convex nonnegative matrix factorization with manifold regularization.
Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong
2015-03-01
Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
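For context on what GCNMF builds upon, here is a minimal sketch of standard multiplicative-update NMF (the Lee-Seung baseline). This is not the paper's graph-regularized convex method, which additionally handles mixed-sign data and a manifold term; names and defaults are assumptions:

```python
import numpy as np

def nmf(X, k, n_iter=200, seed=0):
    """Baseline NMF via multiplicative updates: find nonnegative W, H
    with X ≈ W @ H. The updates never increase the Frobenius
    reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-12                       # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Convex NMF relaxes the nonnegativity requirement on X by constraining the basis to lie in the convex hull of the data columns; GCNMF then adds the graph-regularization term on top of that.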
An easy path to convex analysis and applications
Mordukhovich, Boris S
2013-01-01
Convex optimization has an increasing impact on many areas of mathematics, applied sciences, and practical applications. It is now being taught at many universities and being used by researchers of different fields. As convex analysis is the mathematical foundation for convex optimization, having deep knowledge of convex analysis helps students and researchers apply its tools more effectively. The main goal of this book is to provide an easy access to the most fundamental parts of convex analysis and its applications to optimization. Modern techniques of variational analysis are employed to cl
Convex geometry of quantum resource quantification
Regula, Bartosz
2018-01-01
We introduce a framework unifying the mathematical characterisation of different measures of general quantum resources and allowing for a systematic way to define a variety of faithful quantifiers for any given convex quantum resource theory. The approach allows us to describe many commonly used measures such as matrix norm-based quantifiers, robustness measures, convex roof-based measures, and witness-based quantifiers together in a common formalism based on the convex geometry of the underlying sets of resource-free states. We establish easily verifiable criteria for a measure to possess desirable properties such as faithfulness and strong monotonicity under relevant free operations, and show that many quantifiers obtained in this framework indeed satisfy them for any considered quantum resource. We derive various bounds and relations between the measures, generalising and providing significantly simplified proofs of results found in the resource theories of quantum entanglement and coherence. We also prove that the quantification of resources in this framework simplifies for pure states, allowing us to obtain more easily computable forms of the considered measures, and show that many of them are in fact equal on pure states. Further, we investigate the dual formulation of resource quantifiers, which provide a characterisation of the sets of resource witnesses. We present an explicit application of the results to the resource theories of multi-level coherence, entanglement of Schmidt number k, multipartite entanglement, as well as magic states, providing insight into the quantification of the four resources by establishing novel quantitative relations and introducing new quantifiers, such as a measure of entanglement of Schmidt number k which generalises the convex roof-extended negativity, a measure of k-coherence which generalises the...
On the convexity of relativistic hydrodynamics
International Nuclear Information System (INIS)
Ibáñez, José M; Martí, José M; Cordero-Carrión, Isabel; Miralles, Juan A
2013-01-01
The relativistic hydrodynamic system of equations for a perfect fluid obeying a causal equation of state is hyperbolic (Anile 1989 Relativistic Fluids and Magneto-Fluids (Cambridge: Cambridge University Press)). In this report, we derive the conditions for this system to be convex in terms of the fundamental derivative of the equation of state (Menikoff and Plohr 1989 Rev. Mod. Phys. 61 75). The classical limit is recovered. Communicated by L Rezzolla (note)
Dynamic Convex Duality in Constrained Utility Maximization
Li, Yusong; Zheng, Harry
2016-01-01
In this paper, we study a constrained utility maximization problem following the convex duality approach. After formulating the primal and dual problems, we construct the necessary and sufficient conditions for both the primal and dual problems in terms of FBSDEs plus additional conditions. Such formulation then allows us to explicitly characterize the primal optimal control as a function of the adjoint process coming from the dual FBSDEs in a dynamic fashion and vice versa. Moreover, we also...
Optimal skill distribution under convex skill costs
Directory of Open Access Journals (Sweden)
Tin Cheuk Leung
2018-03-01
This paper studies the optimal distribution of skills in an optimal income tax framework with convex skill costs. The problem is cast as a social planning problem in which a redistributive planner chooses how to distribute a given amount of aggregate skill across people. We find that the optimal skill distribution is either perfectly equal or perfectly unequal; an interior level of skill inequality is never optimal.
Exact optics - III. Schwarzschild's spectrograph camera revised
Willstrop, R. V.
2004-03-01
Karl Schwarzschild identified a system of two mirrors, each defined by conic sections, free of third-order spherical aberration, coma and astigmatism, and with a flat focal surface. He considered it impractical, because the field was too restricted. This system was rediscovered as a quadratic approximation to one of Lynden-Bell's `exact optics' designs which have wider fields. Thus the `exact optics' version has a moderate but useful field, with excellent definition, suitable for a spectrograph camera. The mirrors are strongly aspheric in both the Schwarzschild design and the exact optics version.
Spectrographic analysis of uranium-molybdenum alloys
International Nuclear Information System (INIS)
Roca, M.
1967-01-01
A spectrographic method of analysis has been developed for uranium-molybdenum alloys containing up to 10% Mo. The carrier distillation technique, with gallium oxide and graphite as carriers, is used for the semiquantitative determination of Al, Cr, Fe, Ni and Si, and involves the conversion of the samples into oxides. A study of the influence of molybdenum on the line intensities showed that it suffices to prepare a single set of standards with 0.6% MoO3. Total-burning excitation is used for calcium, employing two sets of standards with 0.6 and 7.5% MoO3. (Author) 5 refs
The occipital lobe convexity sulci and gyri.
Alves, Raphael V; Ribas, Guilherme C; Párraga, Richard G; de Oliveira, Evandro
2012-05-01
The anatomy of the occipital lobe convexity is so intricate and variable that its precise description is not found in the classic anatomy textbooks, and the occipital sulci and gyri are described with different nomenclatures according to different authors. The aim of this study was to investigate and describe the anatomy of the occipital lobe convexity and clarify its nomenclature. The configurations of sulci and gyri on the lateral surface of the occipital lobe of 20 cerebral hemispheres were examined in order to identify the most characteristic and consistent patterns. The most characteristic and consistent occipital sulci identified in this study were the intraoccipital, transverse occipital, and lateral occipital sulci. The morphology of the transverse occipital sulcus and the intraoccipital sulcus connection was identified as the most important aspect to define the gyral pattern of the occipital lobe convexity. Knowledge of the main features of the occipital sulci and gyri permits the recognition of a basic configuration of the occipital lobe and the identification of its sulcal and gyral variations.
Convex Hull Aided Registration Method (CHARM).
Fan, Jingfan; Yang, Jian; Zhao, Yitian; Ai, Danni; Liu, Yonghuai; Wang, Ge; Wang, Yongtian
2017-09-01
Non-rigid registration finds many applications such as photogrammetry, motion tracking, model retrieval, and object recognition. In this paper we propose a novel convex hull aided registration method (CHARM) to match two point sets subject to a non-rigid transformation. First, two convex hulls are extracted from the source and target respectively. Then, all points of the point sets are projected onto the reference plane through each triangular facet of the hulls. From these projections, invariant features are extracted and matched optimally. The matched feature point pairs are mapped back onto the triangular facets of the convex hulls to remove outliers that are outside any relevant triangular facet. The rigid transformation from the source to the target is robustly estimated by the random sample consensus (RANSAC) scheme through minimizing the distance between the matched feature point pairs. Finally, these feature points are utilized as the control points to achieve non-rigid deformation in the form of thin-plate spline of the entire source point set towards the target one. The experimental results based on both synthetic and real data show that the proposed algorithm outperforms several state-of-the-art ones with respect to sampling, rotational angle, and data noise. In addition, the proposed CHARM algorithm also shows higher computational efficiency compared to these methods.
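One stage of a pipeline like the one above, estimating the rigid transform from matched point pairs, can be sketched in a few lines. The sketch below uses a plain least-squares Kabsch fit rather than the paper's RANSAC scheme, and all data are synthetic; it also extracts a convex hull with SciPy purely to illustrate the first step.

```python
import numpy as np
from scipy.spatial import ConvexHull

def rigid_from_pairs(src, dst):
    """Least-squares rigid transform (Kabsch): dst ~ R @ src + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # reflection correction
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(0)
src = rng.random((50, 3))
hull = ConvexHull(src)                     # step 1: hull of the source set
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_from_pairs(src, dst)          # recovers R_true and the shift
```

In the full CHARM algorithm this estimate would be wrapped in RANSAC over the hull-projected feature matches before the final thin-plate-spline deformation.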
Generalized vector calculus on convex domain
Agrawal, Om P.; Xu, Yufeng
2015-06-01
In this paper, we apply recently proposed generalized integral and differential operators to develop generalized vector calculus and generalized variational calculus for problems defined over a convex domain. In particular, we present some generalization of Green's and Gauss divergence theorems involving some new operators, and apply these theorems to generalized variational calculus. For fractional power kernels, the formulation leads to fractional vector calculus and fractional variational calculus for problems defined over a convex domain. In special cases, when certain parameters take integer values, we obtain formulations for integer order problems. Two examples are presented to demonstrate applications of the generalized variational calculus which utilize the generalized vector calculus developed in the paper. The first example leads to a generalized partial differential equation and the second example leads to a generalized eigenvalue problem, both in two dimensional convex domains. We solve the generalized partial differential equation by using polynomial approximation. A special case of the second example is a generalized isoperimetric problem. We find an approximate solution to this problem. Many physical problems containing integer order integrals and derivatives are defined over arbitrary domains. We speculate that future problems containing fractional and generalized integrals and derivatives in fractional mechanics will be defined over arbitrary domains, and therefore, a general variational calculus incorporating a general vector calculus will be needed for these problems. This research is our first attempt in that direction.
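For orientation, the classical integer-order identities that the generalized operators reduce to when the kernel parameters take integer values are, in standard notation:

```latex
% Green's theorem over a plane domain D with boundary \partial D:
\oint_{\partial D} \left( P\,dx + Q\,dy \right)
  = \iint_{D} \left( \frac{\partial Q}{\partial x}
  - \frac{\partial P}{\partial y} \right) dA .

% Gauss divergence theorem over a region V with boundary surface S:
\int_{V} \left( \nabla \cdot \mathbf{F} \right) dV
  = \oint_{S} \mathbf{F} \cdot \mathbf{n}\, dS .
```

The paper's contribution is replacing the integer-order derivatives here with generalized (e.g. fractional-power-kernel) operators while keeping the convex-domain geometry.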
Convexities move because they contain matter.
Barenholtz, Elan
2010-09-22
Figure-ground assignment to a contour is a fundamental stage in visual processing. The current paper introduces a novel, highly general dynamic cue to figure-ground assignment: "Convex Motion." Across six experiments, subjects showed a strong preference to assign figure and ground to a dynamically deforming contour such that the moving contour segment was convex rather than concave. Experiments 1 and 2 established the preference across two different kinds of deformational motion. Additional experiments determined that this preference was not due to fixation (Experiment 3) or attentional mechanisms (Experiment 4). Experiment 5 found a similar but reduced bias for rigid, as opposed to deformational, motion, and Experiment 6 demonstrated that the phenomenon depends on the global motion of the affected contour. An explanation of this phenomenon is presented on the basis of typical natural deformational motion, which tends to involve convex contour projections that contain regions consisting of physical "matter," as opposed to concave contour indentations that contain empty space. These results highlight the fundamental relationship between figure and ground, perceived shape, and the inferred physical properties of an object.
Micro photometer's automation for quantitative spectrograph analysis
International Nuclear Information System (INIS)
Gutierrez E, C.Y.A.
1996-01-01
A microphotometer is used to measure the darkening of spectral lines. By analyzing these lines, the elements contained in a sample and their concentrations can be determined; this analysis is known as quantitative spectrographic analysis. It is carried out in three steps, as follows. 1. Emulsion calibration. The photographic emulsion is gauged to determine how the recorded density varies with the incident radiation. A least-squares fit to the measured data yields a calibration graph relating the density of a dark spectral line to the incident light intensity shown by the microphotometer. 2. Working curves. Values of known concentration of an element are plotted against incident light intensity. Since the sample contains several elements, a working curve must be found for each of them. 3. Analytical results. The calibration curve and working curves are compared and the concentration of the element under study is determined. Automatic data acquisition, calculation and reporting of results are done by means of a computer (PC) and a computer program. Signal-conditioning circuits deliver TTL (Transistor-Transistor Logic) levels to make communication between the microphotometer and the computer possible.
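The three steps above can be sketched numerically. All data points and fitted values below are hypothetical illustrations, not values from the paper; the fits use ordinary least squares via NumPy.

```python
import numpy as np

# Step 1: emulsion calibration -- least-squares fit of photographic
# density vs. log incident intensity (hypothetical measurements).
log_intensity = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
density       = np.array([0.10, 0.42, 0.75, 1.08, 1.40])
slope, intercept = np.polyfit(log_intensity, density, 1)

# Step 2: working curve for one element -- log concentration of the
# standards vs. log line intensity (hypothetical values).
log_sig  = np.array([0.2, 1.1, 2.0])
log_conc = np.array([-1.0, 0.0, 1.0])
wslope, wicept = np.polyfit(log_sig, log_conc, 1)

# Step 3: analytical result -- read an unknown concentration off the
# working curve from its measured log line intensity.
unknown_log_sig = 1.55
conc = 10 ** (wslope * unknown_log_sig + wicept)
```

A separate working curve (step 2) would be fitted for each element present in the sample.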
Field Raman Spectrograph for Environmental Analysis
International Nuclear Information System (INIS)
Sylvia, J.M.; Haas, J.W.; Spencer, K.M.; Carrabba, M.M.; Rauh, R.D.; Forney, R.W.; Johnston, T.M.
1998-01-01
The widespread contamination found across the US Department of Energy (DOE) complex has received considerable attention from the government and public alike. A massive site characterization and cleanup effort has been underway for several years and is expected to continue for several decades more. The scope of the cleanup effort ranges from soil excavation and treatment to complete dismantling and decontamination of whole buildings. To its credit, DOE has supported research and development of new technologies to speed up and reduce the cost of this effort. One area in particular has been the development of portable instrumentation that can be used to perform analytical measurements in the field. This approach provides timely data to decision makers and eliminates the expense, delays, and uncertainties of sample preservation, transport, storage, and laboratory analysis. In this program, we have developed and demonstrated in the field a transportable, high performance Raman spectrograph that can be used to detect and identify contaminants in a variety of scenarios. With no moving parts, the spectrograph is rugged and can perform many Raman measurements in situ with flexible fiber optic sampling probes. The instrument operates under computer control and a software package has been developed to collect and process spectral data. A collection of Raman spectra for 200 contaminants of DOE importance has been compiled in a searchable format to assist in the identification of unknown contaminants in the field
The deterministic optical alignment of the HERMES spectrograph
Gers, Luke; Staszak, Nicholas
2014-07-01
The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four channel, VPH-grating spectrograph fed by two 400 fiber slit assemblies whose construction and commissioning has now been completed at the Anglo Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles about which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.
LRS2: A New Integral Field Spectrograph for the HET
Tuttle, Sarah E.; Hill, Gary J.; Chonis, Taylor S.; Tonnesen, Stephanie
2016-01-01
Here we present LRS2 (Low Resolution Spectrograph) and highlight early science opportunities with the newly upgraded Hobby-Eberly Telescope (HET). LRS2 is a four-channel optical wavelength (370 nm - 1 μm) spectrograph based on two VIRUS unit spectrographs. This fiber-fed integral field spectrograph covers a 12" x 6" field of view, switched between the two units (one blue, and one red) at R~2000. We highlight design elements, including the fundamental modification to grisms (from VPH gratings in VIRUS) to access the higher resolution. We discuss early science opportunities, including investigating nearby "blue-bulge" spiral galaxies and their anomalous star formation distribution.
Convex and Radially Concave Contoured Distributions
Directory of Open Access Journals (Sweden)
Wolf-Dieter Richter
2015-01-01
Integral representations of the locally defined star-generalized surface content measures on star spheres are derived for boundary spheres of balls being convex or radially concave with respect to a fan in Rn. As a result, the general geometric measure representation of star-shaped probability distributions and the general stochastic representation of the corresponding random vectors allow additional specific interpretations in the two mentioned cases. Applications to estimating and testing hypotheses on scaling parameters are presented, and two-dimensional sample clouds are simulated.
On conditional independence and log-convexity
Czech Academy of Sciences Publication Activity Database
Matúš, František
2012-01-01
Roč. 48, č. 4 (2012), s. 1137-1147 ISSN 0246-0203 R&D Projects: GA AV ČR IAA100750603; GA ČR GA201/08/0539 Institutional support: RVO:67985556 Keywords : Conditional independence * Markov properties * factorizable distributions * graphical Markov models * log-convexity * Gibbs- Markov equivalence * Markov fields * Gaussian distributions * positive definite matrices * covariance selection model Subject RIV: BA - General Mathematics Impact factor: 0.933, year: 2012 http://library.utia.cas.cz/separaty/2013/MTR/matus-0386229.pdf
Spectrographic determination of chlorine and fluorine
International Nuclear Information System (INIS)
Contamin, G.
1965-04-01
Experimental conditions have been investigated in order to obtain the highest sensitivity in spectrographic determination of chlorine and fluorine using the Fassel method of excitation in an inert atmosphere. The influence of the nature of the atmosphere, of the discharge conditions and of the matrix material has been investigated. The following results have been established: 1. chlorine determination is definitely possible: a working curve has been drawn between 10 μg and 100 μg, the detection limit being around 5 μg; 2. fluorine determination is not satisfactory: the detection limit is still of the order of 80 μg. The best operating conditions have been defined for both elements. (author) [fr
Spectrographic determination of impurities in magnesium metal
International Nuclear Information System (INIS)
Capdevila, C.; Diaz-Guerra, J. P.
1979-01-01
The spectrographic determination of trace quantities of Al, B, Cd, Co, Cr, Cu, Fe, Li, Mn, Mo, Ni and Si in magnesium metal is described. Samples are dissolved in HNO3 and calcined to MgO. In order to avoid losses of boron, NH4OH is added to the nitric solution. Except for aluminium and chromium, the analysis is performed through the use of the carrier distillation technique; these two impurities are determined by burning the MgO to completion. Among the compounds studied as carriers (AgCl, AgF, CsCl, CuF2, KCl and SrF2), AgCl allows, in general, the best volatilization efficiency. Lithium determination is achieved by using KCl or CsCl. Detection limits, on the basis of MgO, are in the range 0.1 to 30 ppm, depending on the element. (Author) 8 refs
Quantitative spectrographic analysis of impurities in antimony
International Nuclear Information System (INIS)
Brito, J. de; Gomes, R.P.
1978-01-01
An emission spectrographic method is described for the determination of Ag, Al, As, Be, Bi, Cd, Cr, Cu, Ga, Ni, Pb, Sn, Si, and Zn in high-purity antimony metal. The metal sample is dissolved in nitric acid (1:1) and converted to oxide by calcination at 900 °C for one hour. The oxide so obtained is mixed with graphite, which is used as a spectroscopic buffer, and excited by a direct current arc. Many parameters are studied and optimum conditions are selected for the determination of the impurities mentioned. The spectrum is photographed in the second order of a 15,000 lines per inch grating and the most sensitive lines for the elements are selected. The impurities are determined in the concentration range of 1 to 0.01% with a precision of approximately 10% [pt
Spectrographic determination of impurities in beryllium oxide
International Nuclear Information System (INIS)
Paula Reino, L.C. de; Lordello, A.R.; Pereira, A.S.A.
1986-03-01
A method for the spectrographic determination of Al, B, Cd, Co, Cu, Cr, Fe, Mg, Na, Ni, Si and Zn in nuclear grade beryllium oxide has been developed. The determination of Co, Al, Na and Zn is based upon a carrier distillation technique. Better results were obtained with 2% Ga2O3 as carrier in beryllium oxide. For the elements B, Cd, Cu, Fe, Cr, Mg, Ni and Si the sample is loaded in a Scribner-Mullin shallow cup electrode, covered with graphite powder and excited in a DC arc. The relative standard deviation values for different elements are in the range of 10 to 20%. The method fulfills requirements of precision and sensitivity for specification analysis of nuclear grade beryllium oxide. (Author) [pt
Field Raman spectrograph for environmental analysis
International Nuclear Information System (INIS)
Haas, J.W. III; Forney, R.W.; Carrabba, M.M.; Rauh, R.D.
1995-01-01
The enormous cost for chemical analysis at DOE facilities predicates that cost-saving measures be implemented. Many approaches, ranging from increasing laboratory sample throughput by reducing preparation time to the development of field instrumentation, are being explored to meet this need. Because of the presence of radioactive materials at many DOE sites, there is also a need for methods that are safer for site personnel and analysts. This project entails the development of a compact Raman spectrograph for field screening and monitoring of a wide variety of wastes, pollutants, and corrosion products in storage tanks, soils, and ground and surface waters. Analytical advantages of the Raman technique include its ability to produce a unique, spectral fingerprint for each contaminant and its ability to analyze both solids and liquids directly, without the need for isolation or cleanup
Fiber Scrambling for High Precision Spectrographs
Kaplan, Zachary; Spronck, J. F. P.; Fischer, D.
2011-05-01
The detection of Earth-like exoplanets with the radial velocity method requires extreme Doppler precision and long-term stability in order to measure tiny reflex velocities in the host star. Recent planet searches have led to the detection of so-called "super-Earths" (up to a few Earth masses) that induce radial velocity changes of about 1 m/s. However, the detection of true Earth analogs requires a precision of 10 cm/s. One of the largest factors limiting Doppler precision is variation in the Point Spread Function (PSF) from observation to observation due to changes in the illumination of the slit and spectrograph optics. Thus, this stability has become a focus of current instrumentation work. Fiber optics have been used since the 1980s to couple telescopes to high-precision spectrographs, initially for simpler mechanical design and control. However, fiber optics are also naturally efficient scramblers. Scrambling refers to a fiber's ability to produce an output beam independent of its input. Our research is focused on characterizing the scrambling properties of several types of fibers, including circular, square and octagonal fibers. By measuring the intensity distribution after the fiber as a function of input beam position, we can simulate guiding errors that occur at an observatory. Through this, we can determine which fibers produce the most uniform outputs for the most severe guiding errors, improving the PSF and allowing sub-m/s precision. However, extensive testing of fibers of supposedly identical core diameter, length and shape from the same manufacturer has revealed the "personality" of individual fibers. Personality describes differing intensity patterns for supposedly duplicate fibers illuminated identically. Here, we present our results on scrambling characterization as a function of fiber type, while studying individual fiber personality.
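A common way to quantify the effect described above is the scrambling gain, often defined as the normalized input spot displacement divided by the normalized output centroid shift. The sketch below uses that conventional definition with purely hypothetical numbers; it is not the paper's measurement setup.

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (x, y) of a 2-D near-field image."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return np.array([(xs * img).sum() / total, (ys * img).sum() / total])

def scrambling_gain(shift_in, d_in, shift_out, d_out):
    """SG = (input shift / input diameter) / (output shift / output diameter)."""
    return (shift_in / d_in) / (shift_out / d_out)

# Sanity check of the centroid on a single bright pixel at column 3, row 2.
img = np.zeros((5, 5))
img[2, 3] = 1.0
c = centroid(img)

# Hypothetical example: input spot moved 10% of the core diameter while the
# output centroid moved only 0.1% -- a scrambling gain of 100.
sg = scrambling_gain(shift_in=10.0, d_in=100.0, shift_out=0.1, d_out=100.0)
```

In practice the output centroid would be computed from recorded near-field images at several input beam positions, mimicking guiding errors.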
Spectrographic analysis of thorium and its compounds
International Nuclear Information System (INIS)
Grampurohit, S.V.; Saksena, M.D.; Kaimal, V.N.P.; Kapoor, S.K.; Murty, P.S.
1980-01-01
A spectrographic method, which employs the principle of the carrier-distillation technique, is described for the analysis of high purity thoria. Two carriers, AgCl and NaF, were used in determining 27 trace elements in ThO2. The elements were divided into three groups, A, B and C. Group A comprised 15 elements, viz. Al, B, Be, Cd, Co, Cr, Cu, Fe, Mg, Mn, Ni, Pb, Sb, Si and Sn, since it was possible to choose sensitive lines of these elements in one spectral region, 220 - 285 nm. Group B covered 8 elements, viz. Ag, Bi, Ca, Ga, Mo, Ti, V and Zn, which could be determined in the spectral region 290 - 352.5 nm. Group C consisted of 4 elements, viz. Ba, K, Li and Na, which could be determined in the spectral region 440 - 820 nm. 5% AgCl was used as the carrier for the determination of group A and C elements and 4% NaF was used as the carrier for the estimation of group B elements. One hundred milligrammes of the sample (in the form of ThO2) containing the carrier were taken in a carrier-distillation electrode and excited in a d.c. arc (10 amps for groups A and C; 15 amps for group B). The spectra of sample and synthetic standards were photographed on Hilger's large quartz, JACO 3.4 m Ebert plane grating and Hilger's large glass spectrographs, respectively, for determining group A, B and C elements. The detection limit obtained for B and Cd was 0.1 ppm. Thorium metal and thorium nitrate samples were converted to ThO2 prior to analysis. (auth.)
First light results from the Hermes spectrograph at the AAT
Sheinis, A.; Barden, S.; Birchall, M.; Carollo, D.; Bland-Hawthorn, J.; Brzeski, J.; Case, S.; Cannon, R.; Churilov, V.; Couch, W.; Dean, R.; De Silva, G.; D'Orazi, V.; Farrell, T.; Fiegert, K.; Freeman, K.; Frost, G.; Gers, L.; Goodwin, M.; Gray, D.; Heald, R.; Heijmans, J.A.C.; Jones, D.; Keller, S.; Klauser, U.; Kondrat, Y.; Lawrence, J.; Lee, S.; Mali, S.; Martell, S.; Mathews, D.; Mayfield, D.; Miziarski, S.; Muller, R.; Pai, N.; Patterson, R.; Penny, E.; Orr, D.; Shortridge, K.; Simpson, J.; Smedley, S.; Smith, G.; Stafford, D.; Staszak, N.; Vuong, M.; Waller, L.; Wylie de Boer, E.; Xavier, P.; Zheng, J.; Zhelem, R.; Zucker, D.
2014-01-01
The High Efficiency and Resolution Multi Element Spectrograph, HERMES, is a facility-class optical spectrograph for the AAT. It is designed primarily for Galactic Archeology [21], the first major attempt to create a detailed understanding of galaxy formation and evolution by studying the history of
Quantitative imaging through a spectrograph. 1. Principles and theory.
Tolboom, R.A.L.; Dam, N.J.; Meulen, J.J. ter; Mooij, J.M.; Maassen, J.D.M.
2004-01-01
Laser-based optical diagnostics, such as planar laser-induced fluorescence and, especially, Raman imaging, often require selective spectral filtering. We advocate the use of an imaging spectrograph with a broad entrance slit as a spectral filter for two-dimensional imaging. A spectrograph in this
Convex functions and optimization methods on Riemannian manifolds
Udrişte, Constantin
1994-01-01
This unique monograph discusses the interaction between Riemannian geometry, convex programming, numerical analysis, dynamical systems and mathematical modelling. The book is the first account of the development of this subject as it emerged at the beginning of the 'seventies. A unified theory of convexity of functions, dynamical systems and optimization methods on Riemannian manifolds is also presented. Topics covered include geodesics and completeness of Riemannian manifolds, variations of the p-energy of a curve and Jacobi fields, convex programs on Riemannian manifolds, geometrical constructions of convex functions, flows and energies, applications of convexity, descent algorithms on Riemannian manifolds, TC and TP programs for calculations and plots, all allowing the user to explore and experiment interactively with real life problems in the language of Riemannian geometry. An appendix is devoted to convexity and completeness in Finsler manifolds. For students and researchers in such diverse fields as pu...
Magnetic spectrograph for the Holifield heavy ion research facility
International Nuclear Information System (INIS)
Ford, J.L.C. Jr.; Enge, H.A.; Erskine, J.R.; Hendrie, D.L.; LeVine, M.J.
1977-01-01
The need for a new generation magnetic spectrograph for the Holifield Heavy Ion Research Facility is discussed. The advantages of a magnetic spectrograph for heavy ion research are discussed, as well as some of the types of experiments for which such an instrument is suited. The limitations which the quality of the incident beam, target and spectrograph itself impose on high resolution heavy ion measurements are discussed. Desired features of an ideal new spectrograph are: (1) intrinsic resolving power E/ΔE greater than or equal to 3000; (2) maximum solid angle greater than or equal to 20 msr; (3) dispersion approx. 4-8m; (4) maximum energy interval approx. 30%; and (5) mass-energy product greater than or equal to 200. Various existing and proposed spectrographs are compared with the specifications for a new heavy ion magnet design
CVXPY: A Python-Embedded Modeling Language for Convex Optimization
Diamond, Steven; Boyd, Stephen
2016-01-01
CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples.
Convex blind image deconvolution with inverse filtering
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
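The oscillation (noise amplification) of naive inverse filters mentioned above is easy to demonstrate in a toy non-blind 1-D setting. The sketch below contrasts a naive inverse filter with a Tikhonov-damped one; it is not the paper's convex star-norm/total-variation model, and the signal, kernel, and regularization weight are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
x = np.zeros(n)
x[40:60] = 1.0                                        # piecewise-constant signal
k = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0)
k /= k.sum()                                          # Gaussian blur kernel
K = np.fft.fft(np.fft.ifftshift(k))                   # transfer function
y = np.real(np.fft.ifft(np.fft.fft(x) * K))           # blurred observation
y += 1e-3 * rng.standard_normal(n)                    # small additive noise

# Naive inverse filter: dividing by tiny |K| amplifies noise enormously.
naive = np.real(np.fft.ifft(np.fft.fft(y) / K))

# Tikhonov-damped inverse filter: conj(K) / (|K|^2 + lam) tames the blow-up.
lam = 1e-3
tik = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(K) / (np.abs(K) ** 2 + lam)))

err_naive = np.linalg.norm(naive - x)
err_tik = np.linalg.norm(tik - x)
```

Regularizing the inverse filter itself (with the star norm) plus a TV term on the image, as in the paper, is the convex refinement of this basic idea.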
Effective potential for non-convex potentials
International Nuclear Information System (INIS)
Fujimoto, Y.; O'Raifeartaigh, L.; Parravicini, G.
1983-01-01
It is shown that the well-known relationship between the effective potential Γ and the vacuum graphs μ of scalar QFT follows directly from the translational invariance of the measure, and that it holds for all values of the fields φ if, and only if, the classical potential is convex. In the non-convex case μ appears to become complex for some values of φ, but it is shown that the complexity is only apparent and is due to the failure of the loop expansion. The effective potential actually remains real and well-defined for all φ, and reduces to μ in the neighbourhood of the classical minima. A number of examples are considered, notably potentials which are spontaneously broken. In particular the mechanism by which a spontaneous breakdown may be generated by radiative corrections is re-investigated and some new insights obtained. Finally, it is shown that the renormalization group equations for the parameters may be obtained by inspection from the effective potential, and among the examples considered are SU(n) fields and supermultiplets. In particular, it is shown that for supermultiplets the effective potential is not only real but positive. (orig.)
INdAM Workshop on Analytic Aspects of Convexity
Colesanti, Andrea; Gronchi, Paolo
2018-01-01
This book presents the proceedings of the international conference Analytic Aspects in Convexity, which was held in Rome in October 2016. It offers a collection of selected articles, written by some of the world's leading experts in the field of Convex Geometry, on recent developments in this area: theory of valuations; geometric inequalities; affine geometry; and curvature measures. The book will be of interest to a broad readership, from those involved in Convex Geometry, to those focusing on Functional Analysis, Harmonic Analysis, Differential Geometry, or PDEs. The book is addressed to PhD students and researchers interested in Convex Geometry and its links to analysis.
Multi-Period Trading via Convex Optimization
DEFF Research Database (Denmark)
Boyd, Stephen; Busseti, Enzo; Diamond, Steve
2017-01-01
We consider a basic model of multi-period trading, which can be used to evaluate the performance of a trading strategy. We describe a framework for single-period optimization, where the trades in each period are found by solving a convex optimization problem that trades off expected return, risk, transaction cost and holding cost such as the borrowing cost for shorting assets. We then describe a multi-period version of the trading method, where optimization is used to plan a sequence of trades, with only the first one executed, using estimates of future quantities that are unknown when the trades are chosen. In this paper, we do not address a critical component in a trading algorithm, the predictions or forecasts of future quantities. The methods we describe in this paper can be thought of as good ways to exploit predictions, no matter how they are made. We have also developed a companion open-source software library.
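Stripped of transaction and holding costs and constraints, the single-period core of such a framework has a closed form, which makes a useful sanity check. The sketch below is a plain-NumPy illustration with entirely hypothetical returns and covariance, not the paper's full model.

```python
import numpy as np

# Single-period mean-variance trade-off without costs or constraints:
#   maximize  mu^T w - gamma * w^T Sigma w
# Setting the gradient to zero gives  w* = Sigma^{-1} mu / (2 gamma).
mu = np.array([0.05, 0.08, 0.03])            # hypothetical expected returns
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.08]])       # hypothetical return covariance
gamma = 5.0                                  # risk-aversion parameter

w = np.linalg.solve(2.0 * gamma * Sigma, mu) # optimal portfolio weights
```

Adding transaction costs, holding costs, or constraints removes the closed form, which is exactly where the convex-optimization formulation of the paper takes over.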
The Oxford SWIFT integral field spectrograph
Thatte, Niranjan; Tecza, Matthias; Clarke, Fraser; Goodsall, Timothy; Lynn, James; Freeman, David; Davies, Roger L.
2006-06-01
We present the design of the Oxford SWIFT integral field spectrograph, a dedicated I and z band instrument (0.65 μm - 1.0 μm at R~4000), designed to be used in conjunction with the Palomar laser guide star adaptive optics system (PALAO, and its planned upgrade PALM-3000). It builds on two recent developments (i) the improved ability of second generation adaptive optics systems to correct for atmospheric turbulence at wavelengths less than or equal to 1 μm, and (ii) the availability of CCD array detectors with high quantum efficiency at very red wavelengths (close to the silicon band edge). Combining these with a state-of-the-art integral field unit design using an all-glass image slicer, SWIFT's design provides very high throughput and low scattered light. SWIFT simultaneously provides spectra of ~4000 spatial elements, arranged in a rectangular field-of-view of 44 × 89 pixels. It has three on-the-fly selectable pixel scales of 0.24", 0.16" and 0.08". First light is expected in spring 2008.
Conditionally exponential convex functions on locally compact groups
International Nuclear Information System (INIS)
Okb El-Bab, A.S.
1992-09-01
The main results of the thesis are: 1) The construction of a compact base for the convex cone of all conditionally exponential convex functions. 2) The determination of the extreme parts of this cone. Some supplementary lemmas are proved for this purpose. (author). 8 refs
Approximate convex hull of affine iterated function system attractors
International Nuclear Information System (INIS)
Mishkinis, Anton; Gentil, Christian; Lanquetin, Sandrine; Sokolov, Dmitry
2012-01-01
Highlights: ► We present an iterative algorithm to approximate affine IFS attractor convex hull. ► Elimination of the interior points significantly reduces the complexity. ► To optimize calculations, we merge the convex hull images at each iteration. ► Approximation by ellipses increases speed of convergence to the exact convex hull. ► We present a method of the output convex hull simplification. - Abstract: In this paper, we present an algorithm to construct an approximate convex hull of the attractors of an affine iterated function system (IFS). We construct a sequence of convex hull approximations for any required precision using the self-similarity property of the attractor in order to optimize calculations. Due to the affine properties of IFS transformations, the number of points considered in the construction is reduced. The time complexity of our algorithm is a linear function of the number of iterations and the number of points in the output approximate convex hull. The number of iterations and the execution time increases logarithmically with increasing accuracy. In addition, we introduce a method to simplify the approximate convex hull without loss of accuracy.
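A naive point-sampling alternative to the iterative algorithm above is to sample the attractor with the chaos game and take the exact convex hull of the samples. This is not the paper's method (which iterates on hull images and ellipse approximations), but it gives a quick baseline; the Sierpinski-triangle IFS below is a standard example.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Sierpinski-triangle IFS: three contractions of ratio 1/2 toward the corners.
corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
rng = np.random.default_rng(0)

p = np.array([0.1, 0.1])        # start inside the triangle
pts = []
for i in range(20000):
    c = corners[rng.integers(3)]
    p = 0.5 * (p + c)           # apply a randomly chosen affine map
    if i > 100:                 # discard the transient before convergence
        pts.append(p.copy())

hull = ConvexHull(np.array(pts))
# The exact hull of this attractor is the triangle itself, with area 0.5;
# hull.volume (the area, in 2-D) approaches 0.5 from below as samples grow.
```

The cost of this baseline grows with the number of samples, whereas the paper's approach exploits self-similarity to reach a prescribed precision directly.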
Entropy coherent and entropy convex measures of risk
Laeven, Roger; Stadje, M.A.
2010-01-01
We introduce entropy coherent and entropy convex measures of risk and prove a collection of axiomatic characterization and duality results. We show in particular that entropy coherent and entropy convex measures of risk emerge as negative certainty equivalents in (the regular and a generalized
Convexity-preserving Bernstein–Bézier quartic scheme
Directory of Open Access Journals (Sweden)
Maria Hussain
2014-07-01
Full Text Available A C1 convex surface data interpolation scheme is presented to preserve the shape of scattered data arranged over a triangular grid. A Bernstein–Bézier quartic function is used for interpolation. Lower bounds on the boundary and inner Bézier ordinates are determined to guarantee convexity of the surface. The developed scheme is flexible and involves more relaxed constraints.
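A one-dimensional analogue of the ingredients, as a hedged sketch (this is not the paper's surface scheme): a quartic Bernstein–Bézier function whose control ordinates have non-negative second differences, a standard sufficient condition, is itself convex, so bounding the ordinates is enough to preserve shape.

```python
from math import comb

def bezier4(b, t):
    """Evaluate a quartic Bernstein-Bezier function with ordinates b[0..4]."""
    return sum(comb(4, i) * t**i * (1 - t)**(4 - i) * b[i] for i in range(5))

# Example ordinates with non-negative second differences
# (a convex control polygon), hence a convex curve.
b = [0.0, 0.5, 1.5, 3.0, 5.0]
ys = [bezier4(b, t / 20) for t in range(21)]
```

The curve interpolates the end ordinates (ys[0] == b[0], ys[-1] == b[4]) and every uniformly sampled midpoint lies on or below the chord, the discrete signature of convexity.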
Convergence of Algorithms for Reconstructing Convex Bodies and Directional Measures
DEFF Research Database (Denmark)
Gardner, Richard; Kiderlen, Markus; Milanfar, Peyman
2006-01-01
We investigate algorithms for reconstructing a convex body K in Rn from noisy measurements of its support function or its brightness function in k directions u1, . . . , uk. The key idea of these algorithms is to construct a convex polytope Pk whose support function (or brightness function) best...
On approximation and energy estimates for delta 6-convex functions.
Saleem, Muhammad Shoaib; Pečarić, Josip; Rehman, Nasir; Khan, Muhammad Wahab; Zahoor, Muhammad Sajid
2018-01-01
The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted L2-norm.
On approximation and energy estimates for delta 6-convex functions
Directory of Open Access Journals (Sweden)
Muhammad Shoaib Saleem
2018-02-01
Full Text Available Abstract The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted L2-norm.
STRICT CONVEXITY THROUGH EQUIVALENT NORMS IN SEPARABLE BANACH SPACES
Directory of Open Access Journals (Sweden)
Willy Zubiaga Vera
2016-12-01
Full Text Available Let E be a separable Banach space with norm || . ||. In the present work, the objective is to construct a norm || . ||1 that is equivalent to || . || in E and strictly convex. In addition, it is shown that its dual norm is also strictly convex.
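A classical construction behind such results, sketched here for orientation (this is the standard Clarkson/Day-style renorming, not necessarily the norm built in the paper): since E is separable, one can pick a sequence of functionals in the dual unit ball that separates the points of E and blend them into the norm.

```latex
% Let (f_n) \subset B_{E^*} separate points of E. Define
\|x\|_1^2 \;=\; \|x\|^2 \;+\; \sum_{n=1}^{\infty} 2^{-n} f_n(x)^2 .
% Equivalence: \|x\|^2 \le \|x\|_1^2 \le 2\|x\|^2, since |f_n(x)| \le \|x\|.
% Strict convexity: equality \|x+y\|_1 = \|x\|_1 + \|y\|_1 forces equality in
% the \ell^2 triangle inequality for the sequences (2^{-n/2} f_n(x)) and
% (2^{-n/2} f_n(y)), so f_n(x) and f_n(y) are proportional for every n;
% since the f_n separate points, x and y are proportional.
```

The same blending trick underlies many renorming theorems: the original norm supplies equivalence, and the strictly convex Hilbertian tail supplies strict convexity.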
Fundamentals of convex analysis duality, separation, representation, and resolution
Panik, Michael J
1993-01-01
Fundamentals of Convex Analysis offers an in-depth look at some of the fundamental themes covered within an area of mathematical analysis called convex analysis. In particular, it explores the topics of duality, separation, representation, and resolution. The work is intended for students of economics, management science, engineering, and mathematics who need exposure to the mathematical foundations of matrix games, optimization, and general equilibrium analysis. It is written at the advanced undergraduate to beginning graduate level and the only formal preparation required is some familiarity with set operations and with linear algebra and matrix theory. Fundamentals of Convex Analysis is self-contained in that a brief review of the essentials of these tool areas is provided in Chapter 1. Chapter exercises are also provided. Topics covered include: convex sets and their properties; separation and support theorems; theorems of the alternative; convex cones; dual homogeneous systems; basic solutions and comple...
Vacuum Predisperser For A Large Plane-Grating Spectrograph
Engleman, R.; Palmer, B. A.; Steinhaus, D. W.
1980-11-01
A plane grating predisperser has been constructed which acts as an "order-sorter" for a large plane-grating spectrograph. This combination can photograph relatively wide regions of spectra in a single exposure with no loss of resolution.
Second generation spectrograph for the Hubble Space Telescope
Woodgate, B. E.; Boggess, A.; Gull, T. R.; Heap, S. R.; Krueger, V. L.; Maran, S. P.; Melcher, R. W.; Rebar, F. J.; Vitagliano, H. D.; Green, R. F.; Wolff, S. C.; Hutchings, J. B.; Jenkins, E. B.; Linsky, J. L.; Moos, H. W.; Roesler, F.; Shine, R. A.; Timothy, J. G.; Weistrop, D. E.; Bottema, M.; Meyer, W.
1986-01-01
The preliminary design for the Space Telescope Imaging Spectrograph (STIS), which has been selected by NASA for definition study for future flight as a second-generation instrument on the Hubble Space Telescope (HST), is presented. STIS is a two-dimensional spectrograph that will operate from 1050 Å to 11,000 Å at the limiting HST resolution of 0.05 arcsec FWHM, with spectral resolutions of 100, 1200, 20,000, and 100,000 and a maximum field of view of 50 x 50 arcsec. Its basic operating modes include echelle mode, long-slit mode, slitless spectrograph mode, coronagraphic spectroscopy, photon time-tagging, and direct imaging. Research objectives are active galactic nuclei, the intergalactic medium, global properties of galaxies, the origin of stellar systems, stellar spectral variability, and spectrographic mapping of solar system processes.
An integral field spectrograph utilizing mirrorlet arrays
Chamberlin, Phillip C.; Gong, Qian
2016-09-01
An integral field spectrograph (IFS) has been developed that utilizes a new and novel optical design to observe two spatial dimensions simultaneously with one spectral dimension. This design employs an optical 2-D array of reflecting and focusing mirrorlets. This mirrorlet array is placed at the imaging plane of the front-end telescope to generate a 2-D array of tiny spots replacing what would be the slit in a traditional slit spectrometer design. After the mirrorlet in the optical path, a grating on a concave mirror surface will image the spot array and provide high-resolution spectrum for each spatial element at the same time; therefore, the IFS simultaneously obtains the 3-D data cube of two spatial and one spectral dimensions. The new mirrorlet technology is currently in-house and undergoing laboratory testing at NASA Goddard Space Flight Center. Section 1 describes traditional classes of instruments that are used in Heliophysics missions and a quick introduction to the new IFS design. Section 2 discusses the details of the most generic mirrorlet IFS, while section 3 presents test results of a lab-based instrument. An example application to a Heliophysics mission to study solar eruptive events in extreme ultraviolet wavelengths is presented in section 4 that has high spatial resolution (0.5 arc sec pixels) in the two spatial dimensions and high spectral resolution (66 mÅ) across a 15 Å spectral window. Section 4 also concludes with some other optical variations that could be employed on the more basic IFS for further capabilities of this type of instrument.
An Integral Field Spectrograph Utilizing Mirrorlet Arrays
Chamberlin, Phillip C.; Gong, Qian
2016-01-01
An integral field spectrograph (IFS) has been developed that utilizes a new and novel optical design to observe two spatial dimensions simultaneously with one spectral dimension. This design employs an optical 2-D array of reflecting and focusing mirrorlets. This mirrorlet array is placed at the imaging plane of the front-end telescope to generate a 2-D array of tiny spots replacing what would be the slit in a traditional slit spectrometer design. After the mirrorlet in the optical path, a grating on a concave mirror surface will image the spot array and provide high-resolution spectrum for each spatial element at the same time; therefore, the IFS simultaneously obtains the 3-D data cube of two spatial and one spectral dimensions. The new mirrorlet technology is currently in-house and undergoing laboratory testing at NASA Goddard Space Flight Center. Section 1 describes traditional classes of instruments that are used in Heliophysics missions and a quick introduction to the new IFS design. Section 2 discusses the details of the most generic mirrorlet IFS, while section 3 presents test results of a lab-based instrument. An example application to a Heliophysics mission to study solar eruptive events in extreme ultraviolet wavelengths is presented in section 4 that has high spatial resolution (0.5 arc sec pixels) in the two spatial dimensions and high spectral resolution (66 mÅ) across a 15 Å spectral window. Section 4 also concludes with some other optical variations that could be employed on the more basic IFS for further capabilities of this type of instrument.
Decomposability and convex structure of thermal processes
Mazurek, Paweł; Horodecki, Michał
2018-05-01
We present an example of a thermal process (TP) for a system of d energy levels which cannot be performed without instant access to the whole energy space. This TP is uniquely connected with a transition between certain states of the system that cannot be performed without access to the whole energy space, even when approximate transitions are allowed. Pursuing the question of the decomposability of TPs into convex combinations of compositions of processes acting non-trivially on smaller subspaces, we investigate transitions within the subspace of states diagonal in the energy basis. For three-level systems, we determine the set of extremal points of these operations, as well as the minimal set of operations needed to perform an arbitrary TP, and connect the set of TPs with the thermomajorization criterion. We show that the structure of the set depends on temperature, which is associated with the fact that TPs cannot deterministically increase the extractable work from a state, a conclusion that holds for an arbitrary d-level system. We also connect the decomposability problem with the detailed balance symmetry of extremal TPs.
Designing Camera Networks by Convex Quadratic Programming
Ghanem, Bernard
2015-05-04
In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).
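A toy illustration of the binary quadratic programming (BQP) view of camera placement. Everything here is hypothetical: a 4-site example with a made-up pairwise reward matrix, solved by brute-force enumeration rather than the relaxation strategy the paper proposes.

```python
from itertools import combinations

# Hypothetical pairwise reward: Q[i][j] > 0 when sites i and j view the
# same important location from different directions; the diagonal is a
# per-site coverage score.
Q = [[1, 0, 2, 0],
     [0, 1, 0, 2],
     [2, 0, 1, 0],
     [0, 2, 0, 1]]

def score(sites):
    """x^T Q x for the 0/1 indicator vector selecting `sites`."""
    return sum(Q[i][j] for i in sites for j in sites)

# Choose k = 2 of the 4 candidate sites; n is tiny, so enumeration
# stands in for the convex BQP solver.
best = max(combinations(range(4), 2), key=score)
```

Under this reward matrix the optimum pairs up sites that see a location from opposite directions (sites {0, 2} or {1, 3}, both scoring 6), which is exactly the behavior the visibility constraints in the formulation are meant to encourage.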
Visualizing Data as Objects by DC (Difference of Convex) Optimization
DEFF Research Database (Denmark)
Carrizosa, Emilio; Guerrero, Vanesa; Morales, Dolores Romero
2018-01-01
In this paper we address the problem of visualizing in a bounded region a set of individuals, to which a dissimilarity measure and a statistical value are attached, as convex objects. This problem, which extends the standard Multidimensional Scaling Analysis, is written as a global optimization...... problem whose objective is the difference of two convex functions (DC). Suitable DC decompositions allow us to use the Difference of Convex Algorithm (DCA) in a very efficient way. Our algorithmic approach is used to visualize two real-world datasets....
The Mitchell Spectrograph: Studying Nearby Galaxies with the VIRUS Prototype
Directory of Open Access Journals (Sweden)
Guillermo A. Blanc
2013-01-01
Full Text Available The Mitchell Spectrograph (a.k.a. VIRUS-P) on the 2.7 m Harlan J. Smith telescope at McDonald Observatory is currently the largest field-of-view (FOV) integral field unit (IFU) spectrograph in the world (1.7′×1.7′). It was designed as a prototype for the highly replicable VIRUS spectrograph, which consists of a mosaic of IFUs spread over a 16′-diameter FOV feeding 150 spectrographs similar to the Mitchell. VIRUS will be deployed on the 9.2 m Hobby-Eberly Telescope (HET) and will be used to conduct the HET Dark Energy Experiment (HETDEX). Since seeing first light in 2007, the Mitchell Spectrograph has been widely used, among other things, to study nearby galaxies in the local universe, where their internal structure and the spatial distribution of different physical parameters can be studied in great detail. These observations have provided important insight into many aspects of the physics behind the formation and evolution of galaxies and have boosted the scientific impact of the 2.7 m telescope enormously. Here I review the contributions of the Mitchell Spectrograph to the study of nearby galaxies, from the investigation of the spatial distribution of dark matter and the properties of supermassive black holes to studies of the process of star formation and the chemical composition of stars and gas in the ISM, which provide important information regarding the formation and evolution of these systems. I highlight the fact that wide-field integral field spectrographs on small and medium-size telescopes can be powerful, cost-effective tools to study the astrophysics of galaxies. Finally, I briefly discuss the potential of HETDEX for conducting studies of nearby galaxies. The survey parameters make it complementary and competitive to ongoing and future surveys like SAMI and MaNGA.
A survey on locally uniformly A-convex algebras
International Nuclear Information System (INIS)
Oudadess, M.
1984-12-01
Using a bornological technique of M. Akkar, we reduce the study of classical questions (spectrum, boundedness of characters, functional calculus, etc.) in locally uniformly A-convex algebras to the Banach case. (author)
Lipschitz estimates for convex functions with respect to vector fields
Directory of Open Access Journals (Sweden)
Valentino Magnani
2012-12-01
Full Text Available We present Lipschitz continuity estimates for a class of convex functions with respect to Hörmander vector fields. These results have been recently obtained in collaboration with M. Scienza, [22].
A note on supercyclic operators in locally convex spaces
Albanese, Angela A.; Jornet, David
2018-01-01
We treat some questions related to supercyclicity of continuous linear operators when acting in locally convex spaces. We extend results of Ansari and Bourdon and consider doubly power bounded operators in this general setting. Some examples are given.
Convex solutions of systems arising from Monge-Ampere equations
Directory of Open Access Journals (Sweden)
Haiyan Wang
2009-10-01
Full Text Available We establish two criteria for the existence of convex solutions to a boundary value problem for weakly coupled systems arising from the Monge-Ampère equations. We shall use fixed point theorems in a cone.
Entropy and convexity for nonlinear partial differential equations.
Ball, John M; Chen, Gui-Qiang G
2013-12-28
Partial differential equations are ubiquitous in almost all applications of mathematics, where they provide a natural mathematical description of many phenomena involving change in physical, chemical, biological and social processes. The concept of entropy originated in thermodynamics and statistical physics during the nineteenth century to describe the heat exchanges that occur in the thermal processes in a thermodynamic system, while the original notion of convexity is for sets and functions in mathematics. Since then, entropy and convexity have become two of the most important concepts in mathematics. In particular, nonlinear methods via entropy and convexity have been playing an increasingly important role in the analysis of nonlinear partial differential equations in recent decades. This opening article of the Theme Issue is intended to provide an introduction to entropy, convexity and related nonlinear methods for the analysis of nonlinear partial differential equations. We also provide a brief discussion about the content and contributions of the papers that make up this Theme Issue.
Displacement Convexity for First-Order Mean-Field Games
Seneci, Tommaso
2018-05-01
In this thesis, we consider the planning problem for first-order mean-field games (MFG). These games degenerate into optimal transport when there is no coupling between players. Our aim is to extend the concept of displacement convexity from optimal transport to MFGs. This extension gives new estimates for solutions of MFGs. First, we introduce the Monge-Kantorovich problem and examine related results on rearrangement maps. Next, we present the concept of displacement convexity. Then, we derive first-order MFGs, which are given by a system of a Hamilton-Jacobi equation coupled with a transport equation. Finally, we identify a large class of functions, that depend on solutions of MFGs, which are convex in time. Among these, we find several norms. This convexity gives bounds for the density of solutions of the planning problem.
Multi-objective convex programming problem arising in multivariate ...
African Journals Online (AJOL)
user
Multi-objective convex programming problem arising in ... However, although the consideration of multiple objectives may seem a novel concept, virtually any nontrivial ..... Solving multiobjective programming problems by discrete optimization.
Surgical treatment of convexity focal epilepsy
International Nuclear Information System (INIS)
Shimizu, Hiroyuki; Ishijima, Buichi; Iio, Masaaki.
1987-01-01
We have hitherto applied PET studies in 72 epileptic patients. Their seizures were complex partial in 32, elementary partial in 32, generalized in 6, and of other types in 3 cases. For the study of CMRG (cerebral metabolic rate of glucose), we administered perorally 10 mCi of glucose labeled with C-11 produced in the JSW Baby Cyclotron. The continuous inhalation method of CO2 and O2 labeled with O-15 produced in the same cyclotron was also employed for measurement of rCBF (cerebral blood flow) and CMRO2 (cerebral metabolic rate of oxygen). In both studies, epileptic foci were shown as well-demarcated hypometabolic zones with decreased CMRG, rCBF or CMRO2. The locations of PET-diagnosed foci were not contradictory to the clinical symptoms, scalp EEGs or X-ray CT findings. Of the 32 patients with convexity epileptic foci, 8 patients underwent surgical treatment. Prior to the surgical intervention, subdural strip electrodes were inserted in four cases for further assessment of focus locations. Subdural EEG disclosed very active brain activity, with amplitudes 4 to 5 times those of scalp EEG, and revealed epileptiform discharges most of which were not detected by scalp recording. PET scans did not characterize the epileptogenic nature of a lesion. Subdural recording therefore was useful for detecting the foci responsible for habitual seizures in the cases with multiple PET foci. Ambiguous hypometabolic zones on PET images could also be confirmed by the subdural technique. Of the 8 operated cases, five patients are seizure free, one is significantly improved and two are not improved, although the postoperative follow-up is too short for precise evaluation. (J.P.N.)
Efficiency and Generalized Convex Duality for Nondifferentiable Multiobjective Programs
Directory of Open Access Journals (Sweden)
Bae KwanDeok
2010-01-01
Full Text Available We introduce nondifferentiable multiobjective programming problems involving the support function of a compact convex set and linear functions. The concept of (properly) efficient solutions is presented. We formulate Mond-Weir-type and Wolfe-type dual problems and establish weak and strong duality theorems for efficient solutions by using suitable generalized convexity conditions. Some special cases of our duality results are given.
Two examples of non strictly convex large deviations
De Marco, Stefano; Jacquier, Antoine; Roome, Patrick
2016-01-01
We present two examples of a large deviations principle where the rate function is not strictly convex. This is motivated by a model used in mathematical finance (the Heston model), and adds a new item to the zoology of non strictly convex large deviations. For one of these examples, we show that the rate function of the Cramér-type large deviations coincides with that of the Freidlin-Wentzell type when contraction principles are applied.
Dislocation dynamics in non-convex domains using finite elements with embedded discontinuities
Romero, Ignacio; Segurado, Javier; LLorca, Javier
2008-04-01
The standard strategy developed by Van der Giessen and Needleman (1995 Modelling Simul. Mater. Sci. Eng. 3 689) to simulate dislocation dynamics in two-dimensional finite domains was modified to account for the effect of dislocations leaving the crystal through a free surface in the case of arbitrary non-convex domains. The new approach incorporates the displacement jumps across the slip segments of the dislocations that have exited the crystal within the finite element analysis carried out to compute the image stresses on the dislocations due to the finite boundaries. This is done in a simple computationally efficient way by embedding the discontinuities in the finite element solution, a strategy often used in the numerical simulation of crack propagation in solids. Two academic examples are presented to validate and demonstrate the extended model and its implementation within a finite element program is detailed in the appendix.
Dislocation dynamics in non-convex domains using finite elements with embedded discontinuities
International Nuclear Information System (INIS)
Romero, Ignacio; Segurado, Javier; LLorca, Javier
2008-01-01
The standard strategy developed by Van der Giessen and Needleman (1995 Modelling Simul. Mater. Sci. Eng. 3 689) to simulate dislocation dynamics in two-dimensional finite domains was modified to account for the effect of dislocations leaving the crystal through a free surface in the case of arbitrary non-convex domains. The new approach incorporates the displacement jumps across the slip segments of the dislocations that have exited the crystal within the finite element analysis carried out to compute the image stresses on the dislocations due to the finite boundaries. This is done in a simple computationally efficient way by embedding the discontinuities in the finite element solution, a strategy often used in the numerical simulation of crack propagation in solids. Two academic examples are presented to validate and demonstrate the extended model and its implementation within a finite element program is detailed in the appendix
Decompositions, partitions, and coverings with convex polygons and pseudo-triangles
Aichholzer, O.; Huemer, C.; Kappes, S.; Speckmann, B.; Tóth, Cs.D.
2007-01-01
We propose a novel subdivision of the plane that consists of both convex polygons and pseudo-triangles. This pseudo-convex decomposition is significantly sparser than either convex decompositions or pseudo-triangulations for planar point sets and simple polygons. We also introduce pseudo-convex
Surgery for convexity/parasagittal/falx meningiomas
International Nuclear Information System (INIS)
Ochi, Takashi; Saito, Nobuhito
2013-01-01
The incidence of complications related to the surgical treatment of the meningiomas in the title was reviewed, together with data on follow-up observation and stereotactic radiosurgery. MEDLINE papers in English were searched online with the above keywords using the PubMed system. For the convexity meningioma, 50-141 cases (mean age, 48-58.9 y) with tumor sizes or volumes of 1.9-3.6 cm or 146.3 mL were reported in 6 papers (2006-2011), presenting 0% surgery-related death, 1-5.9% internal medical or 5.5-37.4% surgical complications, 0-2% postoperative hemorrhage, 0-15.4% neurological and 0-15.4% prolonged/permanent deficits. For the parasagittal/falx meningioma, 46-108 cases (age, 55-58 y) with 1.9-4 cm tumors were reported in 8 papers (2004-2011), presenting 0-5.7% death, 2-7.4% medical or 5.4-31% surgical complications, 0-3% hemorrhage, 0-15.4% neurologic and 0-15.4% prolonged deficits. For complications after radiosurgery of all 3 meningioma types, 41-832 cases (50-60 y) with tumors of 24.7-28 mm or 4.7-7.4 mL were reported in 8 papers (2003-2012), presenting a 6.8-26.8% incidence of radiation-related complications like headache, seizures and paralysis requiring steroid treatment, and 1.20 or 4.80% permanent morbidity. For the natural history of incidental meningiomas, including tentorium ones, 16-144 cases in 6 papers (2000-2012) revealed a growth rate/y of 1.9-3.9 mm or 0.54-1.15 mL. The outcome of surgical treatment of meningiomas, a representative benign tumor, was concluded to be rather good, as surgery was generally needed only when the disease became symptomatic due to tumor growth. (T.T.)
Spectrographic Determination of Trace Constituents in Rare Earths
International Nuclear Information System (INIS)
Capdevila, C.; Alvarez, F.
1962-01-01
A spectrographic method was developed for the determination of 18 trace elements in lanthanum, cerium, praseodymium, neodymium and samarium compounds. The concentrations of the impurities cover the range of 0.5 to 500 ppm. Most of these impurities are determined by the carrier distillation method. Several more refractory elements have been determined by total burning of the sample with a direct-current arc or by the conduction-briquet excitation technique with a high-voltage condensed spark. The work has been carried out with a Hilger Automatic Large Quartz Spectrograph. (Author) 5 refs
Using a new, free spectrograph program to critically investigate acoustics
Ball, Edward; Ruiz, Michael J.
2016-11-01
We have developed an online spectrograph program with a bank of over 30 audio clips to visualise a variety of sounds. Our audio library includes everyday sounds such as speech, singing, musical instruments, birds, a baby, cat, dog, sirens, a jet, thunder, and screaming. We provide a link to a video of the sound sources superimposed with their respective spectrograms in real time. Readers can use our spectrograph program to view our library, open their own desktop audio files, and use the program in real time with a computer microphone.
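The core of any spectrograph program is a short-time Fourier transform over windowed frames. The following stdlib-only sketch (my illustration, not the authors' implementation) computes naive DFT magnitudes per frame and locates the peak frequency bin of a synthetic test tone; frame and hop sizes are arbitrary choices.

```python
import cmath
import math

def spectrogram(signal, frame=256, hop=128):
    """Naive magnitude spectrogram: DFT of Hann-windowed frames."""
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / (frame - 1))
           for n in range(frame)]
    frames = []
    for start in range(0, len(signal) - frame + 1, hop):
        x = [signal[start + n] * win[n] for n in range(frame)]
        # Magnitudes for the first frame//2 (non-redundant) DFT bins.
        frames.append([abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / frame)
                               for n in range(frame)))
                       for k in range(frame // 2)])
    return frames

# A 1 kHz test tone sampled at 8 kHz: energy should peak at DFT bin
# f0 * frame / fs = 1000 * 256 / 8000 = 32.
fs, f0 = 8000, 1000
tone = [math.sin(2 * math.pi * f0 * n / fs) for n in range(512)]
S = spectrogram(tone)
peak_bin = max(range(len(S[0])), key=lambda k: S[0][k])
```

Stacking such frame spectra over time and mapping magnitude to color is exactly what the on-screen spectrogram displays; production code would use an FFT rather than this quadratic-time DFT.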
Lead shielded cells for the spectrographic analysis of radioisotope solutions
International Nuclear Information System (INIS)
Roca, M.; Capdevila, C.; Cruz, F. de la
1967-01-01
Two lead-shielded cells for the spectrochemical analysis of radioisotope samples are described. One of them is devoted to the evaporation of samples before excitation, and the other contains a suitable spectrographic excitation stand for the copper spark technique. A special device running on wheels and rails allows easy displacement of the excitation cell for accurate and reproducible positioning, as well as its replacement by a glove box for plutonium analysis. To guarantee safety, the room in which the spectrograph and the source are set up is separated from the active laboratory by a wall with a suitable window. (Author) 1 refs
Convex unwraps its first grown-up supercomputer
Energy Technology Data Exchange (ETDEWEB)
Manuel, T.
1988-03-03
Convex Computer Corp.'s new supercomputer family is even more of an industry blockbuster than its first system. At a tenfold jump in performance, it's far from just an incremental upgrade over its first minisupercomputer, the C-1. The heart of the new family, the new C-2 processor, churning at 50 million floating-point operations/s, spawns a group of systems whose performance could pass for some fancy supercomputers, namely those of the Cray Research Inc. family. When added to the C-1, Convex's five new supercomputers create the C series, a six-member product group offering a performance range from 20 to 200 Mflops. They mark an important transition for Convex from a one-product high-tech startup to a multinational company with a wide-ranging product line. It's a tough transition but the Richardson, Texas, company seems to be doing it. The extended product line propels Convex into the upper end of the minisupercomputer class and nudges it into the low end of the big supercomputers. It positions Convex in an uncrowded segment of the market in the $500,000 to $1 million range offering 50 to 200 Mflops of performance. The company is making this move because the minisuper area, which it pioneered, quickly became crowded with new vendors, causing prices and gross margins to drop drastically.
Inhibitory competition in figure-ground perception: context and convexity.
Peterson, Mary A; Salvagio, Elizabeth
2008-12-15
Convexity has long been considered a potent cue as to which of two regions on opposite sides of an edge is the shaped figure. Experiment 1 shows that for a single edge, there is only a weak bias toward seeing the figure on the convex side. Experiments 1-3 show that the bias toward seeing the convex side as figure increases as the number of edges delimiting alternating convex and concave regions increases, provided that the concave regions are homogeneous in color. The results of Experiments 2 and 3 rule out a probability summation explanation for these context effects. Taken together, the results of Experiments 1-3 show that the homogeneity versus heterogeneity of the convex regions is irrelevant. Experiment 4 shows that homogeneity of alternating regions is not sufficient for context effects; a cue that favors the perception of the intervening regions as figures is necessary. Thus homogeneity alone does not operate as a background cue. We interpret our results within a model of figure-ground perception in which shape properties on opposite sides of an edge compete for representation and the competitive strength of weak competitors is further reduced when they are homogeneous.
WAS: the data archive for the WEAVE spectrograph
Guerra, Jose; Molinari, Emilio; Lodi, Marcello; Martin, Adrian; Dalton, Gavin B.; Trager, Scott C.; Jin, Shoko; Abrams, Don Carlos; Bonifacio, Piercarlo; López Aguerri, Jose Alfonso; Vallenari, Antonella; Carrasco Licea, Esperanza E.; Middleton, Kevin F.
2016-01-01
The WAS (WEAVE Archive System) is a software architecture for archiving and delivering the data releases for the WEAVE instrument at the WHT (William Herschel Telescope). The WEAVE spectrograph will be mounted at the 4.2-m WHT telescope and will provide millions of spectra in a 5-year program, starting
Spectrographic determination of impurities in copper and copper oxide
International Nuclear Information System (INIS)
Sabato, S.F.; Lordello, A.R.
1990-11-01
An emission spectrographic method for the determination of Al, Bi, Ca, Cd, Cr, Fe, Ge, Mg, Mn, Mo, Ni, Pb, Sb, Si, Sn and Zn in copper and copper oxide is described. Two mixtures (graphite and ZnO; graphite and GeO2) were used as buffers. The standard deviation lies around 10%. (author)
The spectrographic orbit of the eclipsing binary HH Carinae
International Nuclear Information System (INIS)
Mandrini, C.H.; Mendez, R.H.; Niemela, V.S.; Ferrer, O.E.
1985-01-01
We present a radial velocity study of the eclipsing binary system HH Carinae, and determine for the first time its spectrographic orbital elements. Using the results of a previous photometric study by Soderhjelm, we also determine the values of the masses and dimensions of the binary components. (author)
Spectrographical method for determining temperature variations of cosmic rays
International Nuclear Information System (INIS)
Dorman, L.I.; Krest'yannikov, Yu.Ya.; AN SSSR, Irkutsk. Sibirskij Inst. Zemnogo Magnetizma Ionosfery i Rasprostraneniya Radiovoln)
1977-01-01
A spectrographic method for determining the temperature variations (δJ^μ/J^μ)_T of cosmic rays is proposed. The value of (δJ^μ/J^μ)_T is determined from three equations for neutron supermonitors and the equation for the muon component of cosmic rays. It is assumed that all the observation data include corrections for the barometric effect. No temperature effect is observed in the neutron component. To improve the reliability and accuracy of the results obtained, the surface area of the existing devices and the number of spectrographic equations should be increased as compared with the number of unknown values. The value of (δJ^μ/J^μ)_T for the time instants when aerological probing was carried out was determined from the data of observations of cosmic rays with the aid of the spectrographic complex of devices of SibIZMIR. The r.m.s. dispersion of the difference is about 0.2%, which agrees with the expected dispersion. The agreement obtained can be regarded as independent proof of the correctness of the theory of meteorological effects of cosmic rays. With the existing detection accuracy the spectrographic method can be used for determining the hourly values of temperature corrections for the muon component
Detection Of Alterations In Audio Files Using Spectrograph Analysis
Directory of Open Access Journals (Sweden)
Anandha Krishnan G
2015-08-01
Full Text Available This study was carried out to detect alterations in audio files using spectrograph analysis. An audio file format is a file format for storing digital audio data on a computer system. A sound spectrograph is a laboratory instrument that displays a graphical representation of the strengths of the various component frequencies of a sound as time passes. The objectives of the study were to examine the changes in the spectrograph of an audio file after altering it, to compare those changes with the spectrograph of the original file, and to check for similarities and differences between the MP3 and WAV formats. Five different alterations were carried out on each audio file to analyse the differences between the original and the altered file. For the cut-copy alteration, the MP3 or WAV file was opened in Audacity and a different audio segment was pasted into it; the resulting file was analysed to view the differences. Noise reduction was performed by adjusting the necessary parameters, and the differences between the new file and the original were analysed; further changes were made by adjusting the parameters in the relevant dialog boxes. Each edited audio file was opened in the software Spek, which produces a spectrogram of that particular file; the graph was saved for further analysis. The graph of the original audio was then compared with the graph of the edited file to identify the alterations.
Spectrographic determination of lithium in nuclear grade calcium
International Nuclear Information System (INIS)
Artaud, J.; Cittanova, J.
1957-01-01
A method is described for the spectrographic determination of lithium in calcium. The samples are converted directly to CaCO3. A method of fractional distillation in the arc, using KCl as carrier, makes it possible to detect and measure the Li content down to 0.1 ppm. (author) [fr]
Dose evaluation from multiple detector outputs using convex optimisation
International Nuclear Information System (INIS)
Hashimoto, M.; Iimoto, T.; Kosako, T.
2011-01-01
A dose evaluation using multiple radiation detectors can be improved by the convex optimisation method, which enables flexible dose evaluation corresponding to the actual radiation energy spectrum. An application to neutron ambient dose equivalent evaluation is investigated using a mixed-gas proportional counter. The convex optimisation yields the neutron ambient dose equivalent within a definite interval corresponding to the true neutron energy spectrum; the width of the evaluated dose range is comparable to the error of conventional neutron dose measurement equipment. An application to neutron individual dose equivalent measurement is also investigated. Convex combinations of particular dosemeter outputs evaluate the individual dose equivalent better than the dose evaluation of a single dosemeter. Combinations of dosemeters whose response characteristics have high orthogonality tend to be well suited for dose evaluation. (authors)
Convexity and concavity constants in Lorentz and Marcinkiewicz spaces
Kaminska, Anna; Parrish, Anca M.
2008-07-01
We provide here the formulas for the q-convexity and q-concavity constants for function and sequence Lorentz spaces associated to either decreasing or increasing weights. This also yields the formula for the q-convexity constants in function and sequence Marcinkiewicz spaces. In this paper we extend and enhance the results from [G.J.O. Jameson, The q-concavity constants of Lorentz sequence spaces and related inequalities, Math. Z. 227 (1998) 129-142] and [A. Kaminska, A.M. Parrish, The q-concavity and q-convexity constants in Lorentz spaces, in: Banach Spaces and Their Applications in Analysis, Conference in Honor of Nigel Kalton, May 2006, Walter de Gruyter, Berlin, 2007, pp. 357-373].
Transient disturbance growth in flows over convex surfaces
Karp, Michael; Hack, M. J. Philipp
2017-11-01
Flows over curved surfaces occur in a wide range of applications including airfoils, compressor and turbine vanes as well as aerial, naval and ground vehicles. In most of these applications the surface has convex curvature, while concave surfaces are less common. Since monotonic boundary-layer flows over convex surfaces are exponentially stable, they have received considerably less attention than flows over concave walls which are destabilized by centrifugal forces. Non-modal mechanisms may nonetheless enable significant disturbance growth which can make the flow susceptible to secondary instabilities. A parametric investigation of the transient growth and secondary instability of flows over convex surfaces is performed. The specific conditions yielding the maximal transient growth and strongest instability are identified. The effect of wall-normal and spanwise inflection points on the instability process is discussed. Finally, the role and significance of additional parameters, such as the geometry and pressure gradient, is analyzed.
A working-set framework for sequential convex approximation methods
DEFF Research Database (Denmark)
Stolpe, Mathias
2008-01-01
We present an active-set algorithmic framework intended as an extension to existing implementations of sequential convex approximation methods for solving nonlinear inequality constrained programs. The framework is independent of the choice of approximations and the stabilization technique used to guarantee global convergence of the method. The algorithm works directly on the nonlinear constraints in the convex sub-problems and solves a sequence of relaxations of the current sub-problem. The algorithm terminates with the optimal solution to the sub-problem after solving a finite number of relaxations.
Convex Hull Abstraction in Specialisation of CLP Programs
DEFF Research Database (Denmark)
Peralta, J.C.; Gallagher, John Patrick
2003-01-01
We introduce an abstract domain consisting of atomic formulas constrained by linear arithmetic constraints (or convex hulls). This domain is used in an algorithm for specialization of constraint logic programs. The algorithm incorporates in a single phase both top-down goal-directed propagation and bottom-up answer propagation, and uses a widening on the convex hull domain to ensure termination. We give examples to show the precision gained by this approach over other methods in the literature for specializing constraint logic programs. The specialization method can also be used for ordinary logic programs.
Closedness type regularity conditions in convex optimization and beyond
Directory of Open Access Journals (Sweden)
Sorin-Mihai Grad
2016-09-01
Full Text Available The closedness type regularity conditions have proven during the last decade to be viable alternatives to their more restrictive interiority type counterparts, in both convex optimization and different areas where it was successfully applied. In this review article we de- and reconstruct some closedness type regularity conditions formulated by means of epigraphs and subdifferentials, respectively, for general optimization problems in order to stress that they arise naturally when dealing with such problems. The results are then specialized for constrained and unconstrained convex optimization problems. We also hint towards other classes of optimization problems where closedness type regularity conditions were successfully employed and discuss other possible applications of them.
Distribution functions of sections and projections of convex bodies
Kim, Jaegil; Yaskin, Vladyslav; Zvavitch, Artem
2015-01-01
Typically, when we are given the section (or projection) function of a convex body, it means that in each direction we know the size of the central section (or projection) perpendicular to this direction. Suppose now that we can only get the information about the sizes of sections (or projections), and not about the corresponding directions. In this paper we study to what extent the distribution function of the areas of central sections (or projections) of a convex body can be used to derive ...
Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.
Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit
2013-08-01
In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.
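For orientation, Floater's mean value coordinate formula w_i = (tan(α_{i-1}/2) + tan(α_i/2)) / ‖p − v_i‖ (normalized to sum to one) can be sketched directly in Python; this is a hedged standalone illustration, and the gradient bounds and interpolation error estimates of the paper are not reproduced here.

```python
import math

# Hypothetical sketch of mean value coordinates for a strictly interior
# point p of a convex polygon with counter-clockwise vertices.

def mean_value_coords(verts, p):
    """Return weights lambda_i with sum 1 and sum lambda_i * v_i = p."""
    n = len(verts)
    def angle(i, j):
        # Signed angle at p from verts[i] to verts[j] (positive for CCW order).
        ui = (verts[i][0] - p[0], verts[i][1] - p[1])
        uj = (verts[j][0] - p[0], verts[j][1] - p[1])
        return math.atan2(ui[0]*uj[1] - ui[1]*uj[0], ui[0]*uj[0] + ui[1]*uj[1])
    w = []
    for i in range(n):
        a_prev = angle((i - 1) % n, i)   # angle at p in triangle (v_{i-1}, p, v_i)
        a_next = angle(i, (i + 1) % n)   # angle at p in triangle (v_i, p, v_{i+1})
        r = math.dist(p, verts[i])
        w.append((math.tan(a_prev / 2) + math.tan(a_next / 2)) / r)
    s = sum(w)
    return [wi / s for wi in w]

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
lam = mean_value_coords(square, (0.5, 0.5))   # symmetric point: all weights 0.25
```

The partition-of-unity and linear-precision properties used in the paper's analysis can be checked numerically at any interior point.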
Relaxation Methods for Strictly Convex Regularizations of Piecewise Linear Programs
International Nuclear Information System (INIS)
Kiwiel, K. C.
1998-01-01
We give an algorithm for minimizing the sum of a strictly convex function and a convex piecewise linear function. It extends several dual coordinate ascent methods for large-scale linearly constrained problems that occur in entropy maximization, quadratic programming, and network flows. In particular, it may solve exact penalty versions of such (possibly inconsistent) problems, and subproblems of bundle methods for nondifferentiable optimization. It is simple, can exploit sparsity, and in certain cases is highly parallelizable. Its global convergence is established in the recent framework of B-functions (generalized Bregman functions).
The selection problem for discounted Hamilton–Jacobi equations: some non-convex cases
Gomes, Diogo A.; Mitake, Hiroyoshi; Tran, Hung V.
2018-01-01
Here, we study the selection problem for the vanishing discount approximation of non-convex, first-order Hamilton–Jacobi equations. While the selection problem is well understood for convex Hamiltonians, the selection problem for non-convex Hamiltonians has thus far not been studied. We begin our study by examining a generalized discounted Hamilton–Jacobi equation. Next, using an exponential transformation, we apply our methods to strictly quasi-convex and to some non-convex Hamilton–Jacobi equations. Finally, we examine a non-convex Hamiltonian with flat parts to which our results do not directly apply. In this case, we establish the convergence by a direct approach.
The selection problem for discounted Hamilton–Jacobi equations: some non-convex cases
Gomes, Diogo A.
2018-01-26
Here, we study the selection problem for the vanishing discount approximation of non-convex, first-order Hamilton–Jacobi equations. While the selection problem is well understood for convex Hamiltonians, the selection problem for non-convex Hamiltonians has thus far not been studied. We begin our study by examining a generalized discounted Hamilton–Jacobi equation. Next, using an exponential transformation, we apply our methods to strictly quasi-convex and to some non-convex Hamilton–Jacobi equations. Finally, we examine a non-convex Hamiltonian with flat parts to which our results do not directly apply. In this case, we establish the convergence by a direct approach.
Directory of Open Access Journals (Sweden)
2006-01-01
Full Text Available We consider the problem of minimizing a convex separable logarithmic function over a region defined by a convex inequality constraint or linear equality constraint, and two-sided bounds on the variables (box constraints). Such problems are interesting from both a theoretical and a practical point of view because they arise in some mathematical programming problems as well as in various practical problems such as problems of production planning and scheduling, allocation of resources, decision making, facility location problems, and so forth. Polynomial algorithms are proposed for solving problems of this form and their convergence is proved. Some examples and results of numerical experiments are also presented.
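The constraint structure above (one linear equality plus box constraints) lends itself to multiplier-based methods. The sketch below is a hedged illustration, not the paper's polynomial algorithm: it minimizes the simpler separable quadratic Σ 0.5(x_i − c_i)² under the same constraints by bisection on the Lagrange multiplier of the equality constraint.

```python
# Hedged sketch (not the paper's algorithm): separable convex minimization of
#   sum_i 0.5*(x_i - c_i)^2   s.t.   sum_i x_i = total,   lo_i <= x_i <= hi_i.
# The KKT conditions give x_i(lam) = clip(c_i - lam, lo_i, hi_i), and
# sum_i x_i(lam) is non-increasing in lam, so one scalar bisection suffices.

def clip(v, lo, hi):
    return max(lo, min(hi, v))

def solve_separable(c, lo, hi, total, iters=100):
    def x_of(lam):
        return [clip(ci - lam, l, h) for ci, l, h in zip(c, lo, hi)]
    a = min(ci - h for ci, h in zip(c, hi)) - 1.0   # here sum(x_of(a)) = sum(hi)
    b = max(ci - l for ci, l in zip(c, lo)) + 1.0   # here sum(x_of(b)) = sum(lo)
    for _ in range(iters):
        m = 0.5 * (a + b)
        if sum(x_of(m)) > total:
            a = m
        else:
            b = m
    return x_of(0.5 * (a + b))

# Feasible toy example: the optimum is [0, 1, 2] (multiplier lam = 1).
x = solve_separable([1.0, 2.0, 3.0], [0.0]*3, [2.0]*3, 3.0)
```

Bisection halves the bracket each pass, so 100 iterations far exceed double precision; the paper's algorithms handle the logarithmic objective with proven polynomial complexity.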
On the polarizability dyadics of electrically small, convex objects
Lakhtakia, Akhlesh
1993-11-01
This communication on the polarizability dyadics of electrically small objects of convex shapes has been prompted by a recent paper published by Sihvola and Lindell on the polarizability dyadic of an electrically gyrotropic sphere. A mini-review of recent work on polarizability dyadics is appended.
Riemann solvers and undercompressive shocks of convex FPU chains
International Nuclear Information System (INIS)
Herrmann, Michael; Rademacher, Jens D M
2010-01-01
We consider FPU-type atomic chains with general convex potentials. The naive continuum limit in the hyperbolic space–time scaling is the p-system of mass and momentum conservation. We systematically compare Riemann solutions to the p-system with numerical solutions to discrete Riemann problems in FPU chains, and argue that the latter can be described by modified p-system Riemann solvers. We allow the flux to have a turning point, and observe a third type of elementary wave (conservative shocks) in the atomistic simulations. These waves are heteroclinic travelling waves and correspond to non-classical, undercompressive shocks of the p-system. We analyse such shocks for fluxes with one or more turning points. Depending on the convexity properties of the flux we propose FPU-Riemann solvers. Our numerical simulations confirm that Lax shocks are replaced by so-called dispersive shocks. For convex–concave flux we provide numerical evidence that convex FPU chains follow the p-system in generating conservative shocks that are supersonic. For concave–convex flux, however, the conservative shocks of the p-system are subsonic and do not appear in FPU-Riemann solutions
On the Convexity of Step out - Step in Sequencing Games
Musegaas, Marieke; Borm, Peter; Quant, Marieke
2016-01-01
The main result of this paper is the convexity of Step out - Step in (SoSi) sequencing games, a class of relaxed sequencing games first analyzed by Musegaas, Borm, and Quant (2015). The proof makes use of a polynomial time algorithm determining the value and an optimal processing order for an
Preconditioning 2D Integer Data for Fast Convex Hull Computations.
Cadenas, José Oswaldo; Megson, Graham M; Luengo Hendriks, Cris L
2016-01-01
In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved.
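As a hedged sketch of the kind of O(n log n) hull builder such a preconditioned point set would be fed into, here is the textbook Andrew's monotone chain algorithm in Python; it is not the authors' integer-grid preconditioning procedure.

```python
# Textbook 2D convex hull via Andrew's monotone chain, O(n log n).
# Generic illustration only; the paper's O(n) preconditioning step that
# reduces n points to a simple polygonal chain is not reproduced here.

def cross(o, a, b):
    """Cross product (a-o) x (b-o); positive for a counter-clockwise turn."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):
    """Return hull vertices in counter-clockwise order, collinear points removed."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints shared, drop duplicates
```

Because the preconditioned points already form a simple polygonal chain, the sorting step here is exactly what the paper's method makes unnecessary.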
Convex relationships in ecosystems containing mixtures of trees and grass
CSIR Research Space (South Africa)
Scholes, RJ
2003-12-01
Full Text Available The relationship between grass production and the quantity of trees in mixed tree-grass ecosystems (savannas) is convex for all or most of its range. In other words, the grass production declines more steeply per unit increase in tree quantity...
Positive definite functions and dual pairs of locally convex spaces
Directory of Open Access Journals (Sweden)
Daniel Alpay
2018-01-01
Full Text Available Using pairs of locally convex topological vector spaces in duality and topologies defined by directed families of sets bounded with respect to the duality, we prove general factorization theorems and general dilation theorems for operator-valued positive definite functions.
Intracranial Convexity Lipoma with Massive Calcification: Case Report
Energy Technology Data Exchange (ETDEWEB)
Kim, Eung Tae; Park, Dong Woo; Ryu, Jeong Ah; Park, Choong Ki; Lee, Young Jun; Lee, Seung Ro [Dept. of Radiology, Hanyang University College of Medicine, Seoul (Korea, Republic of)
2011-12-15
Intracranial lipoma is a rare entity, accounting for less than 0.5% of intracranial tumors, which usually develops in the callosal cisterns. We report a case of lipoma in an unusual location, the high parietal convexity, with massive calcification and no underlying vascular malformation or congenital anomaly.
A duality recipe for non-convex variational problems
Bouchitté, Guy; Phan, Minh
2018-03-01
The aim of this paper is to present a general convexification recipe that can be useful for studying non-convex variational problems. In particular, this allows us to treat such problems by using a powerful primal-dual scheme. Possible further developments and open issues are given.
A note on the nucleolus for 2-convex TU games
Driessen, Theo; Hou, D.
For 2-convex n-person cooperative TU games, the nucleolus is determined as some type of constrained equal award rule. Its proof is based on Maschler, Peleg, and Shapley’s geometrical characterization for the intersection of the prekernel with the core. Pairwise bargaining ranges within the core are
A convex optimization approach for solving large scale linear systems
Directory of Open Access Journals (Sweden)
Debora Cores
2017-01-01
Full Text Available The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations, regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose to use the low-cost, globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
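As a hedged illustration of the optimization-as-solver idea (not the authors' non-quadratic function or the SPG method), the sketch below solves a small symmetric positive definite system by steepest descent with exact line search on the quadratic f(x) = 0.5‖Ax − b‖².

```python
# Toy illustration: solve Ax = b by minimizing f(x) = 0.5*||Ax - b||^2 with
# steepest descent and exact line search. This is NOT the paper's convex
# function or the Spectral Projected Gradient method, only the shared idea
# of recasting a linear system as a convex minimization problem.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def solve_by_descent(A, b, iters=200):
    n = len(b)
    x = [0.0] * n
    At = [list(col) for col in zip(*A)]              # transpose of A
    for _ in range(iters):
        r = [ai - bi for ai, bi in zip(matvec(A, x), b)]   # residual Ax - b
        g = matvec(At, r)                            # gradient A^T (Ax - b)
        gg = sum(gi * gi for gi in g)
        if gg < 1e-24:                               # gradient ~ 0: converged
            break
        Ag = matvec(A, g)
        alpha = gg / sum(v * v for v in Ag)          # exact line-search step
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

# 2x2 SPD example with exact solution (1/11, 7/11).
x = solve_by_descent([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

The least-squares form works for rectangular A as well, which echoes the paper's extension to rectangular and feasibility problems.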
Transonic shock wave. Boundary layer interaction at a convex wall
Koren, B.; Bannink, W.J.
1984-01-01
A standard finite element procedure has been applied to the problem of transonic shock wave – boundary layer interaction at a convex wall. The method is based on the analytical Bohning-Zierep model, where the boundary layer is perturbed by a weak normal shock wave which shows a singular pressure
Computing Convex Coverage Sets for Faster Multi-Objective Coordination
Roijers, D.M.; Whiteson, S.; Oliehoek, F.A.
2015-01-01
In this article, we propose new algorithms for multi-objective coordination graphs (MO-CoGs). Key to the efficiency of these algorithms is that they compute a convex coverage set (CCS) instead of a Pareto coverage set (PCS). Not only is a CCS a sufficient solution set for a large class of problems,
Flat tori in three-dimensional space and convex integration.
Borrelli, Vincent; Jabrane, Saïd; Lazarus, Francis; Thibert, Boris
2012-05-08
It is well known that the curvature tensor is an isometric invariant of C^2 Riemannian manifolds. This invariant is at the origin of the rigidity observed in Riemannian geometry. In the mid 1950s, Nash amazed the world mathematical community by showing that this rigidity breaks down in regularity C^1. This unexpected flexibility has many paradoxical consequences, one of which is the existence of C^1 isometric embeddings of flat tori into Euclidean three-dimensional space. In the 1970s and 1980s, M. Gromov, revisiting Nash's results, introduced convex integration theory, offering a general framework to solve this type of geometric problem. In this research, we convert convex integration theory into an algorithm that produces isometric maps of flat tori. We provide an implementation of a convex integration process leading to images of an embedding of a flat torus. The resulting surface reveals a C^1 fractal structure: although the tangent plane is defined everywhere, the normal vector exhibits a fractal behavior. Isometric embeddings of flat tori may thus appear as a geometric occurrence of a structure that is simultaneously C^1 and fractal. Beyond these results, our implementation demonstrates that convex integration, a theory still confined to specialists, can produce computationally tractable solutions of partial differential relations.
An echelle spectrograph for middle ultraviolet solar spectroscopy from rockets.
Tousey, R; Purcell, J D; Garrett, D L
1967-03-01
An echelle grating spectrograph is ideal for use in a rocket when high resolution is required because it occupies a minimum of space. The instrument described covers the range 4000-2000 A with a resolution of 0.03 A. It was designed to fit into the solar biaxial pointing-control section of an Aerobee-150 rocket. The characteristics of the spectrograph are illustrated with laboratory spectra of iron and carbon arc sources and with solar spectra obtained during rocket flights in 1961 and 1964. Problems encountered in analyzing the spectra are discussed. The most difficult design problem was the elimination of stray light when the instrument is used with the sun. Of the several methods investigated, the most effective was a predispersing system in the form of a zero-dispersion double monochromator. This was made compact by folding the beam four times.
Spectrographic determination of impurities in uranium tetrafluoride matrices
International Nuclear Information System (INIS)
Reino, Luiz Carlos de Paula
1980-01-01
A direct spectrographic method for the determination of impurities in UF4 was developed. Investigations using spectrochemical carriers were carried out so as to avoid uranium distillation, since the fluoride is much more volatile than the refractory U3O8 matrix. The best results were obtained by using a mixture of MgO and NaCl carriers in the proportions of 20% and 10%, respectively, with respect to the UF4 matrix. An original spectrographic technique was introduced to avoid the projection of sample particles outside the electrode during excitation. This new technique is based on the addition of a small quantity of a 0.5% gelatinous solution on the UF4 tablet. The precision of the method was studied for each element analysed; the variation coefficients are within the range of 10 to 20%.
Ultraviolet spectrographs for thermospheric and ionospheric remote sensing
International Nuclear Information System (INIS)
Dymond, K.F.; McCoy, R.P.
1993-01-01
The Naval Research Laboratory (NRL) has been developing far- and extreme-ultraviolet spectrographs for remote sensing the Earth's upper atmosphere and ionosphere. The first of these sensors, called the Special Sensor Ultraviolet Limb Imager (SSULI), will be flying on the Air Force's Defense Meteorological Satellite Program (DMSP) block 5D3 satellites as an operational sensor in the 1997-2010 time frame. A second sensor, called the High-resolution ionospheric and Thermospheric Spectrograph (HITS), will fly in late 1995 on the Air Force Space Test Program's Advanced Research and Global Observation Satellite (ARGOS, also known as P91-1) as part of NRL's High Resolution Airglow and Auroral Spectroscopy (HIRAAS) experiment. Both of these instruments are compact and do not draw much power and would be good candidates for small satellite applications. The instruments and their capabilities are discussed. Possible uses of these instruments in small satellite applications are also presented
Spectrographic determination of trace impurities in reactor grade aluminium
International Nuclear Information System (INIS)
Chandola, L.C.; Machado, I.J.
1975-01-01
A spectrographic method enabling the determination of 21 trace impurities in aluminium oxide is described. The technique involves mixing the sample with graphite buffer in the ratio 1:1, loading it in a graphite electrode and arcing it for 30 sec in a 10 A dc arc against a pointed graphite cathode. The spectra are photographed on Ilford N.30 emulsion employing a large quartz spectrograph. The aluminium line at 2669.2 Å serves as the internal standard. The impurities determined are Ag, B, Bi, Cd, Co, Cr, Cu, Fe, Ga, In, Mg, Mo, Ni, Pb, Sb, Si, Sn, Ti, V and Zn. The sensitivity varies from 5 to 100 ppm and the precision from ±5 to ±22% for different elements. A method for converting aluminium metal to aluminium oxide is described. It is found that boron is not lost during this conversion. (author)
Spectrographic determination of impurities in uranium tetrafluoride matrices
International Nuclear Information System (INIS)
Reino, L.C.P.; Lordello, A.R.
1980-01-01
A direct spectrographic method for the determination of impurities in UF4 was developed. Investigations using spectrochemical carriers were carried out so as to avoid uranium distillation, since the fluoride is much more volatile than the refractory U3O8 matrix. The best results were obtained by using a mixture of MgO and NaCl carriers in the proportions of 20 and 10%, respectively, with respect to the UF4 matrix. An original spectrographic technique was introduced to avoid the projection of sample particles outside the electrode during excitation. This new technique is based on the addition of a small quantity of a 0.5% gelatinous solution on the UF4 tablet. The precision of the method was studied for each element analysed; the variation coefficients are within the range of 10 to 20%. (C.L.B.) [pt]
Proton polarimetry using an Enge split-pole spectrograph
Energy Technology Data Exchange (ETDEWEB)
Moss, J M; Brown, D R; Cornelius, W D [Texas Agricultural and Mechanical Univ., College Station (USA). Cyclotron Inst.
1976-05-15
A high-efficiency (4 × 10^-5 at A=0.4), high-resolution (150 keV) polarimeter used in conjunction with an Enge split-pole spectrograph is described. This device permits, for the first time, polarization transfer studies in elastic scattering. Spectra are shown for ^11B(p(pol),p(pol)')^11B (2.14 MeV) at E_p = 31 MeV.
A CCD fitted to the UV Prime spectrograph: Performance
International Nuclear Information System (INIS)
Boulade, O.
1986-10-01
A CCD camera was fitted to the 3.6 m French-Canadian telescope in Hawaii. Performance of the system and observations of elliptical galaxies (stellar content and galactic evolution in a cluster) and quasars (absorption lines in spectra) are reported. In spite of its only average resolution, the extremely fast optics of the UV spectrograph give good signal-to-noise ratios, enabling redshifts and velocity dispersions to be calculated with an accuracy better than 30 km/s [fr]
Short Run Profit Maximization in a Convex Analysis Framework
Directory of Open Access Journals (Sweden)
Ilko Vrankic
2017-03-01
Full Text Available In this article we analyse the short run profit maximization problem in a convex analysis framework. The goal is to deductively apply the results of convex analysis, which suit the unique structure of microeconomic phenomena, to the well-known short run profit maximization problem. In the primal optimization model the technology in the short run is represented by the short run production function, and the normalized profit function, which expresses profit in output units, is derived; in this approach the choice variable is the labour quantity. Alternatively, technology is represented by the real variable cost function, where costs are expressed in labour units, and the normalized profit function is derived, this time expressing profit in labour units; the choice variable in this approach is the quantity of production. The emphasis in these two perspectives of the primal approach is on the first order necessary conditions of both models, which are the consequence of enveloping the closed convex set describing the technology with its tangents. The dual model starts from the normalized profit function and recovers the production function, and alternatively the real variable cost function. In the first perspective of the dual approach the choice variable is the real wage, and in the second it is the real product price expressed in labour units. It is shown that interchanging variables and parameters leads to optimization models which give the same system of labour demand and product supply functions and their inverses. By deductively applying the results of convex analysis, comparative statics results are derived describing the firm's behaviour in the short run.
Solar glint suppression in compact planetary ultraviolet spectrographs
Davis, Michael W.; Cook, Jason C.; Grava, Cesare; Greathouse, Thomas K.; Gladstone, G. Randall; Retherford, Kurt D.
2015-08-01
Solar glint suppression is an important consideration in the design of compact photon-counting ultraviolet spectrographs. Southwest Research Institute developed the Lyman Alpha Mapping Project for the Lunar Reconnaissance Orbiter (launch in 2009), and the Ultraviolet Spectrograph on Juno (Juno-UVS, launch in 2011). Both of these compact spectrographs revealed minor solar glints in flight that did not appear in pre-launch analyses. These glints only appeared when their respective spacecraft were operating outside primary science mission parameters. Post-facto scattered light analysis verifies the geometries at which these glints occurred and why they were not caught during ground testing or nominal mission operations. The limitations of standard baffle design at near-grazing angles are discussed, as well as the importance of including surface scatter properties in standard stray light analyses when determining solar keep-out efficiency. In particular, the scattered light analysis of these two instruments shows that standard "one bounce" assumptions in baffle design are not always enough to prevent scattered sunlight from reaching the instrument focal plane. Future builds, such as JUICE-UVS, will implement improved scattered and stray light modeling early in the design phase to enhance capabilities in extended mission science phases, as well as optimize solar keep out volume.
SPRAT: Spectrograph for the Rapid Acquisition of Transients
Piascik, A. S.; Steele, Iain A.; Bates, Stuart D.; Mottram, Christopher J.; Smith, R. J.; Barnsley, R. M.; Bolton, B.
2014-07-01
We describe the development of a low cost, low resolution (R ~ 350), high throughput, long slit spectrograph covering visible (4000-8000 Å) wavelengths. The spectrograph has been developed for fully robotic operation with the Liverpool Telescope (La Palma). The primary aim is to provide rapid spectral classification of faint (V ~ 20) transient objects detected by projects such as Gaia, iPTF (intermediate Palomar Transient Factory), LOFAR, and a variety of high energy satellites. The design employs a volume phase holographic (VPH) transmission grating as the dispersive element combined with a prism pair (grism) in a linear optical path. One of two peak spectral sensitivities is selectable by rotating the grism. The VPH and prism combination and entrance slit are deployable, and when removed from the beam they allow the collimator/camera pair to re-image the target field onto the detector. This mode of operation provides automatic acquisition of the target onto the slit prior to spectrographic observation through World Coordinate System fitting. The selection and characterisation of optical components to maximise photon throughput is described together with performance predictions.
Hermite-Hadamard type inequality for φ{sub h}-convex stochastic processes
Energy Technology Data Exchange (ETDEWEB)
Sarıkaya, Mehmet Zeki, E-mail: sarikayamz@gmail.com [Department of Mathematics, Faculty of Science and Arts, Düzce University, Düzce (Turkey); Kiriş, Mehmet Eyüp, E-mail: kiris@aku.edu.tr [Department of Mathematics, Institute of Science and Arts, Afyon Kocatepe University, Afyonkarahisar (Turkey); Çelik, Nuri, E-mail: ncelik@bartin.edu.tr [Department of Statistics, Faculty of Science, Bartın University, Bartın-Turkey (Turkey)
2016-04-18
The main aim of the present paper is to introduce φ{sub h}-convex stochastic processes and to investigate the main properties of these mappings. Moreover, we prove Hadamard-type inequalities for φ{sub h}-convex stochastic processes. We also give some new general inequalities for φ{sub h}-convex stochastic processes.
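For context, these results generalize the classical (deterministic) Hermite-Hadamard inequality: for a convex function f on [a, b],

```latex
f\left(\frac{a+b}{2}\right) \;\le\; \frac{1}{b-a}\int_a^b f(x)\,dx \;\le\; \frac{f(a)+f(b)}{2}.
```

Roughly speaking, the stochastic-process versions replace the convex function f by a φ{sub h}-convex stochastic process and the inequality by one holding in the mean-square sense.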
A Convex Optimization Model and Algorithm for Retinex
Directory of Open Access Journals (Sweden)
Qing-Nan Zhao
2017-01-01
Retinex is a theory on simulating and explaining how the human visual system perceives colors under different illumination conditions. The main contribution of this paper is to put forward a new convex optimization model for Retinex. Different from existing methods, the main idea is to rewrite the multiplicative form such that the illumination variable and the reflection variable are decoupled in the spatial domain. The resulting objective function involves three terms: the Tikhonov regularization of the illumination component, the total variation regularization of the reciprocal of the reflection component, and the data-fitting term among the input image, the illumination component, and the reciprocal of the reflection component. We develop an alternating direction method of multipliers (ADMM) to solve the convex optimization model. Numerical experiments demonstrate the advantages of the proposed model, which can decompose an image into the illumination and the reflection components.
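The ADMM solver in this abstract is specific to the paper's Retinex model; as a minimal, self-contained sketch of the ADMM iteration pattern itself, here it is applied to the standard lasso problem (the problem choice, variable names and parameters are illustrative assumptions, not the paper's model):

```python
import numpy as np

def soft_threshold(v, k):
    """Elementwise soft-thresholding, the proximal operator of k*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM with the split x = z."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached for every x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))         # smooth (quadratic) subproblem
        z = soft_threshold(x + u, lam / rho)  # nonsmooth subproblem
        u += x - z                            # dual update on the constraint x = z
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = admm_lasso(A, b, lam=0.5)
```

The same alternation (smooth subproblem, proximal subproblem, dual ascent) is what the paper's solver performs on its illumination/reflection variables.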
Convexity and the Euclidean Metric of Space-Time
Directory of Open Access Journals (Sweden)
Nikolaos Kalogeropoulos
2017-02-01
We address the reasons why the “Wick-rotated”, positive-definite, space-time metric obeys the Pythagorean theorem. An answer is proposed based on the convexity and smoothness properties of the functional spaces purporting to provide the kinematic framework of approaches to quantum gravity. We employ moduli of convexity and smoothness which are eventually extremized by Hilbert spaces. We point out the potential physical significance that functional analytical dualities play in this framework. Following the spirit of the variational principles employed in classical and quantum Physics, such Hilbert spaces dominate in a generalized functional integral approach. The metric of space-time is induced by the inner product of such Hilbert spaces.
On the stretch factor of convex Delaunay graphs
Directory of Open Access Journals (Sweden)
Prosenjit Bose
2010-06-01
Let C be a compact and convex set in the plane that contains the origin in its interior, and let S be a finite set of points in the plane. The Delaunay graph DG_C(S) of S is defined to be the dual of the Voronoi diagram of S with respect to the convex distance function defined by C. We prove that DG_C(S) is a t-spanner for S, for some constant t that depends only on the shape of the set C. Thus, for any two points p and q in S, the graph DG_C(S) contains a path between p and q whose Euclidean length is at most t times the Euclidean distance between p and q.
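The spanner property is easy to probe numerically in the Euclidean special case (C a disk, so DG_C(S) is the classical Delaunay triangulation). This sketch, on assumed random inputs, measures the empirical stretch factor by comparing graph shortest paths against straight-line distances; it is an illustration, not part of the paper's proof:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(1)
pts = rng.random((60, 2))
tri = Delaunay(pts)

n = len(pts)
W = lil_matrix((n, n))
# Each simplex contributes its three edges, weighted by Euclidean length.
for a, b, c in tri.simplices:
    for i, j in ((a, b), (b, c), (a, c)):
        d = np.linalg.norm(pts[i] - pts[j])
        W[i, j] = d; W[j, i] = d

G = shortest_path(W.tocsr(), directed=False)          # graph distances
E = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)  # straight-line
mask = ~np.eye(n, dtype=bool)
stretch = np.max(G[mask] / E[mask])
print(f"empirical stretch factor: {stretch:.3f}")
```

For the Euclidean case the best known worst-case bound is below 2; the paper's contribution is that a finite bound t exists for every convex C, with t depending only on the shape of C.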
A Survey on Operator Monotonicity, Operator Convexity, and Operator Means
Directory of Open Access Journals (Sweden)
Pattrawut Chansangiam
2015-01-01
This paper is an expository survey devoted to an important class of real-valued functions introduced by Löwner, namely, operator monotone functions. This concept is closely related to operator convex/concave functions. Various characterizations for such functions are given from the viewpoint of differential analysis, in terms of matrices of divided differences. From the viewpoint of operator inequalities, various characterizations and the relationship between operator monotonicity and operator convexity are given by Hansen and Pedersen. From the viewpoint of measure theory, operator monotone functions on the nonnegative reals admit meaningful integral representations with respect to Borel measures on the unit interval. Furthermore, Kubo-Ando theory asserts the correspondence between operator monotone functions and operator means.
Convex variational problems linear, nearly linear and anisotropic growth conditions
Bildhauer, Michael
2003-01-01
The author emphasizes a non-uniform ellipticity condition as the main approach to regularity theory for solutions of convex variational problems with different types of non-standard growth conditions. This volume first focuses on elliptic variational problems with linear growth conditions. Here the notion of a "solution" is not obvious and the point of view has to be changed several times in order to get some deeper insight. Then the smoothness properties of solutions to convex anisotropic variational problems with superlinear growth are studied. In spite of the fundamental differences, a non-uniform ellipticity condition serves as the main tool towards a unified view of the regularity theory for both kinds of problems.
Moduli spaces of convex projective structures on surfaces
DEFF Research Database (Denmark)
Fock, V. V.; Goncharov, A. B.
2007-01-01
We introduce explicit parametrisations of the moduli space of convex projective structures on surfaces, and show that the latter moduli space is identified with the higher Teichmüller space defined in [V.V. Fock, A.B. Goncharov, Moduli spaces of local systems and higher Teichmüller theory, math.AG/0311149]. We investigate the cluster structure of this moduli space, and define its quantum version.
Constrained convex minimization via model-based excessive gap
Tran Dinh, Quoc; Cevher, Volkan
2014-01-01
We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct new primal-dual methods with optimal convergence rates on the objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-function selection strategy, our framework subsumes the augmented Lagrangian and alternating methods as special cases, where our rates apply.
Free locally convex spaces with a small base
Czech Academy of Sciences Publication Activity Database
Gabriyelyan, S.; Kąkol, Jerzy
2017-01-01
Roč. 111, č. 2 (2017), s. 575-585 ISSN 1578-7303 R&D Projects: GA ČR GF16-34860L Institutional support: RVO:67985840 Keywords : compact resolution * free locally convex space * G-base Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.690, year: 2016 http://link.springer.com/article/10.1007%2Fs13398-016-0315-1
A formulation of combinatorial auction via reverse convex programming
Directory of Open Access Journals (Sweden)
Henry Schellhorn
2005-01-01
of this problem, where orders are aggregated and integrality constraints are relaxed. It was proved that this problem could be solved efficiently in two steps by calculating two fixed points, first the fixed point of a contraction mapping, and then of a set-valued function. In this paper, we generalize the problem to incorporate constraints on maximum price changes between two auction rounds. This generalized problem cannot be solved by the aforementioned methods and necessitates reverse convex programming techniques.
Some fixed point theorems on non-convex sets
Directory of Open Access Journals (Sweden)
Mohanasundaram Radhakrishnan
2017-10-01
In this paper, we prove that if $K$ is a nonempty weakly compact set in a Banach space $X$, $T:K\to K$ is a nonexpansive map satisfying $\frac{x+Tx}{2}\in K$ for all $x\in K$, and if $X$ is $3$-uniformly convex or $X$ has the Opial property, then $T$ has a fixed point in $K$.
PENNON: A code for convex nonlinear and semidefinite programming
Czech Academy of Sciences Publication Activity Database
Kočvara, Michal; Stingl, M.
2003-01-01
Roč. 18, č. 3 (2003), s. 317-333 ISSN 1055-6788 R&D Projects: GA ČR GA201/00/0080 Grant - others:BMBF(DE) 03ZOM3ER Institutional research plan: CEZ:AV0Z1075907 Keywords : convex programming * semidefinite programming * large-scale problems Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.306, year: 2003
Convex Clustering: An Attractive Alternative to Hierarchical Clustering
Chen, Gary K.; Chi, Eric C.; Ranola, John Michael O.; Lange, Kenneth
2015-01-01
The primary goal in cluster analysis is to discover natural groupings of objects. The field of cluster analysis is crowded with diverse methods that make special assumptions about data and address different scientific aims. Despite its shortcomings in accuracy, hierarchical clustering is the dominant clustering method in bioinformatics. Biologists find the trees constructed by hierarchical clustering visually appealing and in tune with their evolutionary perspective. Hierarchical clustering operates on multiple scales simultaneously. This is essential, for instance, in transcriptome data, where one may be interested in making qualitative inferences about how lower-order relationships like gene modules lead to higher-order relationships like pathways or biological processes. The recently developed method of convex clustering preserves the visual appeal of hierarchical clustering while ameliorating its propensity to make false inferences in the presence of outliers and noise. The solution paths generated by convex clustering reveal relationships between clusters that are hidden by static methods such as k-means clustering. The current paper derives and tests a novel proximal distance algorithm for minimizing the objective function of convex clustering. The algorithm separates parameters, accommodates missing data, and supports prior information on relationships. Our program CONVEXCLUSTER incorporating the algorithm is implemented on ATI and nVidia graphics processing units (GPUs) for maximal speed. Several biological examples illustrate the strengths of convex clustering and the ability of the proximal distance algorithm to handle high-dimensional problems. CONVEXCLUSTER can be freely downloaded from the UCLA Human Genetics web site at http://www.genetics.ucla.edu/software/ PMID:25965340
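The convex clustering objective described above can be illustrated with a toy sketch (plain subgradient descent on a tiny dataset; this is not the paper's proximal distance algorithm or the CONVEXCLUSTER GPU code, and all parameter values are illustrative assumptions). It shows the key behavior: increasing the fusion penalty γ pulls the cluster centroids together along the solution path:

```python
import numpy as np

def convex_cluster(X, gamma, steps=3000, lr=0.02):
    """Subgradient descent on the convex clustering objective
       0.5 * sum_i ||x_i - u_i||^2 + gamma * sum_{i<j} ||u_i - u_j||."""
    U = X.copy()
    n = len(X)
    for _ in range(steps):
        grad = U - X                      # gradient of the data-fidelity term
        for i in range(n):
            for j in range(i + 1, n):
                diff = U[i] - U[j]
                nrm = np.linalg.norm(diff)
                if nrm > 1e-12:           # subgradient of the fusion penalty
                    g = diff / nrm
                    grad[i] += gamma * g
                    grad[j] -= gamma * g
        U -= lr * grad
    return U

rng = np.random.default_rng(2)
# Two well-separated groups of five points each.
X = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(3, 0.1, (5, 2))])
U_loose = convex_cluster(X, gamma=0.01)   # weak fusion: centroids near data
U_tight = convex_cluster(X, gamma=0.2)    # strong fusion: centroids coalesce
```

Sweeping γ from small to large traces out the solution path that reveals the hierarchical cluster structure mentioned in the abstract.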
Numerical modeling of isothermal compositional grading by convex splitting methods
Li, Yiteng
2017-04-09
In this paper, an isothermal compositional grading process is simulated based on convex splitting methods with the Peng-Robinson equation of state. We first present a new form of the gravity/chemical equilibrium condition by minimizing the total energy, which consists of the Helmholtz free energy and the gravitational potential energy, and incorporating Lagrange multipliers for mass conservation. The time-independent equilibrium equations are transformed into a system of transient equations as our solution strategy. It is proved that our time-marching scheme is unconditionally energy stable by the semi-implicit convex splitting method, in which the convex part of the Helmholtz free energy and its derivative are treated implicitly and the concave parts are treated explicitly. With a relaxation factor controlling the Newton iteration, our method is able to converge to a solution with satisfactory accuracy if a good initial estimate of mole compositions is provided. More importantly, it helps us automatically split the unstable single phase into two phases, determine the existence of a gas-oil contact (GOC) and locate its position if the GOC does exist. A number of numerical examples are presented to show the performance of our method.
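The unconditional energy stability of convex splitting can be illustrated on a far simpler gradient flow than the paper's Peng-Robinson model: the scalar double-well energy E(x) = x⁴/4 − x²/2, whose convex part (x⁴/4) is treated implicitly and concave part (−x²/2) explicitly. The step size below is deliberately huge; the energy still decreases monotonically (all values here are illustrative assumptions):

```python
def step(x, dt, newton_iters=50):
    """One convex-splitting step for dx/dt = -(x^3 - x):
       x_{n+1} + dt*x_{n+1}^3 = x_n + dt*x_n, solved by Newton's method."""
    rhs = x + dt * x
    y = x
    for _ in range(newton_iters):
        f = y + dt * y**3 - rhs
        y -= f / (1.0 + 3.0 * dt * y**2)
    return y

def energy(x):
    return 0.25 * x**4 - 0.5 * x**2

x, dt = 2.5, 10.0            # deliberately large time step
energies = [energy(x)]
for _ in range(20):
    x = step(x, dt)
    energies.append(energy(x))
```

A fully explicit scheme at dt = 10 would blow up immediately; the split scheme instead marches stably toward the energy minimizer x = 1, which is the point of the semi-implicit treatment in the abstract.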
Speech Enhancement by Modified Convex Combination of Fractional Adaptive Filtering
Directory of Open Access Journals (Sweden)
M. Geravanchizadeh
2014-12-01
This paper presents new adaptive filtering techniques used in a speech enhancement system. Adaptive filtering schemes are subject to different trade-offs regarding their steady-state misadjustment, speed of convergence, and tracking performance. Fractional Least-Mean-Square (FLMS) is a new adaptive algorithm which has better performance than the conventional LMS algorithm. Normalization of LMS leads to better performance of the adaptive filter. Furthermore, convex combination of two adaptive filters improves performance. In this paper, new convex combinational adaptive filtering methods in the framework of a speech enhancement system are proposed. The proposed methods utilize the idea of normalization and fractional derivatives, both in the design of different convex mixing strategies and their related component filters. To assess our proposed methods, simulation results of different LMS-based algorithms based on their convergence behavior (i.e., MSE plots) and different objective and subjective criteria are compared. The objective and subjective evaluations include examining the results of SNR improvement, the PESQ test, and listening tests for dual-channel speech enhancement. The powerful aspects of the proposed methods are their low complexity, as expected with all LMS-based methods, along with a high convergence rate.
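A minimal sketch of the convex combination scheme with two ordinary LMS component filters (plain LMS stands in here for the paper's fractional/normalized variants; the system-identification setup and all step sizes are illustrative assumptions). A fast filter and a slow filter each adapt with their own error, while a sigmoid-parameterized mixing weight λ is adapted on the combined error:

```python
import numpy as np

rng = np.random.default_rng(3)
w_true = np.array([0.8, -0.4, 0.2])          # unknown system to identify
N, L = 5000, 3
x = rng.standard_normal(N)
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)

w_fast = np.zeros(L); w_slow = np.zeros(L)
a = 0.0                                       # mixing state; lambda = sigmoid(a)
mu_fast, mu_slow, mu_a = 0.05, 0.005, 10.0
for n in range(L, N):
    u = x[n - L + 1:n + 1][::-1]              # regressor, most recent sample first
    y1, y2 = w_fast @ u, w_slow @ u
    lam = 1.0 / (1.0 + np.exp(-a))
    y = lam * y1 + (1 - lam) * y2             # convex combination of outputs
    e = d[n] - y
    e1, e2 = d[n] - y1, d[n] - y2
    w_fast += mu_fast * e1 * u                # each component adapts independently
    w_slow += mu_slow * e2 * u
    a += mu_a * e * (y1 - y2) * lam * (1 - lam)  # gradient step on the mixer
    a = np.clip(a, -4.0, 4.0)                 # keep lambda away from 0 and 1
```

The combination inherits the fast filter's convergence speed early on and the slow filter's low misadjustment at steady state, which is the trade-off the abstract describes.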
Measures of symmetry for convex sets and stability
Toth, Gabor
2015-01-01
This textbook treats two important and related matters in convex geometry: the quantification of symmetry of a convex set—measures of symmetry—and the degree to which convex sets that nearly minimize such measures of symmetry are themselves nearly symmetric—the phenomenon of stability. By gathering the subject’s core ideas and highlights around Grünbaum’s general notion of measure of symmetry, it paints a coherent picture of the subject, and guides the reader from the basics to the state-of-the-art. The exposition takes various paths to results in order to develop the reader’s grasp of the unity of ideas, while interspersed remarks enrich the material with a behind-the-scenes view of corollaries and logical connections, alternative proofs, and allied results from the literature. Numerous illustrations elucidate definitions and key constructions, and over 70 exercises—with hints and references for the more difficult ones—test and sharpen the reader’s comprehension. The presentation includes:...
Measurement system for diffraction efficiency of convex gratings
Liu, Peng; Chen, Xin-hua; Zhou, Jian-kang; Zhao, Zhi-cheng; Liu, Quan; Luo, Chao; Wang, Xiao-feng; Tang, Min-xue; Shen, Wei-min
2017-08-01
A measurement system for the diffraction efficiency of convex gratings is designed. The measurement system mainly includes four components: a light source, a front system, a dispersing system that contains a convex grating, and a detector. Based on the definition and measuring principle of diffraction efficiency, the optical scheme of the measurement system is analyzed and the design result is given. Then, in order to validate the feasibility of the designed system, the measurement system is set up and the diffraction efficiency of a convex grating with an aperture of 35 mm, a curvature radius of 72 mm, a blazed angle of 6.4°, a grating period of 2.5 μm and a working waveband of 400-900 nm is tested. Based on GUM (Guide to the Expression of Uncertainty in Measurement), the uncertainties in the measuring results are evaluated. The measured diffraction efficiency data are compared to the theoretical ones, which are calculated by Rigorous Coupled Wave Analysis from the grating groove parameters obtained with an atomic force microscope, and the reliability of the measurement system is illustrated. Finally, the measurement performance of the system is analyzed and tested. The results show that the testing accuracy, the testing stability and the testing repeatability are 2.5%, 0.085% and 3.5%, respectively.
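As a sanity check on the quoted grating parameters, the blaze wavelength implied by the grating equation, assuming a Littrow configuration (an assumption; the abstract does not state the mount), falls near the middle of the 400-900 nm working band:

```python
import math

d_um = 2.5                      # grating period from the abstract (micrometres)
theta_b = math.radians(6.4)     # blazed angle from the abstract
m = 1                           # diffraction order (assumed)

# Littrow-configuration blaze condition: m * lambda = 2 * d * sin(theta_b)
lam_um = 2.0 * d_um * math.sin(theta_b) / m
print(f"blaze wavelength ~ {lam_um * 1000:.0f} nm")
```

The result, roughly 557 nm, is where first-order efficiency should peak, consistent with a 400-900 nm waveband.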
Phono-spectrographic analysis of heart murmur in children
Directory of Open Access Journals (Sweden)
Angerla Anna
2007-06-01
Background: More than 90% of heart murmurs in children are innocent. Frequently the skills of the first examiner are not adequate to differentiate between innocent and pathological murmurs. Our goal was to evaluate the value of a simple and low-cost phonocardiographic recording and analysis system in determining the characteristic features of heart murmurs in children and in distinguishing innocent systolic murmurs from pathological ones. Methods: The system, consisting of an electronic stethoscope and a multimedia laptop computer, was used for the recording, monitoring and analysis of auscultation findings. The recorded sounds were examined graphically and numerically using combined phono-spectrograms. The data consisted of heart sound recordings from 807 pediatric patients, including 88 normal cases without any murmur, 447 innocent murmurs and 272 pathological murmurs. The phono-spectrographic features of heart murmurs were examined visually and numerically. From this database, 50 innocent vibratory murmurs, 25 innocent ejection murmurs and 50 easily confusable, mildly pathological systolic murmurs were selected to test whether quantitative phono-spectrographic analysis could be used as an accurate screening tool for systolic heart murmurs in children. Results: The phono-spectrograms of the most common innocent and pathological murmurs were presented as examples of the whole data set. Typically, innocent murmurs had lower frequencies (below 200 Hz) and a frequency spectrum with a more harmonic structure than pathological cases. Quantitative analysis revealed no significant differences in the duration of S1 and S2 or in the loudness of systolic murmurs between the pathological and physiological systolic murmurs. However, the pathological murmurs included both lower and higher frequencies than the physiological ones (p …). Conclusion: Phono-spectrographic analysis improves the accuracy of primary heart murmur evaluation and educates the inexperienced listener.
Sensitivity Calibration of Far-Ultraviolet Imaging Spectrograph
Directory of Open Access Journals (Sweden)
I. -J. Kim
2004-12-01
We describe the in-flight sensitivity calibration of the Far-Ultraviolet Imaging Spectrograph (FIMS, also known as SPEAR) onboard the first Korean science satellite, STSAT-1, which was launched in September 2003. The sensitivity calibration is based on a comparison of the FIMS observations of the hot white dwarf G191-B2B and two O-type stars, Alpha Cam and HD 93521, with the HUT (Hopkins Ultraviolet Telescope) observations. The FIMS observations for the calibration targets were conducted from November 2003 through May 2004. The effective areas calculated from the targets are compared with each other.
Spectrographic determination of niobium in uranium - niobium alloys
International Nuclear Information System (INIS)
Charbel, M.Y.; Lordello, A.R.
1984-01-01
A method for the spectrographic determination of niobium in uranium-niobium alloys in the concentration range 1-10% has been developed. The metallic sample is converted to oxide by calcination in a muffle furnace at 800 °C for two hours. The standards are prepared synthetically by dry-mixing. One part of the sample or standard is added to nineteen parts of graphite powder and the mixture is excited in a DC arc. Hafnium has been used as internal standard. The precision of the method is ±4.8%. (Author) [pt
Quantitative spectrographic determination of traces of manganese in ferric oxide
International Nuclear Information System (INIS)
Capdevila, C.; Roca, M.
1968-01-01
In order to enhance the sensitivity, different electrode types and sweeping substances have been studied. Graphite anodes with 5 × 2.5, 4 × 4.5, 4 × 8 and 7 × 10 mm craters, as well as CuF 2 , AgCl, ZnO and graphite powder as sweeping materials, have been tested. A JACO-Ebert grating spectrograph and a 10 A dc arc have been employed, choosing the proper exposure times from moving-plate studies. Using 4 × 4.5 mm electrodes and 75% AgCl, a detection limit of 0.2 ppm is attainable. (Author) 7 refs
Spectrographic determination of impurities in ammonium hydrogen fluoride samples
International Nuclear Information System (INIS)
Roca, M.; Capdevila, C.; Alduan, F.A.
1976-01-01
The quantitative spectrographic trace determination of Al, B, Cr, Cu, Fe, Mn, Mo, Ni, Pb and Si in ammonium hydrogen fluoride samples is considered. 10 A dc arc excitation and graphite electrodes with craters either 4.5 mm or 8 mm deep are employed. A comparison of various matrices, such as graphite, gallium oxide, germanium oxide, magnesium oxide and zinc oxide, in the ratios 1:1 and 1:3, as well as a mixture of 50% graphite - 50% zinc oxide in the ratio 1:1, is included. Zinc oxide in the ratio 1:1 and 4 × 8 mm craters show the best overall results. (author)
Study of airborne particles by emission spectrographic method
Energy Technology Data Exchange (ETDEWEB)
Chao, C N; Lee, S L; Tsai, H T; Wu, S C
1975-03-01
A rapid spectrographic method was developed to analyze cadmium, lead, nickel, zinc, tin, titanium, and vanadium collected in glass fiber air filters. A direct excitation method is used for volatile elements, while graphite powder is added for determining involatile elements, such as Ti and V in a dc arc source. Limits of detection for analyzed elements are between 0.01-0.1 micrograms. This simple and sensitive method was used to analyze samples from 15 air sampling stations in different areas of Taiwan.
Spectrographic determination of impurities in enriched uranium solutions
International Nuclear Information System (INIS)
Capdevila, C.; Roca, M.
1980-01-01
A spectrographic procedure for the determination of trace amounts of Al, B, Ba, Be, Bi, Ca, Cd, Co, Cr, Cu, Fe, K, Li, Hg, Mn, Mo, Na, Nb, Ni, P, Pb, Ru, Sb, Sn, Sr, Ti, V, Zn and Zr in enriched uranyl nitrate solutions from the reprocessing of spent nuclear fuels is described. After removal of uranium by either TBP or TNOA solvent extraction, the aqueous phase is analysed by the graphite spark technique. TBP is adequate for all impurities except boron and phosphorus; both of these elements can satisfactorily be determined by using TNOA after the addition of mannitol to avoid boron losses. (Author) 4 refs
Spectrographic study of neodymium complexing with ATP and ADP
International Nuclear Information System (INIS)
Svetlova, I.E.; Dobrynina, N.A.; Martynenko, L.N.
1989-01-01
Neodymium complexing with ATP and ADP in aqueous solutions at different pH values has been studied by a spectrographic method. The composition of the complexes was determined by the method of isomolar series. On the basis of analysis of the absorption spectra it has been ascertained that, at an equimolar ratio of Nd 3+ and ATP, the absorption band at 4278 Å corresponds to the monocomplex, and the band at 4290 Å to the biscomplex. For the complexes with ADP the absorption band at 4288 Å is assigned to biscomplexes. The character of ATP and ADP coordination by the Nd 3+ ion is considered. Stability constants of the complexes are calculated
A UV prime focus spectrograph for the CFHT
International Nuclear Information System (INIS)
Boulade, O.; Vigroux, L.
1986-03-01
The UV prime focus spectrograph at the Canada-France-Hawaii Telescope is the first instrument to be designed with an aspherized diffraction grating. This technique leads to all-reflective Schmidt designs with a very small number of optical surfaces at fast aperture ratios. A thin backside-illuminated RCA CCD is now used as the detector. Since the detector is at the focus of an f/1 mounting, within the optical path, a minicryostat (5 cm x 5 cm x 3 cm) was designed to minimize the central obscuration. This paper describes this new instrument and its performance
The spectrographic analysis of inorganic impurities in heavy water
International Nuclear Information System (INIS)
Artaud, J.; Normand, J.; Vie, R.
1961-01-01
Inorganic impurities in heavy water are determined by two spectrographic methods. First, the copper-spark method is described; it is sensitive and directly applicable, and is particularly useful because of the absence of a support. Secondly, the graphite impregnation method is given; this is used when the first method is not applicable (determination of copper) and for the alkali metals. For the usual elements, the sensitivity of the copper-spark method is of the order of 0.1 μg/ml, whereas for the graphite impregnation method the sensitivity is only 0.3 μg/ml. (author) [fr
Exoplanets search and characterization with the SOPHIE spectrograph at OHP
Directory of Open Access Journals (Sweden)
Hébrard G.
2011-02-01
Several programs of exoplanet search and characterization have been started with SOPHIE at the 1.93-m telescope of Haute-Provence Observatory, France. SOPHIE is an environmentally stabilized echelle spectrograph dedicated to high-precision radial velocity measurements. The objectives of these programs include systematic searches for exoplanets around different types of stars, characterization of planet-host stars, studies of transiting planets through the Rossiter-McLaughlin effect, and follow-up observations of photometric surveys. The instrument SOPHIE and a review of its latest results are presented here.
Spectrographic mask for digital registration of bright source spectra
Directory of Open Access Journals (Sweden)
Ademir Xavier
2017-08-01
In this work we present schematic diagrams for the construction of a spectrographic mask attachable to a camera objective in order to capture spectra using simple CD or DVD gratings. The mask is made of two parts: an adapter ring and an elbow-shaped blockage for suitable registration of spectra in the lab and outdoors. Using free software, we analyze and discuss the calibration of the wavelength scale of the solar spectrum, which allows us to identify many chemical elements in it. In the conclusion, we further discuss some interesting projects to be carried out by students using this idea.
Rapid spectrographic method for determining microcomponents in solutions
International Nuclear Information System (INIS)
Karpenko, L.I.; Fadeeva, L.A.; Gordeeva, A.N.; Ermakova, N.V.
1984-01-01
A rapid spectrographic method for determining microcomponents (Cd, V, Mo, Ni, rare earths and other elements) in industrial and natural solutions has been developed. The analyses were conducted in an argon medium and in air. Calibration charts for determining individual rare earths in solutions are presented. The accuracy of the analysis (Sr) was evaluated; the detection limit was 10 -3 -10 -4 mg/ml, that for rare earths 1·10 -2 mg/ml. The developed method enables rapid analysis of solutions (sewage and industrial waters, wine products) for 20 elements, including 6 rare earths, using standard equipment
Design of a simple magnetic spectrograph for the Karlsruhe isochronous cyclotron
International Nuclear Information System (INIS)
Gils, H.J.
1980-12-01
The ion-optical design of a simple magnetic spectrograph for studies of nuclear reactions at the Karlsruhe cyclotron is described. The spectrograph allows determination of the nuclear charge, the mass number, the reaction angle and the momentum (energy) of charged particles emitted from the target. The spectrograph's capabilities cover an appropriate range of likely nuclear reactions induced by light and heavy particles up to mass number A=20 and energies of 26 MeV per nucleon [de
The Use of Color Sensors for Spectrographic Calibration
Thomas, Neil B.
2018-04-01
The wavelength calibration of spectrographs is an essential but challenging task in many disciplines. Calibration is traditionally accomplished by imaging the spectrum of a light source containing features that are known to appear at certain wavelengths and mapping them to their location on the sensor. This is typically required in conjunction with each scientific observation to account for mechanical and optical variations of the instrument over time, which may span years for certain projects. The method presented here investigates the usage of color itself instead of spectral features to calibrate a spectrograph. The primary advantage of such a calibration is that any broad-spectrum light source such as the sky or an incandescent bulb is suitable. This method allows for calibration using the full optical pathway of the instrument instead of incorporating separate calibration equipment that may introduce errors. This paper focuses on the potential for color calibration in the field of radial velocity astronomy, in which instruments must be finely calibrated for long periods of time to detect tiny Doppler wavelength shifts. This method is not restricted to radial velocity, however, and may find application in any field requiring calibrated spectrometers such as sea water analysis, cellular biology, chemistry, atmospheric studies, and so on. This paper demonstrates that color sensors have the potential to provide calibration with greatly reduced complexity.
MSE spectrograph optical design: a novel pupil slicing technique
Spanò, P.
2014-07-01
The Maunakea Spectroscopic Explorer shall be mainly devoted to performing deep, wide-field spectroscopic surveys at spectral resolutions from ~2000 to ~20000, at visible and near-infrared wavelengths. Simultaneous spectral coverage at low resolution is required, while at high resolution only selected windows can be covered. Moreover, very high multiplexing (3200 objects) must be obtained at low resolution. At higher resolutions a decreased number of objects (~800) can be observed. To meet such highly demanding requirements, a fiber-fed multi-object spectrograph concept has been designed by pupil-slicing the collimated beam, followed by multiple dispersive and camera optics. Different resolution modes are obtained by introducing anamorphic lenslets in front of the fiber arrays. The spectrograph is able to switch between three resolution modes (2000, 6500, 20000) by removing the anamorphic lenses and exchanging gratings. Camera lenses are fixed in place to increase stability. To enhance throughput, VPH first-order gratings have been preferred over echelle gratings. Moreover, throughput is kept high over all wavelength ranges by splitting the light into multiple arms with dichroic beamsplitters and optimizing efficiency for each channel by proper selection of glass materials, coatings and grating parameters.
Initial results from the fast imaging solar spectrograph (FISS)
2015-01-01
This collection of papers describes the instrument and initial results obtained from the Fast Imaging Solar Spectrograph (FISS), one of the post-focus instruments of the 1.6 meter New Solar Telescope at the Big Bear Solar Observatory. The FISS primarily aims at investigating structures and dynamics of chromospheric features. This instrument is a dual-band Echelle spectrograph optimized for the simultaneous recording of the H I 656.3 nm band and the Ca II 854.2 nm band. The imaging is done with the fast raster scan realized by the linear motion of a two-mirror scanner, and its quality is determined by the performance of the adaptive optics of the telescope. These papers illustrate the capability of the early FISS observations in the study of chromospheric features. Since the imaging quality has been improved a lot with the advance of the adaptive optics, one can obtain much better data with the current FISS observations. This volume is aimed at graduate students and researchers working in...
The role of convexity in perceptual completion: beyond good continuation.
Liu, Z; Jacobs, D W; Basri, R
1999-01-01
Since the seminal work of the Gestalt psychologists, there has been great interest in understanding what factors determine the perceptual organization of images. While the Gestaltists demonstrated the significance of grouping cues such as similarity, proximity and good continuation, it has not been well understood whether their catalog of grouping cues is complete--in part due to the paucity of effective methodologies for examining the significance of various grouping cues. We describe a novel, objective method to study perceptual grouping of planar regions separated by an occluder. We demonstrate that the stronger the grouping between two such regions, the harder it will be to resolve their relative stereoscopic depth. We use this new method to call into question many existing theories of perceptual completion (Ullman, S. (1976). Biological Cybernetics, 25, 1-6; Shashua, A., & Ullman, S. (1988). 2nd International Conference on Computer Vision (pp. 321-327); Parent, P., & Zucker, S. (1989). IEEE Transactions on Pattern Analysis and Machine Intelligence, 11, 823-839; Kellman, P. J., & Shipley, T. F. (1991). Cognitive psychology, Liveright, New York; Heitger, R., & von der Heydt, R. (1993). A computational model of neural contour processing, figure-ground segregation and illusory contours. In Internal Conference Computer Vision (pp. 32-40); Mumford, D. (1994). Algebraic geometry and its applications, Springer, New York; Williams, L. R., & Jacobs, D. W. (1997). Neural Computation, 9, 837-858) that are based on Gestalt grouping cues by demonstrating that convexity plays a strong role in perceptual completion. In some cases convexity dominates the effects of the well known Gestalt cue of good continuation. While convexity has been known to play a role in figure/ground segmentation (Rubin, 1927; Kanizsa & Gerbino, 1976), this is the first demonstration of its importance in perceptual completion.
A fast new catadioptric design for fiber-fed spectrographs
Saunders, Will
2012-09-01
The next generation of massively multiplexed multi-object spectrographs (DESpec, SUMIRE, BigBOSS, 4MOST, HECTOR) demand fast, efficient and affordable spectrographs, with higher resolutions (R = 3000-5000) than current designs. Beam-size is a (relatively) free parameter in the design, but the properties of VPH gratings are such that, for fixed resolution and wavelength coverage, the effect of beam-size on overall VPH efficiency is very small. For all-transmissive cameras, this suggests modest beam-sizes (say 80-150mm) to minimize costs; while for catadioptric (Schmidt-type) cameras, much larger beam-sizes (say 250mm+) are preferred to improve image quality and to minimize obstruction losses. Schmidt designs have benefits in terms of image quality, camera speed and scattered light performance, and recent advances such as MRF technology mean that the required aspherics are no longer a prohibitive cost or risk. The main objections to traditional Schmidt designs are the inaccessibility of the detector package, and the loss in throughput caused by it being in the beam. With expected count rates and current read-noise technology, the gain in camera speed allowed by Schmidt optics largely compensates for the additional obstruction losses. However, future advances in readout technology may erase most of this compensation. A new Schmidt/Maksutov-derived design is presented, which differs from previous designs in having the detector package outside the camera, and adjacent to the spectrograph pupil. The telescope pupil already contains a hole at its center, because of the obstruction from the telescope top-end. With a 250mm beam, it is possible to largely hide a 6cm × 6cm detector package and its dewar within this hole. This means that the design achieves a very high efficiency, competitive with transmissive designs. The optics are excellent, at least as good as classic Schmidt designs, allowing F/1.25 or even faster cameras. The principal hardware has been costed at $300K per
Blaschke- and Minkowski-endomorphisms of convex bodies
DEFF Research Database (Denmark)
Kiderlen, Markus
2006-01-01
We consider maps of the family of convex bodies in Euclidean d-dimensional space into itself that are compatible with certain structures on this family: A Minkowski-endomorphism is a continuous, Minkowski-additive map that commutes with rotations. For d>2, a representation theorem for such maps......-endomorphisms, where additivity is now understood with respect to Blaschke-addition. Using a special mixed volume, an adjoining operator can be introduced. This operator allows one to identify the class of Blaschke-endomorphisms with the class of weakly monotonic, non-degenerate and translation-covariant Minkowski...
Convex models and probabilistic approach of nonlinear fatigue failure
International Nuclear Information System (INIS)
Qiu Zhiping; Lin Qiang; Wang Xiaojun
2008-01-01
This paper is concerned with the nonlinear fatigue failure problem with uncertainties in structural systems. In the present study, in order to solve the nonlinear problem by convex models, the theory of ellipsoidal algebra, together with ideas from interval analysis, is applied. In terms of the inclusion-monotonic property of ellipsoidal functions, the nonlinear fatigue failure problem with uncertainties can be solved. A numerical example of a 25-bar truss structure is given to illustrate the efficiency of the presented method in comparison with the probabilistic approach.
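The convex-model idea the abstract relies on can be sketched with ordinary interval arithmetic (an interval is a one-dimensional ellipsoid, and interval operations are inclusion monotonic). The `Interval` class and the toy response expression below are illustrative assumptions, not taken from the paper:

```python
# Minimal interval-arithmetic sketch of the convex-model idea: uncertain
# parameters are enclosed in intervals, and inclusion-monotonic arithmetic
# propagates the enclosure through a nonlinear response. The toy "fatigue
# response" expression and all numbers are hypothetical.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Uncertain stress amplitude and material constant as intervals.
stress = Interval(0.9, 1.1)
coeff = Interval(1.8, 2.2)

# Toy nonlinear response: coeff * stress^2.
response = coeff * (stress * stress)
print(response)  # an interval enclosing every possible response value
```

Every combination of parameter values inside the input intervals yields a response inside the output interval, which is the guarantee the convex-model approach trades against the probabilistic one.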
Generalized minimizers of convex integral functionals, Bregman distance, Pythagorean identities
Czech Academy of Sciences Publication Activity Database
Imre, C.; Matúš, František
2012-01-01
Roč. 48, č. 4 (2012), s. 637-689 ISSN 0023-5954 R&D Projects: GA ČR GA201/08/0539; GA ČR GAP202/10/0618 Institutional support: RVO:67985556 Keywords : maximum entropy * moment constraint * generalized primal/dual solutions * normal integrand * convex duality * Bregman projection * inference principles Subject RIV: BA - General Mathematics Impact factor: 0.619, year: 2012 http://library.utia.cas.cz/separaty/2012/MTR/matus-0381750.pdf
Iterative Schemes for Convex Minimization Problems with Constraints
Directory of Open Access Journals (Sweden)
Lu-Chuan Ceng
2014-01-01
Full Text Available We first introduce and analyze one implicit iterative algorithm for finding a solution of the minimization problem for a convex and continuously Fréchet differentiable functional, with constraints of several problems: the generalized mixed equilibrium problem, the system of generalized equilibrium problems, and finitely many variational inclusions in a real Hilbert space. We prove a strong convergence theorem for the iterative algorithm under suitable conditions. On the other hand, we also propose another implicit iterative algorithm for finding a fixed point of infinitely many nonexpansive mappings with the same constraints, and derive its strong convergence under mild assumptions.
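The paper's implicit Hilbert-space scheme is beyond a short sketch, but the fixed-point idea behind such algorithms — a minimizer of a smooth convex functional over a convex set is exactly a fixed point of the projected-gradient map — can be illustrated with a simple explicit iteration. The box constraint and quadratic objective below are hypothetical stand-ins, not the paper's setting:

```python
import numpy as np

# Simplified sketch (not the paper's implicit scheme): projected-gradient
# iteration for minimizing a smooth convex f over a convex set C. Minimizers
# are precisely the fixed points of x -> P_C(x - t * grad f(x)).

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n
    return np.clip(x, lo, hi)

def projected_gradient(grad, x0, lo, hi, step=0.1, iters=500):
    x = x0
    for _ in range(iters):
        x = project_box(x - step * grad(x), lo, hi)
    return x

# Example: minimize ||x - c||^2 over the box [0, 1]^2 with c outside the box;
# the minimizer is the projection of c onto the box.
c = np.array([1.5, -0.5])
grad = lambda x: 2.0 * (x - c)
x_star = projected_gradient(grad, np.zeros(2), 0.0, 1.0)
print(x_star)  # converges to [1.0, 0.0]
```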
Gröbner bases and convex polytopes
Sturmfels, Bernd
1995-01-01
This book is about the interplay of computational commutative algebra and the theory of convex polytopes. It centers around a special class of ideals in a polynomial ring: the class of toric ideals. They are characterized as those prime ideals that are generated by monomial differences or as the defining ideals of toric varieties (not necessarily normal). The interdisciplinary nature of the study of Gröbner bases is reflected by the specific applications appearing in this book. These applications lie in the domains of integer programming and computational statistics. The mathematical tools presented in the volume are drawn from commutative algebra, combinatorics, and polyhedral geometry.
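A minimal illustration of the book's central objects can be computed with SymPy's `groebner`: the toric ideal of the twisted cubic curve is generated by monomial differences (binomials), and a Gröbner basis of it is a few lines of code. The variable names and monomial order here are arbitrary choices:

```python
# Small example (the twisted cubic is a standard toric ideal, though this
# specific snippet is not from the book): the ideal is generated by
# differences of monomials, and SymPy computes a lex Groebner basis.
from sympy import symbols, groebner

x, y, z, w = symbols('x y z w')

# Generators of the toric ideal of the twisted cubic: binomials
# (2x2 minors of the matrix [[x, y, z], [y, z, w]]).
gens = [x*z - y**2, y*w - z**2, x*w - y*z]

gb = groebner(gens, x, y, z, w, order='lex')
print(gb.exprs)  # the Groebner basis elements
```

Membership in the ideal can then be tested with `gb.contains(...)`, which is how toric ideals are used computationally in integer programming applications.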
On the structure of self-affine convex bodies
Energy Technology Data Exchange (ETDEWEB)
Voynov, A S [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)
2013-08-31
We study the structure of convex bodies in R{sup d} that can be represented as a union of their affine images with no common interior points. Such bodies are called self-affine. Vallet's conjecture on the structure of self-affine bodies was proved for d = 2 by Richter in 2011. In the present paper we disprove the conjecture for all d≥3 and derive a detailed description of self-affine bodies in R{sup 3}. Also we consider the relation between properties of self-affine bodies and functional equations with a contraction of an argument. Bibliography: 10 titles.
Use of Convexity in Ostomy Care: Results of an International Consensus Meeting.
Hoeflok, Jo; Salvadalena, Ginger; Pridham, Sue; Droste, Werner; McNichol, Laurie; Gray, Mikel
Ostomy skin barriers that incorporate a convexity feature have been available in the marketplace for decades, but limited resources are available to guide clinicians in selection and use of convex products. Given the widespread use of convexity, and the need to provide practical guidelines for appropriate use of pouching systems with convex features, an international consensus panel was convened to provide consensus-based guidance for this aspect of ostomy practice. Panelists were provided with a summary of relevant literature in advance of the meeting; these articles were used to generate and reach consensus on 26 statements during a 1-day meeting. Consensus was achieved when 80% of panelists agreed on a statement using an anonymous electronic response system. The 26 statements provide guidance for convex product characteristics, patient assessment, convexity use, and outcomes.
Spectrographic analysis of metallic silicium and natural quartz
International Nuclear Information System (INIS)
Grigoletto, T.; Lordello, A.R.
1985-01-01
A method has been developed for the spectrographic determination of B, Mg, Al, Ca, Ti, Mn, Fe, Ni, Cu and Ag in silicon metal, and another for Al, Ca, Mg, Ti, Cr, Mn and Fe in natural quartz. A mixture of the matrix with a proper buffer is excited directly in a dc arc. A high current (25 A) and an argon atmosphere are used for both methods. Silicon metal is blended with 8% NaF and then 1:1 (w/w) with graphite. For natural quartz, 20% NaF and 30% graphite by weight is the buffer mixture employed. The lowest determinable values vary from 0.5 to 40 μg/g and the precision of the analysis from 7% to 45%. (Author) [pt
Spectrographic analysis of waste waters; Analisis espectrografico de aguas residuales
Energy Technology Data Exchange (ETDEWEB)
Alvarez Alduan, F; Capdevila, C
1979-07-01
The influence of sodium and calcium, up to a maximum concentration of 1000 mg/l Na and 300 mg/l Ca, on the spectrographic determination of Cr, Cu, Fe, Mn and Pb in waste waters using graphite spark excitation has been studied. In order to eliminate this influence, each of the elements Ba, Cs, In, La, Li, Sr and Ti, as well as a mixture containing 5% Li-50% Ti, has been tested as a spectrochemical buffer. This mixture gives an accuracy better than 25%. Sodium and calcium enhance the line intensities of impurities when graphite or gold electrodes are used, but they produce the opposite effect if copper or silver electrodes are used. (Author) 1 refs.
Spectrographic determination of traces of boron in steels
International Nuclear Information System (INIS)
Alduan, F.A.; Roca, M.
1976-01-01
A spectrographic method has been developed to determine quantitatively boron in steels in the 0.5 to 250 ppm concentration range. The samples are dissolved in acids and transformed into oxides, avoiding boron losses by the addition of mannitol. For the fluoride evolution of boron in the dc arc the following compounds have been considered: CuF₂, LiF, NaF, and SrF₂. CuF₂, at a concentration of 10%, provides the highest line-to-background intensity ratio. An arc current of 5 amperes eliminates the interference from the iron spectrum on the most sensitive boron line, B 2497.7 Å. Variations in chromium and nickel contents have no effect on the analytical results. (author)
Cosmic Origins Spectrograph: On-Orbit Performance of Target Acquisitions
Penton, Steven V.
2010-07-01
COS is a slit-less spectrograph with a very small aperture (R = 1.25 arcsec). To achieve the desired wavelength accuracies, HST+COS must center the target to within 0.1 arcsec of the center of the aperture for the FUV channel, and 0.04 arcsec for NUV. During SMOV and early Cycle 17 we fine-tuned the COS target acquisition (TA) procedures to exceed this accuracy for all three COS TA modes: NUV imaging, NUV spectroscopic, and FUV spectroscopic. In Cycle 17, we also adjusted the COS-to-FGS offsets in the SIAF file. This allows us to recommend skipping the time-consuming ACQ/SEARCH in cases where the target coordinates are well known. Here we will compare the on-orbit performance of all COS TA modes in terms of centering accuracy, efficiency, and required signal-to-noise (S/N).
The vacuum system of the Karlsruhe magnetic spectrograph 'Little John'
International Nuclear Information System (INIS)
Buschmann, J.; Gils, H.J.; Jelitto, H.; Krisch, J.; Ludwig, G.; Manger, D.; Rebel, H.; Seith, W.; Zagromski, S.
1985-02-01
The vacuum equipment of the magnetic spectrograph Little John is described. The system is characterized by the following special features: the sliding exit flange of the target chamber can be moved to the desired angle of observation without affecting the high vacuum. The pressure maintained is a factor of ten lower than that in the incoming beam tube. The vacuum system is divided into several separate pumping sections. Ground loops are strictly avoided. All relevant status signals are fed back to the control panels. The vacuum installation is protected by hardware interlocking systems as well as by a real-time program written in FORTRAN in conjunction with CAMAC interfacing. (orig.) [de
Status and Performance Updates for the Cosmic Origins Spectrograph
Snyder, Elaine M.; De Rosa, Gisella; Fischer, William J.; Fix, Mees; Fox, Andrew; Indriolo, Nick; James, Bethan; Oliveira, Cristina M.; Penton, Steven V.; Plesha, Rachel; Rafelski, Marc; Roman-Duval, Julia; Sahnow, David J.; Sankrit, Ravi; Taylor, Joanna M.; White, James
2018-01-01
The Hubble Space Telescope's Cosmic Origins Spectrograph (COS) moved the spectra on the FUV detector from Lifetime Position 3 (LP3) to a new pristine location, LP4, in October 2017. The spectra were shifted in the cross-dispersion direction by -2.5" (roughly -31 pixels) from LP3, or -5" (roughly -62 pixels) from the original LP1. This move mitigates the adverse effects of gain sag on the spectral quality and accuracy of COS FUV observations. Here, we present updates regarding the calibration of FUV data at LP4, including the flat fields, flux calibrations, and spectral resolution. We also present updates on the time-dependent sensitivities and dark rates of both the NUV and FUV detectors.
Determination of rare earth impurities in thorium by spectrographic methods
Energy Technology Data Exchange (ETDEWEB)
Wray, L W
1957-08-15
A method for determining rare earth impurities in thorium in the fractional ppm range is described. Before spectrographic examination is possible, the impurities must be freed from the thorium matrix. This is accomplished by removing the bulk of the thorium by extraction with TBP-CCl{sub 4} and the remainder by extraction with TTA-C{sub 6}H{sub 6}. This results in a consistent recovery of rare earths of about 85% with an average sensitivity of 0.2 ppm. The experimental error is within 10%. Details of the procedure are given together with working curves for the major neutron absorbing rare earths; i.e. dysprosium, europium, gadolinium and samarium. (author)
Optical Design of the far Ultraviolet Imaging Spectrograph
Directory of Open Access Journals (Sweden)
K. S. Ryu
1998-12-01
Full Text Available We present the design specifications and the performance estimation of the FUVS (Far Ultraviolet Spectrograph) proposed for the observations of aurora, day/night airglow and astronomical objects on small satellites in the spectral range of . The design of FUVS is carried out with full consideration of the optical characteristics of the grating and the aspheric substrate. Two independent methods, ray-tracing and wave-front aberration theory, are employed to estimate the performance of the optical design, and it is verified that both procedures yield the resolution of in the entire spectral range. MDF (Minimum Detectable Flux) is also estimated using the known characteristics of the reflecting material and MCP, to study the feasibility of detection of faint emission lines from hot interstellar plasmas. The results show that observations of 1 day to 1 week, depending on the line intensity, can detect such faint emission lines from diffuse interstellar plasmas.
MEGARA: a new generation optical spectrograph for GTC
Gil de Paz, A.; Gallego, J.; Carrasco, E.; Iglesias-Páramo, J.; Cedazo, R.; Vílchez, J. M.; García-Vargas, M. L.; Arrillaga, X.; Carrera, M. A.; Castillo-Morales, A.; Castillo-Domínguez, E.; Eliche-Moral, M. C.; Ferrusca, D.; González-Guardia, E.; Lefort, B.; Maldonado, M.; Marino, R. A.; Martínez-Delgado, I.; Morales Durán, I.; Mujica, E.; Páez, G.; Pascual, S.; Pérez-Calpena, A.; Sánchez-Penim, A.; Sánchez-Blanco, E.; Tulloch, S.; Velázquez, M.; Zamorano, J.; Aguerri, A. L.; Barrado y Naváscues, D.; Bertone, E.; Cardiel, N.; Cava, A.; Cenarro, J.; Chávez, M.; García, M.; Guichard, J.; Gúzman, R.; Herrero, A.; Huélamo, N.; Hughes, D.; Jiménez-Vicente, J.; Kehrig, C.; Márquez, I.; Masegosa, J.; Mayya, Y. D.; Méndez-Abreu, J.; Mollá, M.; Muñoz-Tuñón, C.; Peimbert, M.; Pérez-González, P. G.; Pérez Montero, E.; Rodríguez, M.; Rodríguez-Espinosa, J. M.; Rodríguez-Merino, L.; Rosa-González, D.; Sánchez-Almeida, J.; Sánchez Contreras, C.; Sánchez-Blázquez, P.; Sánchez Moreno, F. M.; Sánchez, S. F.; Sarajedini, A.; Serena, F.; Silich, S.; Simón-Díaz, S.; Tenorio-Tagle, G.; Terlevich, E.; Terlevich, R.; Torres-Peimbert, S.; Trujillo, I.; Tsamis, Y.; Vega, O.; Villar, V.
2014-07-01
MEGARA (Multi-Espectrógrafo en GTC de Alta Resolución para Astronomía) is an optical Integral-Field Unit (IFU) and Multi-Object Spectrograph (MOS) designed for the GTC 10.4m telescope in La Palma. MEGARA offers two IFU fiber bundles, one covering 12.5x11.3 arcsec2 with a spaxel size of 0.62 arcsec (Large Compact Bundle; LCB) and another one covering 8.5x6.7 arcsec2 with a spaxel size of 0.42 arcsec (Small Compact Bundle; SCB). The MEGARA MOS mode will allow observing up to 100 objects in a region of 3.5x3.5 arcmin2 around the two IFU bundles. Both the LCB IFU and MOS capabilities of MEGARA will provide intermediate-to-high spectral resolutions (RFWHM~6,000, 12,000 and 18,700, respectively for the low-, mid- and high-resolution Volume Phase Holographic gratings) in the range 3650-9700Å. These values become RFWHM~7,000, 13,500, and 21,500 when the SCB is used. A mechanism placed at the pseudo-slit position allows exchanging the three observing modes and also acts as focusing mechanism. The spectrograph is a collimator-camera system that has a total of 11 VPHs simultaneously available (out of the 18 VPHs designed and being built) that are placed in the pupil by means of a wheel and an insertion mechanism. The custom-made cryostat hosts an E2V231-84 4kx4k CCD. The UCM (Spain) leads the MEGARA Consortium that also includes INAOE (Mexico), IAA-CSIC (Spain), and UPM (Spain). MEGARA is being developed under a contract between GRANTECAN and UCM. The detailed design, construction and AIV phases are now funded and the instrument should be delivered to GTC before the end of 2016.
Spectrographic determination of dysprosium dopant in calcium sulphate used as dosimetric material
International Nuclear Information System (INIS)
Grigoletto, T.
1982-01-01
A spectrographic method is described for the quantitative determination of dysprosium in doped crystals of calcium sulphate. The consequences of changes in some parameters of the excitation conditions, such as arc current, electrode type and total or partial burning of the sample, on the analytical results are discussed. Matrix effects are investigated by comparison among analytical curves obtained from three different methods of standard preparation. Variations in the intensity of the spectral lines are verified by recording the spectrum on distinct photographic plates (SA-1). The role of the internal standard in analytical reproducibility and in counterbalancing the variations in the arc current and in the weight of sample is studied. The great similarity in excitation behavior of many of the rare earths is used to provide a high degree of internal standardization. Precision studies show a standard deviation of about ±2.4 percent with the use of lanthanum as an internal standard. Accuracy is estimated by comparative analysis of two calcium sulphate samples by X-Ray Fluorescence, Neutron Activation and Inductively Coupled Plasma (ICP) Emission Spectroscopy. (Author) [pt
Use of an ultra-high resolution magnetic spectrograph for materials research
Boerma, DO; Arnoldbik, WM; Wolfswinkel, W; Balogh, AG; Walter, G
1997-01-01
A brief description is given of a magnetic spectrograph for RBS and ERD analysis with MeV beams delivered by a Tandem accelerator. With a number of examples of thin layer analysis it is shown that the spectrograph is uniquely suited for the measurement of concentration depth profiles up to a depth
Canonical Primal-Dual Method for Solving Non-convex Minimization Problems
Wu, Changzhi; Li, Chaojie; Gao, David Yang
2012-01-01
A new primal-dual algorithm is presented for solving a class of non-convex minimization problems. This algorithm is based on canonical duality theory such that the original non-convex minimization problem is first reformulated as a convex-concave saddle point optimization problem, which is then solved by a quadratically perturbed primal-dual method. It is proved that the popular SDP method is indeed a special case of the canonical duality theory. Numerical examples are illustrated. Comparing...
Sequential Change-Point Detection via Online Convex Optimization
Directory of Open Access Journals (Sweden)
Yang Cao
2018-02-01
Full Text Available Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithmic regret property, we show that this approach is nearly second-order asymptotically optimal. This means that the upper bound for the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically up to a log-log factor when the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and leveraging the logarithmic regret bound property of the online mirror descent algorithm. Numerical and real data examples validate our theory.
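The flavor of such a procedure can be sketched for the simplest case, a Gaussian mean shift with unit variance, where the non-anticipating online estimator reduces to a running average. This is a toy instance, not the paper's algorithm; the threshold and stream parameters are illustrative choices:

```python
import random

# Toy sketch: the unknown post-change mean is replaced by a non-anticipating
# running average (the simplest online estimator; the paper uses online
# mirror descent, which is more general), and a CUSUM-like statistic
# accumulates the plug-in log-likelihood ratios of N(mean_est, 1) vs N(0, 1).

def detect_change(stream, threshold=10.0):
    stat, mean_est, count = 0.0, 0.0, 0
    for t, x in enumerate(stream):
        # Plug-in log-likelihood ratio; mean_est was built from earlier
        # samples only, so the estimator never anticipates x.
        llr = mean_est * x - 0.5 * mean_est ** 2
        stat = max(0.0, stat + llr)
        if stat > threshold:
            return t  # alarm index
        if stat == 0.0:
            mean_est, count = 0.0, 0  # restart the estimation window
        count += 1
        mean_est += (x - mean_est) / count  # update the running average
    return None

random.seed(0)
stream = ([random.gauss(0.0, 1.0) for _ in range(200)]     # pre-change
          + [random.gauss(2.0, 1.0) for _ in range(100)])  # post-change
alarm = detect_change(stream)
print(alarm)  # index at which the alarm is raised (change is at index 200)
```

Raising the threshold lengthens the average run length to false alarm at the cost of detection delay, which is the trade-off the paper's optimality result quantifies.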
A New Interpolation Approach for Linearly Constrained Convex Optimization
Espinoza, Francisco
2012-08-01
In this thesis we propose a new class of Linearly Constrained Convex Optimization methods based on the use of a generalization of Shepard's interpolation formula. We prove the properties of the surface, such as the interpolation property at the boundary of the feasible region and the convergence of the gradient to the null space of the constraints at the boundary. We explore several descent techniques such as steepest descent, two quasi-Newton methods and Newton's method. Moreover, we implement in the Matlab language several versions of the method, particularly for the case of Quadratic Programming with bounded variables. Finally, we carry out performance tests against Matlab Optimization Toolbox methods for convex optimization and implementations of the standard log-barrier and active-set methods. We conclude that the steepest descent technique seems to be the best choice so far for our method and that it is competitive with other standard methods both in performance and empirical growth order.
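The classic Shepard formula that the thesis generalizes is easy to state and verify. Below is the textbook inverse-distance-weighted form in one dimension (not the thesis's constrained variant); the nodes and values are arbitrary test data:

```python
# Shepard's inverse-distance-weighted interpolation:
#   s(x) = sum_i w_i(x) f_i / sum_i w_i(x),  with  w_i(x) = |x - x_i|^(-p).
# The interpolation property s(x_i) = f_i holds because w_i blows up at x_i.

def shepard(x, nodes, values, p=2):
    weights = []
    for xi, fi in zip(nodes, values):
        d = abs(x - xi)
        if d == 0.0:
            return fi  # interpolation property: s(x_i) = f_i exactly
        weights.append((d ** -p, fi))
    total = sum(w for w, _ in weights)
    return sum(w * fi for w, fi in weights) / total

nodes = [0.0, 1.0, 2.0]
values = [1.0, 3.0, 2.0]
print(shepard(1.0, nodes, values))  # exactly 3.0 at a node
print(shepard(0.5, nodes, values))  # a convex combination of the values
```

Because s(x) is always a convex combination of the data values, it stays inside their range, a property that makes the formula attractive as a building block for feasible-region methods.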
Sequential and Parallel Algorithms for Finding a Maximum Convex Polygon
DEFF Research Database (Denmark)
Fischer, Paul
1997-01-01
This paper investigates the problem where one is given a finite set of n points in the plane each of which is labeled either "positive" or "negative". We consider bounded convex polygons, the vertices of which are positive points and which do not contain any negative point. It is shown how...... such a polygon which is maximal with respect to area can be found in time O(n³ log n). With the same running time one can also find such a polygon which contains a maximum number of positive points. If, in addition, the number of vertices of the polygon is restricted to be at most M, then the running time...... becomes O(M n³ log n). It is also shown how to find a maximum convex polygon which contains a given point in time O(n³ log n). Two parallel algorithms for the basic problem are also presented. The first one runs in time O(n log n) using O(n²) processors, the second one has polylogarithmic time but needs O
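The problem statement can be made concrete with a brute-force sketch (exponential time, tiny inputs only — the paper's algorithm achieves O(n³ log n)). The point sets below are made-up test data:

```python
from itertools import combinations

# Brute force over subsets of positive points: keep a subset only if it is in
# convex position and its hull strictly contains no negative point; maximize area.

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(pts):
    # Andrew's monotone chain; returns the hull in counter-clockwise order.
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def area(poly):
    return abs(sum(poly[i][0]*poly[(i+1) % len(poly)][1]
                   - poly[(i+1) % len(poly)][0]*poly[i][1]
                   for i in range(len(poly)))) / 2.0

def inside(poly, q):
    # strict-interior test for a counter-clockwise convex polygon
    return all(cross(poly[i], poly[(i+1) % len(poly)], q) > 0
               for i in range(len(poly)))

def max_convex_polygon(pos, neg):
    best, best_area = None, 0.0
    for k in range(3, len(pos) + 1):
        for sub in combinations(pos, k):
            hull = convex_hull(list(sub))
            if len(hull) != k:                      # require convex position
                continue
            if any(inside(hull, q) for q in neg):   # no negative point inside
                continue
            if area(hull) > best_area:
                best, best_area = hull, area(hull)
    return best, best_area

pos = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)]
neg = [(1, 3)]
poly, a = max_convex_polygon(pos, neg)
print(poly, a)  # the negative point rules out the full square
```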
Nash points, Ky Fan inequality and equilibria of abstract economies in Max-Plus and -convexity
Briec, Walter; Horvath, Charles
2008-05-01
-convexity was introduced in [W. Briec, C. Horvath, -convexity, Optimization 53 (2004) 103-127]. Separation and Hahn-Banach like theorems can be found in [G. Adilov, A.M. Rubinov, -convex sets and functions, Numer. Funct. Anal. Optim. 27 (2006) 237-257] and [W. Briec, C.D. Horvath, A. Rubinov, Separation in -convexity, Pacific J. Optim. 1 (2005) 13-30]. We show here that all the basic results related to fixed point theorems are available in -convexity. Ky Fan inequality, existence of Nash equilibria and existence of equilibria for abstract economies are established in the framework of -convexity. Monotone analysis, or analysis on Maslov semimodules [V.N. Kolokoltsov, V.P. Maslov, Idempotent Analysis and Its Applications, Math. Appl., vol. 401, Kluwer Academic, 1997; V.P. Litvinov, V.P. Maslov, G.B. Shpitz, Idempotent functional analysis: An algebraic approach, Math. Notes 69 (2001) 696-729; V.P. Maslov, S.N. Samborski (Eds.), Idempotent Analysis, Advances in Soviet Mathematics, Amer. Math. Soc., Providence, RI, 1992], is the natural framework for these results. From this point of view Max-Plus convexity and -convexity are isomorphic Maslov semimodules structures over isomorphic semirings. Therefore all the results of this paper hold in the context of Max-Plus convexity.
Generalized Bregman distances and convergence rates for non-convex regularization methods
International Nuclear Information System (INIS)
Grasmair, Markus
2010-01-01
We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^(1/p) holds if the regularization term has a slightly faster growth at zero than |t|^p.
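For reference, the classical (convex) Bregman distance that the paper generalizes is D_f(x, y) = f(x) - f(y) - ⟨∇f(y), x - y⟩. A one-line numeric check, using f(t) = t², where the distance reduces to the squared Euclidean distance (the example is a standard textbook case, not from the paper):

```python
# Classical Bregman distance D_f(x, y) = f(x) - f(y) - grad_f(y) * (x - y)
# for a differentiable convex f on the real line.

def bregman(f, grad_f, x, y):
    return f(x) - f(y) - grad_f(y) * (x - y)

f = lambda t: t ** 2
grad_f = lambda t: 2.0 * t

print(bregman(f, grad_f, 3.0, 1.0))  # (3 - 1)^2 = 4.0
```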
Bertamini, Marco; Wagemans, Johan
2013-04-01
Interest in convexity has a long history in vision science. For smooth contours in an image, it is possible to code regions of positive (convex) and negative (concave) curvature, and this provides useful information about solid shape. We review a large body of evidence on the role of this information in perception of shape and in attention. This includes evidence from behavioral, neurophysiological, imaging, and developmental studies. A review is necessary to analyze the evidence on how convexity affects (1) separation between figure and ground, (2) part structure, and (3) attention allocation. Despite some broad agreement on the importance of convexity in these areas, there is a lack of consensus on the interpretation of specific claims--for example, on the contribution of convexity to metric depth and on the automatic directing of attention to convexities or to concavities. The focus is on convexity and concavity along a 2-D contour, not convexity and concavity in 3-D, but the important link between the two is discussed. We conclude that there is good evidence for the role of convexity information in figure-ground organization and in parsing, but other, more specific claims are not (yet) well supported.
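The convexity/concavity coding along a 2-D contour that the review discusses has a simple computational counterpart: for a polygonal contour traversed counter-clockwise, the cross product of incoming and outgoing edge vectors is positive at convex vertices and negative at concave ones. The arrow shape below is an illustrative example with a single concavity:

```python
# Sign of curvature at each vertex of a closed polygonal contour.
# For a counter-clockwise traversal, positive cross product = convex vertex,
# negative = concave vertex.

def vertex_signs(contour):
    signs = []
    n = len(contour)
    for i in range(n):
        ax, ay = contour[i - 1]            # previous vertex
        bx, by = contour[i]                # current vertex
        cx, cy = contour[(i + 1) % n]      # next vertex
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        signs.append('convex' if cross > 0 else 'concave')
    return signs

# Counter-clockwise arrowhead with a concave notch at (1, 0).
arrow = [(0, -2), (3, 0), (0, 2), (1, 0)]
print(vertex_signs(arrow))  # ['convex', 'convex', 'convex', 'concave']
```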
First-order Convex Optimization Methods for Signal and Image Processing
DEFF Research Database (Denmark)
Jensen, Tobias Lindstrøm
2012-01-01
In this thesis we investigate the use of first-order convex optimization methods applied to problems in signal and image processing. First we make a general introduction to convex optimization, first-order methods and their iteration complexity. Then we look at different techniques, which can...... be used with first-order methods such as smoothing, Lagrange multipliers and proximal gradient methods. We continue by presenting different applications of convex optimization and notable convex formulations with an emphasis on inverse problems and sparse signal processing. We also describe the multiple...
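One of the standard first-order methods the thesis surveys, the proximal gradient method, is short enough to sketch. Below is ISTA for ℓ1-regularized least squares, a typical sparse signal processing formulation; the problem data are toy values, not from the thesis:

```python
import numpy as np

# ISTA (proximal gradient) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# Each iteration takes a gradient step on the smooth term, then applies the
# proximal operator of lam*||.||_1, which is soft-thresholding.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=2000):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[1, 5]] = [2.0, -1.5]
b = A @ x_true                           # noiseless measurements
x_hat = ista(A, b, lam=0.1)
print(np.round(x_hat, 2))                # sparse estimate close to x_true
```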
International Nuclear Information System (INIS)
Steinhaus, D.W.; Kline, J.V.; Bieniewski, T.M.; Dow, G.S.; Apel, C.T.
1979-01-01
An all-mirror optical system is used to direct the light from a variety of spectroscopic sources to two 2-m spectrographs that are placed on either side of a sturdy vertical mounting plate. The gratings were chosen so that the first spectrograph covers the ultraviolet spectral region, and the second spectrograph covers the ultraviolet, visible, and near-infrared regions. With over 2.5 m of focal curves, each ultraviolet line is available at more than one place. Thus, problems with close lines can be overcome. The signals from a possible maximum of 256 photoelectric detectors go to a small computer for reading and calculation of the element abundances. To our knowledge, no other direct-reading spectrograph has more than about 100 fixed detectors. With an inductively-coupled-plasma source, our calibration curves and detection limits are similar to those of other workers using a direct-reading spectrograph
Steinhaus, David W.; Kline, John V.; Bieniewski, Thomas M.; Dow, Grove S.; Apel, Charles T.
1980-11-01
An all-mirror optical system is used to direct the light from a variety of spectroscopic sources to two 2-m spectrographs that are placed on either side of a sturdy vertical mounting plate. The gratings were chosen so that the first spectrograph covers the ultraviolet spectral region, and the second spectrograph covers the ultraviolet, visible, and near-infrared regions. With over 2.5 m of focal curves, each ultraviolet line is available at more than one place. Thus, problems with close lines can be overcome. The signals from a possible maximum of 256 photoelectric detectors go to a small computer for reading and calculation of the element abundances. To our knowledge, no other direct-reading spectrograph has more than about 100 fixed detectors. With an inductively-coupled-plasma source, our calibration curves and detection limits are similar to those of other workers using a direct-reading spectrograph.
Chance-Constrained Guidance With Non-Convex Constraints
Ono, Masahiro
2011-01-01
Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as the uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint that means finding the optimal guidance trajectory, in general, is intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes into account uncertainty so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition of the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch and bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of having a disturbance that is large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints. These require that the probability of violating the state constraints (i.e., the probability of
Nonparametric instrumental regression with non-convex constraints
International Nuclear Information System (INIS)
Grasmair, M; Scherzer, O; Vanhems, A
2013-01-01
This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition. (paper)
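Tikhonov regularization, the estimation device used above, has a simple finite-dimensional analogue. A minimal sketch (the toy matrices are illustrative, not from the paper): minimizing ‖Ax − b‖² + α‖x‖² has the closed form x = (AᵀA + αI)⁻¹Aᵀb, which stabilizes ill-conditioned problems at the price of some bias.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Minimize ||A x - b||^2 + alpha ||x||^2; closed form
    x = (A^T A + alpha I)^{-1} A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Ill-conditioned toy problem: two columns are nearly degenerate, so the
# unregularized least-squares solution is unstable under small noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5)) @ np.diag([1, 1, 1, 1e-4, 1e-4])
x_true = np.ones(5)
b = A @ x_true + 1e-3 * rng.standard_normal(20)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]      # unregularized
x_tik = tikhonov(A, b, alpha=1e-2)               # regularized
print(x_tik.shape)
```

In the nonparametric setting of the paper, x becomes a function, A an integral operator defined by the instruments, and the convergence rates depend on the source condition the abstract mentions.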
Convex analysis and monotone operator theory in Hilbert spaces
Bauschke, Heinz H
2017-01-01
This reference text, now in its second edition, offers a modern unifying presentation of three basic areas of nonlinear analysis: convex analysis, monotone operator theory, and the fixed point theory of nonexpansive operators. Taking a unique comprehensive approach, the theory is developed from the ground up, with the rich connections and interactions between the areas as the central focus, and it is illustrated by a large number of examples. The Hilbert space setting of the material offers a wide range of applications while avoiding the technical difficulties of general Banach spaces. The authors have also drawn upon recent advances and modern tools to simplify the proofs of key results making the book more accessible to a broader range of scholars and users. Combining a strong emphasis on applications with exceptionally lucid writing and an abundance of exercises, this text is of great value to a large audience including pure and applied mathematicians as well as researchers in engineering, data science, ma...
Reachability by paths of bounded curvature in a convex polygon
Ahn, Heekap; Cheong, Otfried; Matoušek, Jiří; Vigneron, Antoine E.
2012-01-01
Let B be a point robot moving in the plane, whose path is constrained to forward motions with curvature at most 1, and let P be a convex polygon with n vertices. Given a starting configuration (a location and a direction of travel) for B inside P, we characterize the region of all points of P that can be reached by B, and show that it has complexity O(n). We give an O(n²) time algorithm to compute this region. We show that a point is reachable only if it can be reached by a path of type CCSCS, where C denotes a unit circle arc and S denotes a line segment. © 2011 Elsevier B.V.
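The path types above (C = unit circle arc, S = line segment) can be made concrete with a brute-force sketch. This is not the paper's O(n²) algorithm: it only tests reachability by a single left-arc-then-segment (CS) path in the open plane, ignoring the polygon, with the start fixed at the origin heading +x.

```python
from math import sin, cos, hypot

def reachable_cs(target, tol=1e-3, n=20000):
    """Can a unit-curvature robot starting at the origin, heading +x,
    reach `target` by a left arc (C) followed by a straight segment (S)?
    Brute-force scan over the arc angle -- illustrative only."""
    tx, ty = target
    for k in range(n):
        th = 6.283185307179586 * k / n         # arc angle in [0, 2*pi)
        ax, ay = sin(th), 1.0 - cos(th)        # end of the arc
        dx, dy = cos(th), sin(th)              # heading after the arc
        s = (tx - ax) * dx + (ty - ay) * dy    # projection onto the ray
        if s >= 0.0 and hypot(ax + s * dx - tx, ay + s * dy - ty) < tol:
            return True
    return False

print(reachable_cs((5.0, 0.0)))   # straight ahead: reachable
print(reachable_cs((0.0, 0.5)))   # inside the left turning circle: not by left CS
```

Points strictly inside the unit turning circle cannot be hit by any tangent ray, which is why curvature bounds make the reachable region nontrivial even without obstacles.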
Rocking convex array used for 3D synthetic aperture focusing
DEFF Research Database (Denmark)
Andresen, Henrik; Nikolov, Svetoslav; Pedersen, M M
2008-01-01
Volumetric imaging can be performed using 1D arrays in combination with mechanical motion. Outside the elevation focus of the array, the resolution and contrast quickly degrade compared to the azimuth plane, because of the fixed transducer focus. The purpose of this paper is to use synthetic aperture focusing (SAF) for enhancing the elevation focusing for a convex rocking array, to obtain a more isotropic point spread function. This paper presents further development of the SAF method, which can be used with a curved array combined with a rocking motion. The method uses a virtual source (VS) … Kretztechnik, Zipf, Austria). The array has an elevation focus at 60 mm of depth, and the angular rocking velocity is up to 140 deg/s. The scan sequence uses an fprf of 4500-7000 Hz allowing up to 15 cm of penetration. The full width at half max (FWHM) and main-lobe to side-lobe ratio (MLSL) is used …
Approximating convex Pareto surfaces in multiobjective radiotherapy planning
International Nuclear Information System (INIS)
Craft, David L.; Halabi, Tarek F.; Shih, Helen A.; Bortfeld, Thomas R.
2006-01-01
Radiotherapy planning involves inherent tradeoffs: the primary mission, to treat the tumor with a high, uniform dose, is in conflict with normal tissue sparing. We seek to understand these tradeoffs on a case-to-case basis, by computing for each patient a database of Pareto optimal plans. A treatment plan is Pareto optimal if there does not exist another plan which is better in every measurable dimension. The set of all such plans is called the Pareto optimal surface. This article presents an algorithm for computing well distributed points on the (convex) Pareto optimal surface of a multiobjective programming problem. The algorithm is applied to intensity-modulated radiation therapy inverse planning problems, and results of a prostate case and a skull base case are presented, in three and four dimensions, investigating tradeoffs between tumor coverage and critical organ sparing
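For a convex Pareto surface, the simplest way to generate points on it is weighted-sum scalarization. A toy sketch with two scalar convex objectives (this is not the article's well-distribution algorithm, just the underlying principle that every weighted-sum minimizer of convex objectives is Pareto optimal):

```python
def pareto_points(n=5):
    """Trace the (convex) Pareto front of two toy convex objectives
    f1(x) = (x-1)^2 and f2(x) = (x+1)^2 by weighted-sum scalarization:
    min_x  w*f1(x) + (1-w)*f2(x)  has the closed-form minimizer x = 2w - 1."""
    pts = []
    for k in range(n):
        w = k / (n - 1)
        x = 2.0 * w - 1.0
        pts.append(((x - 1.0) ** 2, (x + 1.0) ** 2))
    return pts

# Sweeping the weight w from 0 to 1 trades tumor-coverage-like objective
# f1 against sparing-like objective f2 monotonically along the front.
for f1, f2 in pareto_points():
    print(round(f1, 2), round(f2, 2))
```

A naive uniform weight grid clusters points in flat regions of the front; the article's contribution is choosing scalarizations so the computed points are well distributed.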
Convex Relaxations for a Generalized Chan-Vese Model
Bae, Egil
2013-01-01
We revisit the Chan-Vese model of image segmentation with a focus on the encoding with several integer-valued labeling functions. We relate several representations with varying amounts of complexity and demonstrate the connection to recent relaxations for product sets and to dual maxflow-based formulations. For some special cases, it can be shown that it is possible to guarantee binary minimizers. While this is not true in general, we show how to derive a convex approximation of the combinatorial problem for more than 4 phases. We also provide a method to avoid overcounting of boundaries in the original Chan-Vese model without departing from the efficient product-set representation. Finally, we derive an algorithm to solve the associated discretized problem, and demonstrate that it allows one to obtain good approximations for the segmentation problem with various numbers of regions. © 2013 Springer-Verlag.
Neural network for solving convex quadratic bilevel programming problems.
He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie
2014-03-01
In this paper, using the idea of successive approximation, we propose a neural network to solve convex quadratic bilevel programming problems (CQBPPs), which is modeled by a nonautonomous differential inclusion. Different from the existing neural network for CQBPP, the model has the least number of state variables and simple structure. Based on the theory of nonsmooth analysis, differential inclusions and Lyapunov-like method, the limit equilibrium points sequence of the proposed neural networks can approximately converge to an optimal solution of CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.
Entropies from Coarse-graining: Convex Polytopes vs. Ellipsoids
Directory of Open Access Journals (Sweden)
Nikos Kalogeropoulos
2015-09-01
We examine the Boltzmann/Gibbs/Shannon entropy S_BGS, the non-additive Havrda-Charvát/Daróczy/Cressie-Read/Tsallis entropy S_q, and the Kaniadakis κ-entropy S_κ from the viewpoint of coarse-graining, symplectic capacities and convexity. We argue that the functional form of such entropies can be ascribed to a discordance in phase-space coarse-graining between two generally different approaches: the Euclidean/Riemannian metric one, which reflects independence and picks cubes as the fundamental cells in coarse-graining, and the symplectic/canonical one, which picks spheres/ellipsoids for this role. Our discussion is motivated by and confined to the behaviour of Hamiltonian systems of many degrees of freedom. We see that Dvoretzky's theorem provides asymptotic estimates for the minimal dimension beyond which these two approaches are close to each other. We state and speculate about the role that dualities may play in this viewpoint.
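The two entropy families compared above have compact definitions. A small sketch of the Tsallis form and its Shannon limit (standard formulas, not specific to this paper's coarse-graining argument):

```python
from math import log

def tsallis(p, q):
    """Non-additive Tsallis entropy  S_q = (1 - sum_i p_i^q) / (q - 1);
    as q -> 1 it recovers the Boltzmann/Gibbs/Shannon entropy -sum p ln p."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.25] * 4                       # uniform distribution over 4 states
print(tsallis(p, 1.0))               # Shannon limit: ln 4
print(tsallis(p, 2.0))               # 1 - sum p^2 = 0.75
```

For the uniform distribution over n states, S_q = (1 − n^{1−q})/(q − 1), which tends continuously to ln n as q → 1.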
Design and realization of the real-time spectrograph controller for LAMOST based on FPGA
Wang, Jianing; Wu, Liyan; Zeng, Yizhong; Dai, Songxin; Hu, Zhongwen; Zhu, Yongtian; Wang, Lei; Wu, Zhen; Chen, Yi
2008-08-01
A large Schmidt reflector telescope, the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), is being built in China; it has an effective aperture of 4 meters and can observe the spectra of as many as 4000 objects simultaneously. To handle such a large number of observational objects, the dispersion part is composed of a set of 16 multipurpose fiber-fed double-beam Schmidt spectrographs, each of which has about ten movable components accommodated and manipulated in real time by a controller. An industrial Ethernet network connects these 16 spectrograph controllers. The light from stars is fed to the entrance slits of the spectrographs with optical fibers. In this paper, we mainly introduce the design and realization of our real-time controller for the spectrograph; the design uses the technique of System On Programmable Chip (SOPC) based on a Field Programmable Gate Array (FPGA) and realizes control of the spectrographs through the NIOS II soft-core embedded processor. We seal the stepper motor controller as an intellectual property (IP) core and reuse it, greatly simplifying the design process and shortening the development time. Under the embedded operating system μC/OS-II, a multi-task control program has been written to realize real-time control of the movable parts of the spectrographs. At present, a number of such controllers have been applied in the spectrographs of LAMOST.
Path Following in the Exact Penalty Method of Convex Programming.
Zhou, Hua; Lange, Kenneth
2015-07-01
Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.
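The defining property of the exact penalty method, recovery of the constrained solution at a *finite* penalty constant, shows up already in one dimension. A worked sketch (toy problem, not from the article): minimize (x − 2)² subject to x ≤ 1 via f_ρ(x) = (x − 2)² + ρ·max(0, x − 1). Analytically the path is x*(ρ) = 2 − ρ/2 for ρ < 2 and x*(ρ) = 1 for ρ ≥ 2, so ρ = 2 already gives the constrained solution; the code verifies this by brute force.

```python
def f(x, rho):
    """Exact (absolute-value) penalty objective for  min (x-2)^2  s.t.  x <= 1."""
    return (x - 2.0) ** 2 + rho * max(0.0, x - 1.0)

def argmin_grid(rho, lo=-1.0, hi=3.0, n=40001):
    """Brute-force minimizer of f(., rho) on a uniform grid over [lo, hi]."""
    best = min(range(n), key=lambda k: f(lo + (hi - lo) * k / (n - 1), rho))
    return lo + (hi - lo) * best / (n - 1)

# Follow the solution path as the penalty constant increases: the path
# starts at the unconstrained minimizer x = 2, slides toward the
# constraint, and sticks at x = 1 once rho reaches the finite value 2.
for rho in [0.0, 1.0, 2.0, 4.0]:
    print(rho, round(argmin_grid(rho), 4))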
Hermite-Hadamard type inequalities for GA-s-convex functions
Directory of Open Access Journals (Sweden)
İmdat İşcan
2014-10-01
In this paper, the author introduces the concepts of GA-s-convex functions in the first sense and second sense and establishes some integral inequalities of Hermite-Hadamard type related to GA-s-convex functions. Some applications to special means of real numbers are also given.
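For reference, the classical Hermite-Hadamard inequality that this paper generalizes states that for any convex function f on [a, b] (a standard fact, not specific to the GA-s-convex setting):

```latex
f\left(\frac{a+b}{2}\right) \;\le\; \frac{1}{b-a}\int_a^b f(x)\,dx \;\le\; \frac{f(a)+f(b)}{2}
```

GA-s-convexity replaces the arithmetic means in the convexity definition by geometric means, and the paper derives the corresponding analogues of both bounds.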
Guo, Peng; Cao, Jiannong; Zhang, Kui
2015-01-01
In critical event (e.g., fire or gas) monitoring applications of wireless sensor networks (WSNs), convex hull of the event region is an efficient tool in handling the usual tasks like event report, routes reconstruction and human motion planning. Existing works on estimating convex hull of event
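A standard way to compute the convex hull of a planar point set (here, sensor locations reporting an event) is Andrew's monotone-chain algorithm, O(n log n). This is a generic textbook routine, not the distributed estimation scheme of the paper:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull: returns the hull vertices in
    counter-clockwise order, a compact summary of an event region."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints

print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
```

Interior sensors such as (1, 1) drop out of the hull, which is why the hull is a bandwidth-efficient description of the event boundary.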
de Klerk, E.; Laurent, M.
2011-01-01
The Lasserre hierarchy of semidefinite programming approximations to convex polynomial optimization problems is known to converge finitely under some assumptions. [J. B. Lasserre, Convexity in semialgebraic geometry and polynomial optimization, SIAM J. Optim., 19 (2009), pp. 1995–2014]. We give a
The Concept of Convexity in Fuzzy Set Theory | Rauf | Journal of the ...
African Journals Online (AJOL)
The notions of convex analysis are indispensable in theoretical and applied Mathematics especially in the study of Calculus where it has a natural generalization for the several variables case. This paper investigates the concept of Fuzzy set theory in relation to the idea of convexity. Some fundamental theorems were ...
Effect of dental arch convexity and type of archwire on frictional forces
Fourie, Zacharias; Ozcan, Mutlu; Sandham, John
Introduction: Friction measurements in orthodontics are often derived from models by using brackets placed on flat models with various straight wires. Dental arches are convex in some areas. The objectives of this study were to compare the frictional forces generated in conventional flat and convex
Groeneboom, P.; Jongbloed, G.; Wellner, J.A.
2001-01-01
A process associated with integrated Brownian motion is introduced that characterizes the limit behavior of nonparametric least squares and maximum likelihood estimators of convex functions and convex densities, respectively. We call this process “the invelope” and show that it is an almost surely
Two Solar Tornadoes Observed with the Interface Region Imaging Spectrograph
Yang, Zihao; Tian, Hui; Peter, Hardi; Su, Yang; Samanta, Tanmoy; Zhang, Jingwen; Chen, Yajie
2018-01-01
The barbs or legs of some prominences show an apparent rotating motion and are often termed solar tornadoes. It is under debate whether the apparent motion is a real rotating motion, or caused by oscillations or counter-streaming flows. We present analysis results from spectroscopic observations of two tornadoes by the Interface Region Imaging Spectrograph. Each tornado was observed for more than 2.5 hr. Doppler velocities are derived through a single Gaussian fit to the Mg II k 2796 Å and Si IV 1393 Å line profiles. We find coherent and stable redshifts and blueshifts adjacent to each other across the tornado axes, which appears to favor the interpretation of these tornadoes as rotating cool plasmas with temperatures of 10⁴ K–10⁵ K. This interpretation is further supported by simultaneous observations of the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory, which reveal periodic motions of dark structures in the tornadoes. Our results demonstrate that spectroscopic observations can provide key information to disentangle different physical processes in solar prominences.
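The Doppler-velocity step above can be sketched in a few lines. A simplified stand-in for the paper's single Gaussian fit: locate the line centre by an intensity-weighted centroid of a synthetic profile and convert the shift to velocity via v = c·(λ_c − λ₀)/λ₀. The rest wavelength and line width below are assumptions for the illustration.

```python
from math import exp

C_KMS = 299792.458          # speed of light [km/s]
LAMBDA0 = 2796.35           # assumed rest wavelength of Mg II k [Angstrom]

def doppler_velocity(wl, intensity, lambda0=LAMBDA0):
    """Estimate line-of-sight velocity from a spectral line profile:
    find the line centre by an intensity-weighted centroid (a cheap
    stand-in for a single Gaussian fit) and convert the wavelength
    shift to velocity via  v = c * (lambda_c - lambda0) / lambda0."""
    total = sum(intensity)
    centre = sum(w * i for w, i in zip(wl, intensity)) / total
    return C_KMS * (centre - lambda0) / lambda0

# Synthetic emission line red-shifted by 0.1 Angstrom from rest.
wl = [LAMBDA0 - 1.0 + 0.01 * k for k in range(201)]
prof = [exp(-((w - (LAMBDA0 + 0.1)) / 0.2) ** 2) for w in wl]
print(round(doppler_velocity(wl, prof), 1))
```

Positive velocities here are redshifts; the coherent redshift/blueshift pattern across a tornado axis is what the paper reads as rotation.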
Calibrating the SNfactory Integral Field Spectrograph (SNIFS) with SCALA
Küsters, Daniel; Lombardo, Simona; Kowalski, Marek; Aldering, Greg; Nordin, Jakob; Rigault, Mickael
2016-08-01
The SNIFS CALibration Apparatus (SCALA), a device to calibrate the Supernova Integral Field Spectrograph on the University Hawaii 2.2m telescope, was developed and installed in Spring 2014. SCALA produces an artificial planet with a diameter of 1° and a constant surface brightness. The wavelength of the beam can be tuned between 3200 Å and 10000 Å and has a bandwidth of 35 Å. The amount of light injected into the telescope is monitored with NIST calibrated photodiodes. SCALA was upgraded in 2015 with a mask installed at the entrance pupil of the UH88 telescope, ensuring that the illumination of the telescope by stars is similar to that of SCALA. With this setup, a first calibration run was performed in conjunction with the spectrophotometric observations of standard stars. We present first estimates for the expected systematic uncertainties of the in-situ calibration and discuss the results of tests that examine the influence of stray light produced in the optics.
SCALA: In situ calibration for integral field spectrographs
Lombardo, S.; Küsters, D.; Kowalski, M.; Aldering, G.; Antilogus, P.; Bailey, S.; Baltay, C.; Barbary, K.; Baugh, D.; Bongard, S.; Boone, K.; Buton, C.; Chen, J.; Chotard, N.; Copin, Y.; Dixon, S.; Fagrelius, P.; Feindt, U.; Fouchez, D.; Gangler, E.; Hayden, B.; Hillebrandt, W.; Hoffmann, A.; Kim, A. G.; Leget, P.-F.; McKay, L.; Nordin, J.; Pain, R.; Pécontal, E.; Pereira, R.; Perlmutter, S.; Rabinowitz, D.; Reif, K.; Rigault, M.; Rubin, D.; Runge, K.; Saunders, C.; Smadja, G.; Suzuki, N.; Taubenberger, S.; Tao, C.; Thomas, R. C.; Nearby Supernova Factory
2017-11-01
Aims: The scientific yield of current and future optical surveys is increasingly limited by systematic uncertainties in the flux calibration. This is the case for type Ia supernova (SN Ia) cosmology programs, where an improved calibration directly translates into improved cosmological constraints. Current methodology rests on models of stars. Here we aim to obtain flux calibration that is traceable to state-of-the-art detector-based calibration. Methods: We present the SNIFS Calibration Apparatus (SCALA), a color (relative) flux calibration system developed for the SuperNova integral field spectrograph (SNIFS), operating at the University of Hawaii 2.2 m (UH 88) telescope. Results: By comparing the color trend of the illumination generated by SCALA during two commissioning runs, and to previous laboratory measurements, we show that we can determine the light emitted by SCALA with a long-term repeatability better than 1%. We describe the calibration procedure necessary to control for system aging. We present measurements of the SNIFS throughput as estimated by SCALA observations. Conclusions: The SCALA calibration unit is now fully deployed at the UH 88 telescope, and with it color-calibration between 4000 Å and 9000 Å is stable at the percent level over a one-year baseline.
Spectrographic study of λ 4200 silicon particular stars
International Nuclear Information System (INIS)
Didelon, Pierre
1983-01-01
This research thesis reports a spectrographic study of a sample of peculiar stars belonging to the Si II λ4200 subgroup, which forms the hot end of conventional 'Ap,Bp' stars. Twenty exposures taken at the Haute-Provence observatory have been studied and compared with observations of 17 standard stars. All these exposures have been digitized and processed. This allowed the identification of lines indicating the presence of gallium and the absence of manganese, which contradicts the close correlation between these elements that was generally assumed. An unexplained and previously unobserved doubling of the Si II lines has also been observed. The problem of the spectral classification of these stars has been studied. In order to study the stars concerned without computing atmospheric models, a comparative method between group stars and reference stars has been used. Results are discussed and seem to indicate an erratic and uncorrelated behaviour of the light elements (C, Mg, Ca, Si), and a presence of heavier elements (Ga, Sr) and rare earths (Eu, Gd) only when elements of the iron peak are stronger [fr
Spectrographic determination of some rare earths in thorium compounds
International Nuclear Information System (INIS)
Brito, J. de.
1977-01-01
A method for the spectrographic determination of Gd, Sm, Dy, Eu, Y, Yb, Tm and Lu in thorium compounds has been developed. Sensitivities of 0.01 μg rare earths/g ThO₂ were achieved. The rare earth elements were chromatographically separated in a nitric acid-ether-cellulose system. The solvent mixture was prepared by dissolving 11% of concentrated nitric acid in ether. The method is based upon the sorption of the rare earths on activated cellulose, the elements being eluted together with 0.01 M HNO₃. The retention of the ¹⁵²,¹⁵⁴Eu used as tracer was 99.4%. The other elements showed recoveries varying from 95 to 99%. A direct carrier distillation procedure for the spectrochemical determination of the mentioned elements was used. Several concentrations of silver chloride were used to study the volatility behavior of the rare earths. 2% AgCl was added to the matrix as the definite carrier, lanthanum being selected as the internal standard. The average coefficient of variation for this method was ±7%. The method has been applied to the analysis of rare earths in thorium compounds prepared by the Thorium Purification Pilot Plant at the Atomic Energy Institute, Sao Paulo [pt
Auroral spectrograph data annals of the international geophysical year, v.25
Carrigan, Anne; Norman, S J
1964-01-01
Annals of the International Geophysical Year, Volume 25: Auroral Spectrograph Data is a five-chapter text that contains tabulations of auroral spectrograph data. The patrol spectrograph built by the Perkin-Elmer Corporation for the Aurora and Airglow Program of the IGY is a high-speed, low-dispersion, automatic instrument designed to photograph spectra of aurora occurring along a given magnetic meridian of the sky. Data from each spectral frame were recorded on an IBM punched card. The data recorded on the cards are printed onto the tabulations in this volume. These tabulations are available
Zhang, Yongjun; Lu, Zhixin
2017-10-01
Spectrum resources are precious, so it is increasingly important to locate interference signals rapidly. Convex programming algorithms are often used for localization in wireless sensor networks. However, the traditional convex programming algorithm suffers from excessive overlap among wireless sensor nodes, which lowers positioning accuracy, so this paper proposes a new algorithm. Building on the traditional convex programming algorithm, the spectrum car dispatches unmanned aerial vehicles (UAVs) that record data periodically along different trajectories. According to the probability density distribution, the positioning area is segmented to further reduce the location region. Because the algorithm only adds the communication of power values between the unknown node and the sensor nodes, the advantages of the convex programming algorithm, its simplicity and real-time performance, are essentially preserved. The experimental results show that the improved algorithm has better positioning accuracy than the original convex programming algorithm.
Principles of crystallization, and methods of single crystal growth
International Nuclear Information System (INIS)
Chacra, T.
2010-01-01
Most single crystals (monocrystals) have distinctive optical, electrical, or magnetic properties, which make single crystals key elements in most modern technical devices: they may be used as lenses, prisms, or gratings in optical devices, as filters in X-ray and spectrographic devices, or as conductors and semiconductors in the electronics and computer industries. Furthermore, single crystals are used in transducer devices, and they are indispensable elements in laser and maser emission technology. Crystal growth technology (CGT) has started and developed in international universities and scientific institutions, aiming at single crystals that may have significant properties and industrial applications capable of attracting the attention of international crystal growth centers, which could then adopt the industrial production and marketing of such crystals. Unfortunately, Arab universities in general, and Syrian universities in particular, give hardly any attention to this field of science. The purpose of this work is to draw the attention of crystallographers, physicists, and chemists in Arab universities and research centers to the importance of crystal growth and, as a first stage, to establish simple, uncomplicated laboratories for the growth of single crystals. Such laboratories can be supplied with equipment that is partly available in the local market or can be manufactured there. Many references (articles, papers, diagrams, etc.) have been studied to extract the most important theoretical principles of phase transitions, especially of crystallization. The conclusions of this study are summarized in three principles: thermodynamic, morphologic, and kinetic. The study is completed by a brief description of the main single-crystal growth methods, with sketches of the equipment used in each method, which can be considered as preliminary designs for the equipment of a new crystal growth laboratory. (author)
Influence of microgravity on Ce-doped Bi12 SiO20 crystal defect
Indian Academy of Sciences (India)
studied by comparing space-grown BSO crystal with ground-grown one. These results show … fractive properties (Aldrich et al 1971; Peltier and Micheron) … The shape of the interface changes from concave to convex by suppressing … cations. Figure 1. Parts of Ce-doped BSO crystals: (a) space growth and (b) ground growth.
Near InfraRed Imaging Spectrograph (NIRIS) for ground-based ...
Indian Academy of Sciences (India)
NIRIS is a large field-of-view imaging spectrograph which is sensitive to fluctuations in … enhancement over low latitudes has been shown to develop as a … step forward towards passive remote sensing of the mesospheric dynamics.
Successful "First Light" for VLT High-Resolution Spectrograph
1999-10-01
Great Research Prospects with UVES at KUEYEN A major new astronomical instrument for the ESO Very Large Telescope at Paranal (Chile), the UVES high-resolution spectrograph, has just made its first observations of astronomical objects. The astronomers are delighted with the quality of the spectra obtained at this moment of "First Light". Although much fine-tuning still has to be done, this early success promises well for new and exciting science projects with this large European research facility. Astronomical instruments at VLT KUEYEN The second VLT 8.2-m Unit Telescope, KUEYEN ("The Moon" in the Mapuche language), is in the process of being tuned to perfection before it will be "handed" over to the astronomers on April 1, 2000. The testing of the new giant telescope has been successfully completed. The latest pointing tests were very positive and, from real performance measurements covering the entire operating range of the telescope, the overall accuracy on the sky was found to be 0.85 arcsec (the RMS-value). This is an excellent result for any telescope and implies that KUEYEN (as is already the case for ANTU) will be able to acquire its future target objects securely and efficiently, thus saving precious observing time. This work has paved the way for the installation of large astronomical instruments at its three focal positions, all prototype facilities that are capable of catching the light from even very faint and distant celestial objects. The three instruments at KUEYEN are referred to by their acronyms UVES , FORS2 and FLAMES. They are all dedicated to the investigation of the spectroscopic properties of faint stars and galaxies in the Universe. The UVES instrument The first to be installed is the Ultraviolet Visual Echelle Spectrograph (UVES) that was built by ESO, with the collaboration of the Trieste Observatory (Italy) for the control software. Complete tests of its optical and mechanical components, as well as of its CCD detectors and of the complex
AN INTERFACE REGION IMAGING SPECTROGRAPH FIRST VIEW ON SOLAR SPICULES
Energy Technology Data Exchange (ETDEWEB)
Pereira, T. M. D.; De Pontieu, B.; Carlsson, M.; Hansteen, V. [Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, NO-0315 Oslo (Norway); Tarbell, T. D.; Lemen, J.; Title, A.; Boerner, P.; Hurlburt, N.; Wülser, J. P.; Martínez-Sykora, J.; Kleint, L. [Lockheed Martin Solar and Astrophysics Laboratory, 3251 Hanover Street, Org. A021S, Bldg. 252, Palo Alto, CA 94304 (United States); Golub, L.; McKillop, S.; Reeves, K. K.; Saar, S.; Testa, P.; Tian, H. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Jaeggli, S.; Kankelborg, C., E-mail: tiago.pereira@astro.uio.no [Department of Physics, Montana State University, P.O. Box 173840, Bozeman, MT 59717 (United States)
2014-09-01
Solar spicules have eluded modelers and observers for decades. Since the discovery of the more energetic type II, spicules have become a heated topic but their contribution to the energy balance of the low solar atmosphere remains unknown. Here we give a first glimpse of what quiet-Sun spicules look like when observed with NASA's recently launched Interface Region Imaging Spectrograph (IRIS). Using IRIS spectra and filtergrams that sample the chromosphere and transition region, we compare the properties and evolution of spicules as observed in a coordinated campaign with Hinode and the Atmospheric Imaging Assembly. Our IRIS observations allow us to follow the thermal evolution of type II spicules and finally confirm that the fading of Ca II H spicules appears to be caused by rapid heating to higher temperatures. The IRIS spicules do not fade but continue evolving, reaching higher and falling back down after 500-800 s. Ca II H type II spicules are thus the initial stages of violent and hotter events that mostly remain invisible in Ca II H filtergrams. These events have very different properties from type I spicules, which show lower velocities and no fading from chromospheric passbands. The IRIS spectra of spicules show the same signature as their proposed disk counterparts, reinforcing earlier work. Spectroheliograms from spectral rasters also confirm that quiet-Sun spicules originate in bushes from the magnetic network. Our results suggest that type II spicules are indeed the site of vigorous heating (to at least transition region temperatures) along extensive parts of the upward moving spicular plasma.
Spectroscopic Characterization of GEO Satellites with Gunma LOW Resolution Spectrograph
Endo, T.; Ono, H.; Hosokawa, M.; Ando, T.; Takanezawa, T.; Hashimoto, O.
The spectroscopic observation is potentially a powerful tool for understanding Geostationary Earth Orbit (GEO) objects. We present here the results of an investigation of energy spectra of GEO satellites obtained from a ground-based optical telescope. The spectroscopic observations were made from April to June 2016 with the Gunma LOW resolution Spectrograph and imager (GLOWS) at the Gunma Astronomical Observatory (GAO) in Japan. The observation targets consist of eleven different satellites: two weather satellites, four communications satellites, and five broadcasting satellites. All the spectra of those GEO satellites are inferred to be solar-like. A number of well-known absorption features such as H-alpha, H-beta, Na-D, water vapor and oxygen molecules are clearly seen in the wavelength range of 4,000-8,000 Å. For comparison, we calculated the intensity ratio of the spectra of the GEO satellites to that of the Moon, the natural satellite of the Earth. As a result, the following characteristics were obtained. 1) Some variations are seen in the strength of absorption features of water vapor and oxygen originating in the telluric atmosphere, but no other characteristic absorption features were found. 2) For all observed satellites, the intensity ratio of the spectrum of the GEO satellites decreases as a function of wavelength or is flat. This means that the spectral reflectance of satellite materials is bluer than that of the Moon. 3) A characteristic dip at around 4,800 Å is found in all observed spectra of a weather satellite. Based on these observations, it is indicated that the characteristics of the spectrum are mainly derived from the solar panels, because the apparent area of the solar cells is probably larger than that of the satellite body.
International Nuclear Information System (INIS)
1989-03-01
The Workshop for Cascade, subtitled 'Physics Using Large Acceptance Spectrograph and Its Technical Considerations', was held on July 13, 1988 by the Nuclear Physics Research Center, Osaka University. The present proceedings carry a total of 18 reports, which are entitled 'RCNP Large Acceptance Spectrograph (plan)', 'Correlation Experiments with a System Consisting of a Small Number of Nucleons', 'Measurement of (d,d) and (d,²He) Reactions with Large Solid Angle Spectrograph', 'The (p,2p) and (p,pn) Reactions', 'Correlation Experiments with Large Acceptance Spectrograph', 'Efforts at Determination of Various Correlations in Alpha Particles', 'Two-Nucleon Correlation in Nucleus', 'A Study on Particle Migration Reaction with Broad-Band Spectrograph', 'Measurement of Response in Highly Excited State during Nucleon Migration Reaction', 'A Study on Δ-Excitation within Nucleus', 'A Few Problems Related with Response in Highly Excited State', 'Spin-Isospin Modes in Continuum', '(p,π) and (p,xπ) Reactions', 'Formation of π⁻ in (p,2p) Reaction', 'Formation of π-Mesonic Atom with Consistent Momentum', 'Measurement of Excitation Functions by Means of 'Inconsistent' Dispersion in Magnetic Spectrograph', 'Deeply Bound π⁻ States by 'π⁻ Transfer' (n,p) Reactions', and 'On High Resolution (n,p) Facilities'. (N.K.)
Precision platform for convex lens-induced confinement microscopy
Berard, Daniel; McFaul, Christopher M. J.; Leith, Jason S.; Arsenault, Adriel K. J.; Michaud, François; Leslie, Sabrina R.
2013-10-01
We present the conception, fabrication, and demonstration of a versatile, computer-controlled microscopy device which transforms a standard inverted fluorescence microscope into a precision single-molecule imaging station. The device uses the principle of convex lens-induced confinement [S. R. Leslie, A. P. Fields, and A. E. Cohen, Anal. Chem. 82, 6224 (2010)], which employs a tunable imaging chamber to enhance background rejection and extend diffusion-limited observation periods. Using nanopositioning stages, this device achieves repeatable and dynamic control over the geometry of the sample chamber on scales as small as the size of individual molecules, enabling regulation of their configurations and dynamics. Using microfluidics, this device enables serial insertion as well as sample recovery, facilitating temporally controlled, high-throughput measurements of multiple reagents. We report on the simulation and experimental characterization of this tunable chamber geometry, and its influence upon the diffusion and conformations of DNA molecules over extended observation periods. This new microscopy platform has the potential to capture, probe, and influence the configurations of single molecules, with dramatically improved imaging conditions in comparison to existing technologies. These capabilities are of immediate interest to a wide range of research and industry sectors in biotechnology, biophysics, materials, and chemistry.
Neural network for nonsmooth pseudoconvex optimization with general convex constraints.
Bian, Wei; Ma, Litao; Qin, Sitian; Xue, Xiaoping
2018-05-01
In this paper, a one-layer recurrent neural network is proposed for solving a class of nonsmooth, pseudoconvex optimization problems with general convex constraints. Based on the smoothing method, we construct a new regularization function, which does not depend on any information about the feasible region. Thanks to the special structure of the regularization function, we prove the global existence, uniqueness, and "slow solution" character of the state of the proposed neural network. Moreover, the state of the proposed network is proved to converge to the feasible region in finite time and subsequently to the optimal solution set of the related optimization problem. In particular, the convergence of the state to an exact optimal solution is also considered. Numerical examples with simulation results are given to show the efficiency and good characteristics of the proposed network. In addition, some preliminary theoretical analysis and an application of the proposed network to a wider class of dynamic portfolio optimization problems are included.
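The dynamical idea behind such optimization networks can be illustrated with a minimal Euler-discretized projection dynamics on a toy problem (the objective, box constraint, and step size below are illustrative assumptions, not the paper's nonsmooth pseudoconvex setting or its regularization function):

```python
import numpy as np

# Toy problem: minimize f(x) = ||x - c||^2 over the box [0, 1]^2,
# where c lies outside the box, so the constraint is active at the optimum.
c = np.array([2.0, -0.5])

def grad(x):
    return 2.0 * (x - c)

def project(x):
    # Euclidean projection onto the feasible box
    return np.clip(x, 0.0, 1.0)

# Euler discretization of the projection dynamics dx/dt = P(x - grad f(x)) - x,
# a common continuous-time model for optimization "neural networks".
x = np.array([0.5, 0.5])
dt = 0.05
for _ in range(500):
    x = x + dt * (project(x - grad(x)) - x)

x_star = project(c)            # analytic solution of the toy problem: (1, 0)
err = np.linalg.norm(x - x_star)
```

At a fixed point of the dynamics, x = P(x - grad f(x)), which is exactly the variational characterization of the constrained minimizer.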
On asphericity of convex bodies in linear normed spaces.
Faried, Nashat; Morsy, Ahmed; Hussein, Aya M
2018-01-01
In 1960, Dvoretzky proved that in any infinite-dimensional Banach space X and for any [Formula: see text] there exists a subspace L of X of arbitrarily large dimension ϵ-isometric to Euclidean space. A main tool in proving this deep result was a set of results concerning the asphericity of convex bodies. In this work, we introduce a simple technique and rigorous formulas to facilitate calculating the asphericity of any set that has a nonempty boundary set with respect to the flat space generated by it. We also give a formula to determine the center and the radius of the smallest ball containing a nonempty, nonsingleton set K in a linear normed space, and the center and the radius of the largest ball contained in it, provided that K has a nonempty boundary set with respect to the flat space generated by it. As an application, we give lower and upper estimates for the asphericity of infinite and finite cross products of these sets in certain spaces, respectively.
JPEG2000-coded image error concealment exploiting convex sets projections.
Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio
2005-04-01
Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of error is the most annoying but can be concealed by exploiting the spatial correlation of the signal, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the latter is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged, by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It was observed that uniform LP filtering introduced undesired side effects that offset the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrate the efficiency of the proposed approach.
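The alternating-projection idea can be sketched in a few lines of numpy, with a 2-D FFT standing in for the wavelet transform and a fixed circular mean filter standing in for the edge-adaptive mask (both substitutions are illustrative simplifications of the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x_true = np.tile(np.linspace(0.0, 1.0, n), (n, 1))    # smooth test "image"
X_true = np.fft.fft2(x_true)

# Simulate damage: corrupt a block of high-frequency coefficients.
known = np.ones((n, n), dtype=bool)                   # True = coefficient intact
known[8:16, 8:16] = False
X = X_true.copy()
X[~known] = rng.normal(scale=50.0, size=64)

def smooth(x):
    """Circular 3x3 mean filter (stands in for the adaptive low-pass step)."""
    for ax in (0, 1):
        x = (x + np.roll(x, 1, axis=ax) + np.roll(x, -1, axis=ax)) / 3.0
    return x

x = np.real(np.fft.ifft2(X))
for _ in range(50):
    x = smooth(x)                     # step 1: low-pass filtering in space
    Xc = np.fft.fft2(x)
    Xc[known] = X_true[known]         # step 2: restore intact coefficients
    x = np.real(np.fft.ifft2(Xc))

err = np.max(np.abs(x - x_true))
```

Because the smoothing attenuates the corrupted high-frequency content while the restoration step pins the intact coefficients, the iteration drives the reconstruction toward the original image.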
Link Prediction via Convex Nonnegative Matrix Factorization on Multiscale Blocks
Directory of Open Access Journals (Sweden)
Enming Dong
2014-01-01
Low-rank matrix approximations have been used for link prediction in networks; these are usually globally optimal methods that make little use of local information. The block structure is a significant local feature of matrices: entities in the same block have similar values, which implies that links are more likely to be found within dense blocks. We use this insight to give a probabilistic latent variable model for finding missing links by convex nonnegative matrix factorization with block detection. The experiments show that this method gives better prediction accuracy than the original method alone. Unlike the original low-rank approximation methods for link prediction, the sparseness of the solutions is in accord with the sparsity of most real complex networks. To scale to massive networks, we use the block information to map matrices onto distributed architectures and give a divide-and-conquer prediction method. The experiments show that it gives better results than the common neighbors method when the networks have a large number of missing links.
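As a toy illustration of scoring missing links with a low-rank factorization, the sketch below uses plain multiplicative-update NMF on a two-community graph (the paper's method is convex NMF with block detection; the graph, rank, and update rule here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-community toy graph: dense blocks {0..3} and {4..7}, no cross links.
A = np.zeros((8, 8))
A[:4, :4] = 1.0
A[4:, 4:] = 1.0
A[0, 1] = A[1, 0] = 0.0          # hide one intra-community link to "predict"

k, eps = 2, 1e-9
W = rng.random((8, k))
H = rng.random((k, 8))
for _ in range(300):             # Lee-Seung updates for min ||A - WH||_F
    W *= (A @ H.T) / (W @ H @ H.T + eps)
    H *= (W.T @ A) / (W.T @ W @ H + eps)

A_hat = W @ H                    # reconstructed matrix = link scores
score_hidden = A_hat[0, 1]       # hidden intra-community pair: should score high
score_cross = A_hat[0, 4]        # true non-link across communities: should score low
```

The rank-2 factors align with the two dense blocks, so the hidden within-block pair receives a much higher score than the cross-block non-link.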
Convergence theorems for quasi-contractive maps in uniformly convex spaces
International Nuclear Information System (INIS)
Chidume, C.E.; Osilike, M.O.
1992-04-01
Let K be a nonempty, closed, convex, and bounded subset of a real uniformly convex Banach space E with modulus of convexity of power type q≥2. Let T be a quasi-contractive mapping of K into itself. It is proved that each of two well-known fixed-point iteration methods (the Mann and the Ishikawa iteration methods) converges strongly, without any compactness assumption on the domain of the map, to the unique fixed point of T in K. Our theorems generalize important known results. (author). 22 refs
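The two iteration schemes named above can be sketched on a simple contractive self-map of [0, 1] (the map cos and the constant step sizes are illustrative choices; the theorem's setting is a uniformly convex Banach space):

```python
import math

T = math.cos                      # a contraction on [0, 1]: |T'| = |sin| < 1 there

# Mann iteration: x_{n+1} = (1 - a) x_n + a T(x_n)
x = 0.0
a = 0.5                           # constant step (an admissible choice)
for _ in range(100):
    x = (1 - a) * x + a * T(x)

# Ishikawa iteration: y_n = (1 - b) x_n + b T(x_n);
#                     x_{n+1} = (1 - a) x_n + a T(y_n)
z = 0.0
b = 0.5
for _ in range(100):
    y = (1 - b) * z + b * T(z)
    z = (1 - a) * z + a * T(y)

# Both sequences converge to the unique fixed point of cos (the Dottie number,
# approximately 0.7390851332).
```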
A One-Layer Recurrent Neural Network for Constrained Complex-Variable Convex Optimization.
Qin, Sitian; Feng, Jiqiang; Song, Jiahui; Wen, Xingnan; Xu, Chen
2018-03-01
In this paper, based on calculus and the penalty method, a one-layer recurrent neural network is proposed for solving constrained complex-variable convex optimization. It is proved that, for any initial point from a given domain, the state of the proposed neural network reaches the feasible region in finite time and finally converges to an optimal solution of the constrained complex-variable convex optimization problem. In contrast to existing neural networks for complex-variable convex optimization, the proposed neural network has lower model complexity and better convergence. Numerical examples and an application are presented to substantiate the effectiveness of the proposed neural network.
Groenwold, A.A.; Wood, D.W.; Etman, L.F.P.; Tosserams, S.
2009-01-01
We implement and test a globally convergent sequential approximate optimization algorithm based on (convexified) diagonal quadratic approximations. The algorithm resides in the class of globally convergent optimization methods based on conservative convex separable approximations developed by
International Nuclear Information System (INIS)
Saint-Cyr, B.
2011-01-01
In this work we model granular materials composed of non-convex and cohesive aggregates, with a view to application to the rheology of UO₂ powders. The effect of non-convexity is analyzed in terms of bulk quantities (Coulomb internal friction and cohesion) and micromechanical parameters such as texture anisotropy and force transmission. In particular, we find that the packing fraction evolves in a complex manner with the shape non-convexity, and the shear strength increases but saturates due to interlocking between the aggregates. We introduce simple models to describe these features in terms of micromechanical parameters. Furthermore, a systematic investigation of shearing, uniaxial compaction, and simple compression of cohesive packings shows that bulk cohesion increases with non-convexity but is strongly influenced by the boundary conditions and by shear bands or stress concentration. (author) [fr
Convex integration theory solutions to the h-principle in geometry and topology
Spring, David
1998-01-01
This book provides a comprehensive study of convex integration theory in immersion-theoretic topology. Convex integration theory, developed originally by M. Gromov, provides general topological methods for solving the h-principle for a wide variety of problems in differential geometry and topology, with applications also to PDE theory and to optimal control theory. Though topological in nature, the theory is based on a precise analytical approximation result for higher order derivatives of functions, proved by M. Gromov. This book is the first to present an exacting record and exposition of all of the basic concepts and technical results of convex integration theory in higher order jet spaces, including the theory of iterated convex hull extensions and the theory of relative h-principles. A second feature of the book is its detailed presentation of applications of the general theory to topics in symplectic topology, divergence free vector fields on 3-manifolds, isometric immersions, totally real embeddings, u...
A Sufficient Condition on Convex Relaxation of AC Optimal Power Flow in Distribution Networks
DEFF Research Database (Denmark)
Huang, Shaojun; Wu, Qiuwei; Wang, Jianhui
2016-01-01
This paper proposes a sufficient condition for the convex relaxation of AC Optimal Power Flow (OPF) in radial distribution networks as a second order cone program (SOCP) to be exact. The condition requires that the allowed reverse power flow is only reactive or active, or none. Under the proposed sufficient condition, the feasible sub-injection region (power injections of nodes excluding the root node) of the AC OPF is convex. The exactness of the convex relaxation under the proposed condition is proved through constructing a group of monotonic series with limits, which ensures that the optimal solution of the SOCP can be converted to an optimal solution of the original AC OPF. The efficacy of the convex relaxation to solve the AC OPF is demonstrated by case studies of an optimal multi-period planning problem of electric vehicles (EVs) in distribution networks.
Graph Design via Convex Optimization: Online and Distributed Perspectives
Meng, De
Networks and graphs have long been natural abstractions of relations in a variety of applications, e.g. transportation, power systems, social networks, communication, electrical circuits, etc. Since a large number of computation and optimization problems are naturally defined on graphs, graph structure not only underlies important properties of these problems but also leads to highly efficient distributed and online algorithms. For example, graph separability enables parallelism in computation and operation while limiting the size of local problems. More interestingly, graphs can be defined and constructed to take best advantage of those problem properties. This dissertation focuses on graph structure and design in newly proposed optimization problems, establishing a bridge between graph properties and optimization problem properties. We first study a new optimization problem called the Geodesic Distance Maximization Problem (GDMP). Given a graph with fixed edge weights, finding the shortest path, also known as the geodesic, between two nodes is a well-studied network flow problem. We introduce the GDMP: the problem of finding the edge weights that maximize the length of the geodesic subject to convex constraints on the weights. We show that the GDMP is a convex optimization problem for a wide class of flow costs, and provide a physical interpretation using the dual. We present applications of the GDMP in various fields, including optical lens design, network interdiction, and resource allocation in the control of forest fires. We develop an Alternating Direction Method of Multipliers (ADMM) algorithm that exploits specific problem structure to solve large-scale GDMPs, and demonstrate its effectiveness in numerical examples. We then turn our attention to distributed optimization on graphs with only local communication. Distributed optimization arises in a variety of applications, e.g. distributed tracking and localization, estimation
A one-dimensional gravitationally interacting gas and the convex minorant of Brownian motion
International Nuclear Information System (INIS)
Suidan, T M
2001-01-01
The surprising connection between a one-dimensional gravitationally interacting gas of sticky particles and the convex minorant process generated by Brownian motion on [0,1] is studied. A study is made of the dynamics of this 1-D gas system by identifying three distinct clustering regimes and the time scales at which they occur. At the critical moment of time the mass distribution of the gas can be computed in terms of functionals of the convex minorant process
Directory of Open Access Journals (Sweden)
Weilin Nie
2017-01-01
Convex risk minimization is a commonly used setting in learning theory. In this paper, we first give a perturbation analysis for such algorithms, and then apply this result to differentially private learning algorithms. Our analysis needs the objective functions to be strongly convex. This leads to an extension of our previous analysis to non-differentiable loss functions when constructing differentially private algorithms. Finally, an error analysis is provided to guide the selection of the parameters.
A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.
Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo
2018-04-01
Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with low computational complexity. When the data dimension is high, an approximate, rather than exact, convex hull is allowed to be selected by setting an appropriate termination condition, in order to delete more unimportant samples. In addition, the impact of outliers is also considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension conversion technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines trained on the selected approximate convex hull vertices and on all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.
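In the planar case, the hull-vertex selection idea reduces to a classical convex hull computation; the sketch below uses Andrew's monotone chain to retain only the hull vertices of a 2-D point cloud as the reduced sample set (the paper's algorithm handles high dimensions, approximate hulls, and outliers, which this sketch does not):

```python
import random

def convex_hull(points):
    """Andrew's monotone chain: vertices of the convex hull in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, reversed(pts))):
        for p in seq:
            # pop vertices that would make a non-left turn
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]

random.seed(0)
cloud = [(random.random(), random.random()) for _ in range(500)]
hull = convex_hull(cloud)                 # the retained "training" samples
reduction = 1 - len(hull) / len(cloud)    # fraction of samples discarded
```

For uniformly distributed points, the number of hull vertices grows only logarithmically with the sample size, so the reduction is substantial.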
Pattern Discovery in Brain Imaging Genetics via SCCA Modeling with a Generic Non-convex Penalty.
Du, Lei; Liu, Kefei; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L; Han, Junwei; Guo, Lei; Saykin, Andrew J; Shen, Li
2017-10-25
Brain imaging genetics intends to uncover associations between genetic markers and neuroimaging quantitative traits. Sparse canonical correlation analysis (SCCA) can discover bi-multivariate associations and select relevant features, and is becoming popular in imaging genetics studies. The L1-norm function is not only convex, but also singular at the origin, which is a necessary condition for sparsity. Thus most SCCA methods impose the [Formula: see text]-norm onto the individual feature or the structure level of features to pursue the corresponding sparsity. However, the [Formula: see text]-norm penalty over-penalizes large coefficients and may incur estimation bias. A number of non-convex penalties have been proposed to reduce the estimation bias in regression tasks, but using them in SCCA remains largely unexplored. In this paper, we design a unified non-convex SCCA model, based on seven non-convex functions, for unbiased estimation and stable feature selection simultaneously. We also propose an efficient optimization algorithm. The proposed method obtains both higher correlation coefficients and better canonical loading patterns. Specifically, the SCCA methods with non-convex penalties discover a strong association between the APOE e4 rs429358 SNP and the hippocampus region of the brain. Both are Alzheimer's disease-related biomarkers, indicating the potential and power of the non-convex methods in brain imaging genetics.
Study on IAEA international emergency response exercise ConvEx-3
International Nuclear Information System (INIS)
Yamamoto, Kazuya
2007-05-01
The International Atomic Energy Agency (IAEA) carried out a large-scale international emergency response exercise in 2005, under the designated name ConvEx-3(2005), in Romania. This review report summarizes a study of ConvEx-3(2005) based on the related open literature. The ConvEx-3 exercise was conducted in accordance with the Agency's safety standard series and requirements in the field of emergency preparedness and response. The study of the preparation, conduct, and evaluation of the ConvEx-3(2005) exercise is expected to provide very useful knowledge for the development of drills and educational programs conducted by the Nuclear Emergency Assistance and Training Center (NEAT). In particular, the study of the exercise evaluations is instrumental in improving evaluations of drills planned by the national and local governments. As international cooperation among Asian countries in the field of nuclear emergency preparedness and response takes shape, it is very useful to survey and consider schemes and methodologies for international emergency preparedness, response, and exercises, drawing on the knowledge from this ConvEx-3 study. The lessons learned from this study of ConvEx-3(2005) are summarized in four chapters: methodology of exercises and educational programs; the exercise evaluation process; amendments and verification of the emergency response plan of NEAT; and technical issues of NEAT's systems for emergency response and assistance relevant to interfaces for international emergency communication. (author)
The problem of scattering in fibre-fed VPH spectrographs and possible solutions
Ellis, S. C.; Saunders, Will; Betters, Chris; Croom, Scott
2014-07-01
All spectrographs unavoidably scatter light. Scattering in the spectral direction is problematic for sky subtraction, since atmospheric spectral lines are blurred. Scattering in the spatial direction is problematic for fibre-fed spectrographs, since it limits how closely fibres can be packed together. We investigate the nature of this scattering and show that the scattering wings have both a Lorentzian component and a shallower (1/r) component. We investigate the causes of this from a theoretical perspective, and argue that, for the spectral PSF, the Lorentzian wings are in part due to the profile of the illumination of the spectrograph pupil onto the diffraction grating, whereas the shallower component comes from bulk scattering. We then investigate ways to mitigate the diffractive scattering by apodising the pupil. In the ideal case of a Gaussian-apodised pupil, the scattering can be significantly reduced. Finally, we look at realistic models of the spectrograph pupils of fibre-fed spectrographs with a centrally obstructed telescope, and show that it is possible to apodise the pupil through non-telecentric injection into the fibre.
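The effect of Gaussian apodisation on the diffraction wings can be sketched in one dimension with an FFT (the pupil width, taper, and wing region below are arbitrary illustrative choices):

```python
import numpy as np

n = 4096
x = np.linspace(-8.0, 8.0, n)                # pupil coordinate; pupil half-width = 1
tophat = (np.abs(x) <= 1.0).astype(float)    # unapodised (top-hat) pupil
gauss = np.exp(-4.0 * x**2) * tophat         # Gaussian-apodised pupil (illustrative taper)

def psf(pupil):
    """Far-field PSF: squared modulus of the pupil's Fourier transform."""
    amp = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(pupil)))
    p = np.abs(amp) ** 2
    return p / p.max()

p_top, p_gauss = psf(tophat), psf(gauss)
wing = slice(n // 2 + 200, n // 2 + 400)     # region well outside the PSF core
wing_ratio = p_gauss[wing].max() / p_top[wing].max()
```

The top-hat pupil's sharp edges produce slowly decaying sinc-squared wings, while the Gaussian taper suppresses the edge discontinuity, so the apodised PSF's far wings sit orders of magnitude lower.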
National Research Council Canada - National Science Library
Graham, James R; Abrams, Mark; Bennett, C; Carr, J; Cook, K; Dey, A; Najita, J; Wishnow, E
1998-01-01
.... We consider the relationship between pixel size, spectral resolution, and diameter of the beam splitter for imaging and nonimaging Fourier transform spectrographs and give the condition required...
A soft X-Ray flat field grating spectrograph and its experimental applications
International Nuclear Information System (INIS)
Ni Yuanlong; Mao Chusheng
2001-01-01
The principle, structure, and application results of a flat field grating spectrograph for X-ray laser research are presented. There are two versions of the spectrograph. One uses a varied-line-space grating with a nominal line spacing of 1200 l/mm and a spectral detection range of 5 - 50 nm; the other uses a 2400 l/mm varied-line-space grating with a detection range of 1 - 10 nm. Only the experimental results of the former are presented here. Experimental results using both soft X-ray film and a streak camera as detectors are given. The spectral resolutions are 0.01 nm and 0.05 nm, respectively. The temporal resolution is 30 ps. Finally, the stigmatic configuration of the spectrograph is introduced, which uses a cylindrical mirror and a spherical mirror as the focusing system. The magnification is 5, and the spatial resolution is 25 μm. The experimental results are given as well.
Proposal for the ion optics and for the kinematical fitting at the magnetic spectrograph BIG KARL
International Nuclear Information System (INIS)
Hinterberger, F.
1986-01-01
For the magnetic spectrograph BIG KARL, the installation of an additional quadrupole lens is proposed. This opens the possibility of telescopic ion optics. For future experiments, a standard focusing with a spatial dispersion of 6.6 m and vanishing angular dispersion is proposed. The D/M ratio (dispersion/magnification) extends to 14.0 m, and the maximum solid angle is 3 msr. For a focal-plane length of 0.66 m, the energy range extends to 20%. For the kinematical fitting of the spectrograph, the focal plane is shifted. This shift can be realized simply and rapidly for different K values by means of a software correction, provided that two spatial spectra in the focal plane are generally recorded. In addition, the actual scattering angle can then be determined for each event with relatively good resolution. The dispersion fit is completely decoupled from the kinematical fitting of the magnetic spectrograph. (orig.) [de
X-ray spectrometer spectrograph telescope system. [for solar corona study
Bruner, E. C., Jr.; Acton, L. W.; Brown, W. A.; Salat, S. W.; Franks, A.; Schmidtke, G.; Schweizer, W.; Speer, R. J.
1979-01-01
A new sounding rocket payload that has been developed for X-ray spectroscopic studies of the solar corona is described. The instrument incorporates a grazing-incidence Rowland-mounted grating spectrograph and an extreme off-axis paraboloidal sector feed system to isolate regions of the Sun of order 1 x 10 arc seconds in size. The focal surface of the spectrograph is shared by photographic and photoelectric detection systems, with the latter serving as part of the rocket pointing system control loop. Fabrication and alignment of the optical system are based on high-precision machining and mechanical metrology techniques. The spectrograph has a resolution of 16 milliangstroms, and modifications planned for future flights will improve the resolution to 5 milliangstroms, permitting line widths to be measured.
Soft x-ray spectrographs for solar observations
International Nuclear Information System (INIS)
Bruner, M.E.
1988-01-01
This paper surveys some of the recent advances in the state of the art of soft X-ray spectrometers, particularly as they might be applied to solar observations. The discussion centers on the windowless region from roughly 1 to 100 Å, and covers both grating and crystal instruments. The author begins with a short discussion of the solar soft X-ray spectrum and its interpretation, followed by a few general comments on problems peculiar to soft X-ray instruments. The paper then reviews recent developments in spectrometer optical design, which has been a lively field during the last dozen years, particularly in the case of grating spectrometers. The paper concludes with a short section on telescope considerations, and some remarks on future flight opportunities.
Performance testing of an off-plane reflection grating and silicon pore optic spectrograph at PANTER
Marlowe, Hannah; McEntaffer, Randall L.; Allured, Ryan; DeRoo, Casey T.; Donovan, Benjamin D.; Miles, Drew M.; Tutt, James H.; Burwitz, Vadim; Menz, Benedikt; Hartner, Gisela D.; Smith, Randall K.; Cheimets, Peter; Hertz, Edward; Bookbinder, Jay A.; Günther, Ramses; Yanson, Alex; Vacanti, Giuseppe; Ackermann, Marcelo
2015-10-01
An x-ray spectrograph consisting of aligned, radially ruled off-plane reflection gratings and silicon pore optics (SPO) was tested at the Max Planck Institute for Extraterrestrial Physics PANTER x-ray test facility. SPO is a test module for the proposed Arcus mission, which will also feature aligned off-plane reflection gratings. This test is the first time two off-plane gratings were actively aligned to each other and with an SPO to produce an overlapped spectrum. We report the performance of the complete spectrograph utilizing the aligned gratings module and plans for future development.
Spectra of Th/Ar and U/Ne hollow cathode lamps for spectrograph calibration
Nave, Gillian; Shlosberg, Ariel; Kerber, Florian; Den Hartog, Elizabeth; Neureiter, Bianca
2018-01-01
Low-current Th/Ar hollow cathode lamps have long been used for the calibration of astronomical spectrographs on ground-based telescopes. Thorium is an attractive element for calibration as it has a single isotope, narrow spectral lines, and a dense spectrum covering the whole of the visible region. However, the high density of the spectrum that makes it attractive for calibrating high-resolution spectrographs is a detriment for lower-resolution spectrographs, and this is not obvious from examination of existing linelists. In addition, recent changes in regulations regarding the handling of thorium have led to a degradation in the quality of Th/Ar calibration lamps, with contamination by molecular ThO lines that are strong enough to obscure the calibration lines of interest. We are pursuing two approaches to these problems. First, we have expanded and improved the NIST Standard Reference Database 161, "Spectrum of Th-Ar Hollow Cathode Lamps", to cover the region 272 nm to 5500 nm. Spectra of hollow cathode lamps at up to three different currents can now be displayed simultaneously. Interactive zooming and the ability to convolve any of the spectra with a Gaussian or an uploaded instrument profile enable users to see immediately what the spectrum would look like at the particular resolution of their spectrograph. Second, we have measured the spectrum of a recent, contaminated Th/Ar hollow cathode lamp using a high-resolution echelle spectrograph (Madison, Wisconsin) at a resolving power of R ~ 250,000. This significantly exceeds the resolving power of most astronomical spectrographs and resolves many of the molecular lines of ThO. With these spectra we are measuring and calibrating the positions of these molecular lines in order to make them suitable for spectrograph calibration. In the near-infrared region, U/Ne hollow cathode lamps give a higher density of calibration lines than Th/Ar lamps and will be implemented on the upgraded CRIRES+ spectrograph on ESO’s Very Large
Observations of the radial velocity of the Sun as measured with the novel SONG spectrograph
DEFF Research Database (Denmark)
Pallé, P. L.; Grundahl, F.; Hage, A. Triviño
2013-01-01
Deployment of the prototype node of the SONG project took place in April 2012 at the Observatorio del Teide (Canary Islands). Its key instrument (an echelle spectrograph) was installed and operational a few weeks later, while its 1 m feeding telescope suffered a considerable delay to meet the required specifications. Using a fibre feed, solar light could be fed to the spectrograph, and we carried out a 1-week observing campaign in June 2012 to evaluate its performance for measuring precision radial velocities. In this work we present the first results of this campaign by comparing the sensitivity of the SONG
The Oxford SWIFT Spectrograph: first commissioning and on-sky results
Thatte, Niranjan; Tecza, Mathias; Clarke, Fraser; Goodsall, Timothy; Fogarty, Lisa; Houghton, Ryan; Salter, Graeme; Scott, Nicholas; Davies, Roger L.; Bouchez, Antonin; Dekany, Richard
2010-01-01
The Oxford SWIFT spectrograph, an I & z band (6500-10500 Å) integral field spectrograph, is designed to operate as a facility instrument at the 200 inch Hale Telescope on Palomar Mountain, in conjunction with the Palomar laser guide star adaptive optics system PALAO (and its upgrade to PALM3000). SWIFT provides spectra at R (≡ λ/Δλ) ~ 4000 of a contiguous two-dimensional field, 44 x 89 spatial pixels (spaxels) in size, at spatial scales of 0.235", 0.16", and 0.08" per spaxel. It employs two 250μ...
Energy Technology Data Exchange (ETDEWEB)
Capdevila, C; Alvarez, F
1962-07-01
A spectrographic method was developed for the determination of 18 trace elements in lanthanum, cerium, praseodymium, neodymium, and samarium compounds. The concentrations of the impurities cover the range of 0.5 to 500 ppm. Most of these impurities are determined by the carrier distillation method. Several more refractory elements have been determined by total burning of the sample with a direct-current arc or by the conduction briquet excitation technique with a high-voltage condensed spark. The work has been carried out with a Hilger Automatic Large Quartz Spectrograph. (Author) 5 refs.
bHROS: A New High-Resolution Spectrograph Available on Gemini South
Margheim, S. J.; Gemini bHROS Team
2005-12-01
The Gemini bench-mounted High-Resolution Spectrograph (bHROS) is available for science programs beginning in 2006A. bHROS is the highest resolution (R=150,000) optical echelle spectrograph optimized for use on an 8-meter telescope. bHROS is fiber-fed via GMOS-S from the Gemini South focal plane and is available in both a dual-fiber Object/Sky mode and a single (larger) Object-only mode. Instrument characteristics and sample data taken during commissioning will be presented.
International Nuclear Information System (INIS)
Phan Thanh An
2008-06-01
The convex rope problem, posed by Peshkin and Sanderson in IEEE J. Robotics Automat. 2 (1986), pp. 53-58, is to find the counterclockwise and clockwise convex ropes starting at the vertex a and ending at the vertex b of a simple polygon, where a is on the boundary of the convex hull of the polygon and b is visible from infinity. In this paper, we present a linear time algorithm for solving this problem without resorting to a linear-time triangulation algorithm and without resorting to a convex hull algorithm for the polygon. The counterclockwise (respectively, clockwise) convex rope consists of two polylines obtained by applying a basic incremental strategy, familiar from convex hull algorithms, to the polylines forming the polygon from a to b. (author)
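The incremental strategy the abstract alludes to is the same pop-while-non-convex step used in chain-based convex hull algorithms. A minimal Python sketch of that step (an illustration, not the paper's linear-time rope algorithm, and the function name is mine):

```python
def cross(o, a, b):
    # z-component of (a-o) x (b-o); > 0 means a left (counterclockwise) turn
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_chain(points):
    """Incrementally build a convex chain from an ordered polyline.

    The last point is popped whenever appending the next one would
    create a left turn, so the retained chain stays convex.
    """
    chain = []
    for p in points:
        while len(chain) >= 2 and cross(chain[-2], chain[-1], p) > 0:
            chain.pop()
        chain.append(p)
    return chain

hull_chain = convex_chain([(0, 0), (1, -1), (2, 0)])  # -> [(0, 0), (2, 0)]
```

Each point is pushed and popped at most once, which is what makes strategies of this kind linear in the number of vertices.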
On the complexity of a combined homotopy interior method for convex programming
Yu, Bo; Xu, Qing; Feng, Guochen
2007-03-01
In [G.C. Feng, Z.H. Lin, B. Yu, Existence of an interior pathway to a Karush-Kuhn-Tucker point of a nonconvex programming problem, Nonlinear Anal. 32 (1998) 761-768; G.C. Feng, B. Yu, Combined homotopy interior point method for nonlinear programming problems, in: H. Fujita, M. Yamaguti (Eds.), Advances in Numerical Mathematics, Proceedings of the Second Japan-China Seminar on Numerical Mathematics, Lecture Notes in Numerical and Applied Analysis, vol. 14, Kinokuniya, Tokyo, 1995, pp. 9-16; Z.H. Lin, B. Yu, G.C. Feng, A combined homotopy interior point method for convex programming problem, Appl. Math. Comput. 84 (1997) 193-211.], a combined homotopy was constructed for solving non-convex programming and convex programming with weaker conditions, without assuming the logarithmic barrier function to be strictly convex and the solution set to be bounded. It was proven that a smooth interior path from an interior point of the feasible set to a K-K-T point of the problem exists. This shows that combined homotopy interior point methods can solve problems that commonly used interior point methods cannot solve. However, so far, there is no result on its complexity, even for linear programming. The main difficulty is that the objective function is not monotonically decreasing on the combined homotopy path. In this paper, by using a piecewise technique, under commonly used conditions, polynomiality of a combined homotopy interior point method is established for convex nonlinear programming.
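The interior-path idea can be illustrated on a toy problem. The sketch below follows the classical logarithmic-barrier central path (a simpler relative of the combined homotopy in the abstract, not the paper's method) for minimizing (x-2)^2 subject to x >= 3; all parameter values are illustrative:

```python
import math

def barrier_path(t0=1.0, mu=4.0, t_max=1e8):
    """Follow the central path of: minimize (x-2)^2 subject to x >= 3.

    F_t(x) = t*(x-2)^2 - ln(x-3) is minimized by damped Newton steps;
    the minimizers trace a smooth interior path toward the solution x*=3
    as the homotopy parameter t grows.
    """
    x = 4.0                  # strictly feasible starting point
    t = t0
    path = []
    while t < t_max:
        for _ in range(50):                  # inner Newton loop
            g = 2*t*(x-2) - 1.0/(x-3)        # gradient of F_t
            h = 2*t + 1.0/(x-3)**2           # Hessian (always > 0 here)
            step = g/h
            while x - step <= 3:             # damp to stay strictly feasible
                step *= 0.5
            x -= step
            if abs(g) < 1e-12:
                break
        path.append((t, x))
        t *= mu
    return x, path

x_star, path = barrier_path()   # x_star approaches the constrained optimum 3
```

The duality gap shrinks like 1/t, which is the quantity a complexity analysis such as the one in the paper has to control along the path.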
Most Efficient Spectrograph to Shoot the Southern Skies
2009-05-01
-shooter, for a total of 350 observing nights, making it the second most requested instrument at the Very Large Telescope in this period. More information: ESO's Very Large Telescope (VLT) is the world's most advanced optical instrument. It is an ensemble of four 8.2-metre telescopes located at the Paranal Observatory on an isolated mountain peak in the Atacama Desert in northern Chile. The four 8.2-metre telescopes have a total of 12 focal stations where different instruments for imaging and spectroscopic observations are installed and a special station where the light of the four telescopes is combined for interferometric observations. The first VLT instrument was installed in 1998 and has been followed by 12 more in the last 10 years, distributed at the different focal stations. X-shooter is the first of the second generation of VLT instruments and replaces the workhorse instrument FORS1, which has been successfully used for more than ten years by hundreds of astronomers. X-shooter operates at the Cassegrain focus of the Kueyen telescope (UT2). In response to an ESO Call for Proposals for second generation VLT instrumentation, ESO received three proposals for an intermediate resolution, high efficiency spectrograph. These were eventually merged into a single proposal around the present concept of X-shooter, which was approved for construction in November 2003. The Final Design Review, at which the instrument design is finalised and declared ready for construction, took place in April 2006. The first observations with the instrument at the telescope in its full configuration were on 14 March 2009. X-shooter is a joint project by Denmark, France, Italy, the Netherlands and ESO. The collaborating institutes in Denmark are the Niels Bohr and the DARK Institutes of the University of Copenhagen and the National Space Institute (Technical University of Denmark); in France GEPI at the Observatoire de Paris and APC at the Université D.
Diderot, with contributions from the CEA and the
Project overview of OPTIMOS-EVE: the fibre-fed multi-object spectrograph for the E-ELT
Navarro, R.; Chemla, F.; Bonifacio, P.; Flores, H.; Guinouard, I.; Huet, J.-M.; Puech, M.; Royer, F.; Pragt, J.H.; Wulterkens, G.; Sawyer, E.C.; Caldwell, M.E.; Tosh, I.A.J.; Whalley, M.S.; Woodhouse, G.F.W.; Spanò, P.; Di Marcantonio, P.; Andersen, M.I.; Dalton, G.B.; Kaper, L.; Hammer, F.
2010-01-01
OPTIMOS-EVE (OPTical Infrared Multi Object Spectrograph - Extreme Visual Explorer) is the fibre fed multi object spectrograph proposed for the European Extremely Large Telescope (E-ELT), planned to be operational in 2018 at Cerro Armazones (Chile). It is designed to provide a spectral resolution of
Quintana-Lara, Marcela
2014-01-01
This study investigates the effects of Acoustic Spectrographic Instruction on the production of the English phonological contrast /i/ and / I /. Acoustic Spectrographic Instruction is based on the assumption that physical representations of speech sounds and spectrography allow learners to objectively see and modify those non-accurate features in…
The Coude spectrograph and echelle scanner of the 2.7 m telescope at McDonald observatory
Tull, R. G.
1972-01-01
The design of the Coude spectrograph of the 2.7 m McDonald telescope is discussed. A description is given of the Coude scanner which uses the spectrograph optics, the configuration of the large echelle and the computer scanner control and data systems.
Evaluation of spectrographic standards for the carrier-distillation analysis of PuO2
International Nuclear Information System (INIS)
Martell, C.J.; Myers, W.M.
1976-05-01
Three plutonium metals whose impurity contents have been accurately determined are used to evaluate spectrographic standards. Best results are obtained when (1) highly impure samples are diluted, (2) the internal standard, cobalt, is used, (3) a linear curve is fitted to the standard data that bracket the impurity concentration, and (4) plutonium standards containing 22 impurities are used
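Point (3) above, fitting a linear working curve to standards that bracket the unknown, together with an internal-standard intensity ratio, can be sketched as follows (the concentrations and ratios below are hypothetical, not the paper's data):

```python
import math

def fit_working_curve(concs, ratios):
    """Least-squares line through (log10 C, log10 intensity-ratio) pairs.

    ratios are analyte-to-internal-standard (here: cobalt) intensity
    ratios; the internal standard cancels arc-current and sample-weight
    fluctuations, as in the abstract above.
    """
    xs = [math.log10(c) for c in concs]
    ys = [math.log10(r) for r in ratios]
    n = len(xs)
    mx, my = sum(xs)/n, sum(ys)/n
    slope = sum((x-mx)*(y-my) for x, y in zip(xs, ys)) / \
            sum((x-mx)**2 for x in xs)
    intercept = my - slope*mx
    return slope, intercept

def concentration(ratio, slope, intercept):
    """Invert the working curve for an unknown sample."""
    return 10**((math.log10(ratio) - intercept) / slope)

# hypothetical standards bracketing the unknown, concentrations in ppm
slope, b = fit_working_curve([10, 30, 100], [0.8, 2.4, 8.0])
unknown_ppm = concentration(4.0, slope, b)
```

Only standards whose concentrations bracket the measured ratio should enter the fit, mirroring recommendation (3) in the abstract.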
International Nuclear Information System (INIS)
Buffereau, M.; Deniaud, S.; Pichotin, B.; Violet, R.
1965-01-01
The improvement of spectrographic analysis by the 'carrier distillation' method with the help of a mechanical device is studied. Experiments with such an apparatus and its advantages (improved precision and reproducibility, elimination of the operator factor) are presented. A routine apparatus (French patent no 976.493) is described. (authors) [fr
Performances of X-shooter, the new wide-band intermediate resolution spectrograph at the VLT
Vernet, J.; Dekker, H.; D'Odorico, S.; Mason, E.; Di Marcantonio, P.; Downing, M.; Elswijk, E.; Finger, G.; Fischer, G.; Kerber, F.; Kern, L.; Lizon, J.-L.; Lucuix, C.; Mainieri, V.; Modigliani, A.; Patat, F.; Ramsay, S.; Santin, P.; Vidali, M.; Groot, P.; Guinouard, I.; Hammer, F.; Kaper, L.; Kjærgaard-Rasmussen, P.; Navarro, R.; Randich, S.; Zerbi, F.
2010-01-01
X-shooter is the first second-generation instrument newly commissioned at the VLT. It is a high-efficiency, single-target, intermediate-resolution spectrograph covering the range 300 - 2500 nm in a single shot. We summarize the main characteristics of the instrument and present its performances as
X-shooter, the new wide band intermediate resolution spectrograph at the ESO Very Large Telescope
Vernet, J.; Dekker, H.; D'Odorico, S.; Kaper, L.; Kjaergaard, P.; Hammer, F.; Randich, S.; Zerbi, F.; Groot, P.J.; Hjorth, J.; Guinouard, I.; Navarro, R.; Adolfse, T.; Albers, P.W.; Amans, J.-P.; Andersen, J.J.; Andersen, M.I.; Binetruy, P.; Bristow, P.; Castillo, R.; Chemla, F.; Christensen, L.; Conconi, P.; Conzelmann, R.; Dam, J.; De Caprio, V.; de Ugarte Postigo, A.; Delabre, B.; Di Marcantonio, P.; Downing, M.; Elswijk, E.; Finger, G.; Fischer, G.; Flores, H.; FranÃ§ois, P.; Goldoni, P.; Guglielmi, L.; Haigron, R.; Hanenburg, H.; Hendriks, I.; Horrobin, M.; Horville, D.; Jessen, N.C.; Kerber, F.; Kern, L.; Kiekebusch, M.; Kleszcz, P.; Klougart, J.; Kragt, J.; Larsen, H.H.; Lizon, J.-L.; Lucuix, C.; Mainieri, V.; Manuputy, R.; Martayan, C.; Mason, E.; Mazzoleni, R.; Michaelsen, N.; Modigliani, A.; Moehler, S.; Møller, P.; Norup Sørensen, A.; Nørregaard, P.; Péroux, C.; Patat, F.; Pena, E.; Pragt, J.; Reinero, C.; Rigal, F.; Riva, M.; Roelfsema, R.; Royer, F.; Sacco, G.; Santin, P.; Schoenmaker, T.; Spano, P.; Sweers, E.; ter Horst, R.; Tintori, M.; Tromp, N.; van Dael, P.; van Vliet, H.; Venema, L.; Vidali, M.; Vinther, J.; Vola, P.; Winters, R.; Wistisen, D.; Wulterkens, G.; Zacchei, A.
2011-01-01
X-shooter is the first 2nd generation instrument of the ESO Very Large Telescope (VLT). It is a very efficient, single-target, intermediate-resolution spectrograph that was installed at the Cassegrain focus of UT2 in 2009. The instrument covers, in a single exposure, the spectral range from 300 to
X-shooter, the new wide band intermediate resolution spectrograph at the ESO Very Large Telescope
DEFF Research Database (Denmark)
Vernet, J.; Dekker, H.; D'Odorico, S.
2011-01-01
X-shooter is the first 2nd generation instrument of the ESO Very Large Telescope (VLT). It is a very efficient, single-target, intermediate-resolution spectrograph that was installed at the Cassegrain focus of UT2 in 2009. The instrument covers, in a single exposure, the spectral range from 300 t...
Technical aspects of the Space Telescope Imaging Spectrograph Repair (STIS-R)
Rinehart, S. A.; Domber, J.; Faulkner, T.; Gull, T.; Kimble, R.; Klappenberger, M.; Leckrone, D.; Niedner, M.; Proffitt, C.; Smith, H.; Woodgate, B.
2008-07-01
In August 2004, the Hubble Space Telescope (HST) Space Telescope Imaging Spectrograph (STIS) ceased operation due to a failure of the 5V mechanism power converter in the Side 2 Low Voltage Power Supply (LVPS2). The failure precluded movement of any STIS mechanism and, because of the earlier (2001) loss of the Side 1 electronics chain, left the instrument shuttered and in safe mode after 7.5 years of science operations. A team was assembled to analyze the fault and to determine if STIS repair (STIS-R) was feasible. The team conclusively pinpointed the Side 2 failure to the 5V mechanism converter, and began studying EVA techniques for opening STIS during Servicing Mission 4 (SM4) to replace the failed LVPS2 board. The restoration of STIS functionality via surgical repair by astronauts has by now reached a mature and final design state, and will, along with a similar repair procedure for the Advanced Camera for Surveys (ACS), represent a first for Hubble servicing. STIS-R will restore full scientific functionality of the spectrograph on Side 2, while Side 1 will remain inoperative. Because of the high degree of complementarity between STIS and the new Cosmic Origins Spectrograph (COS, to be installed during SM4), successful repair of the older spectrograph is an important scientific objective. In this presentation, we focus on the technical aspects associated with STIS-R.
Quantitative imaging through a spectrograph : 2. stoichiometry mapping by Raman scattering
Tolboom, R.A.L.; Dam, N.J.; Meulen, ter J.J.
2004-01-01
The Bayesian deconvolution algorithm described in a preceding paper [Appl. Opt. 43, 5669–5681 (2004)] is applied to measurement of the two-dimensional stoichiometry field in a combustible methane-air mixture by Raman imaging through a spectrograph. Stoichiometry (fuel equivalence ratio) is derived
Quantitative imaging through a spectrograph. 2. Stoichiometry mapping by Raman scattering.
Tolboom, R.A.L.; Dam, N.J.; Meulen, J.J. ter
2004-01-01
The Bayesian deconvolution algorithm described in a preceding paper [Appl. Opt. 43, 5669-5681 (2004)] is applied to measurement of the two-dimensional stoichiometry field in a combustible methane-air mixture by Raman imaging through a spectrograph. Stoichiometry (fuel equivalence ratio) is derived
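The specific Bayesian deconvolution algorithm of these papers is not reproduced in the abstracts; a generic iterative maximum-likelihood deconvolution (Richardson-Lucy) illustrates the kind of operation involved in recovering a spectrally smeared Raman image recorded through a spectrograph. This 1-D sketch assumes a normalized, circularly applied point-spread function:

```python
def richardson_lucy(observed, psf, iters=100):
    """Richardson-Lucy deconvolution in 1-D with circular boundaries.

    Not the papers' Bayesian algorithm, only a generic stand-in; psf
    must be normalized (sum to 1) so flux is conserved.
    """
    n, m = len(observed), len(psf)
    est = [sum(observed)/n]*n                 # flat initial estimate
    for _ in range(iters):
        # forward model: circular convolution of the estimate with the psf
        conv = [sum(est[(i-j) % n]*psf[j] for j in range(m)) for i in range(n)]
        ratio = [o/c if c > 0 else 0.0 for o, c in zip(observed, conv)]
        # multiplicative update: correlate the ratio with the psf
        est = [e*sum(ratio[(i+j) % n]*psf[j] for j in range(m))
               for i, e in enumerate(est)]
    return est

sharp = richardson_lucy([1.0, 2.0, 3.0, 4.0], [1.0], iters=1)
```

With a delta-function psf the update returns the observed signal unchanged, a quick sanity check on the implementation.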
International Nuclear Information System (INIS)
Aumeunier, Marie-Helene
2007-01-01
The SNAP (Supernovae/Acceleration Probe) project plans to measure very precisely the cosmological parameters and to determine the nature of dark energy by observations of type Ia supernovae and weak lensing. The SNAP instrument consists in a 2-meter telescope with a one square-degree imager and a spectrograph in the visible and infrared range. A dedicated optimized integral field spectrograph based on an imager slicer technology has been developed. To test and validate the performances, two approaches have been developed: a complete simulation of the complete instrument at the pixel level and the manufacturing and test of a spectrograph prototype operating at room temperature and in cryogenic environment. In this thesis we will test the optical and functional performances of the SNAP spectrograph: especially diffraction losses, stray-light and spectro-photometric calibration. We present an original approach for the spectro-photometric calibration adapted for the slicer and the optical performances resulting from the first measurement campaign in the visible range. (author) [fr
MOONS: a multi-object optical and near-infrared spectrograph for the VLT
Cirasuolo, M.; Afonso, J.; Bender, R.; Bonifacio, P.; Evans, C.; Kaper, L.; Oliva, Ernesto; Vanzi, Leonardo; Abreu, Manuel; Atad-Ettedgui, Eli; Babusiaux, Carine; Bauer, Franz E.; Best, Philip; Bezawada, Naidu; Bryson, Ian R.; Cabral, Alexandre; Caputi, Karina; Centrone, Mauro; Chemla, Fanny; Cimatti, Andrea; Cioni, Maria-Rosa; Clementini, Gisella; Coelho, João.; Daddi, Emanuele; Dunlop, James S.; Feltzing, Sofia; Ferguson, Annette; Flores, Hector; Fontana, Adriano; Fynbo, Johan; Garilli, Bianca; Glauser, Adrian M.; Guinouard, Isabelle; Hammer, Jean-François; Hastings, Peter R.; Hess, Hans-Joachim; Ivison, Rob J.; Jagourel, Pascal; Jarvis, Matt; Kauffman, G.; Lawrence, A.; Lee, D.; Li Causi, G.; Lilly, S.; Lorenzetti, D.; Maiolino, R.; Mannucci, F.; McLure, R.; Minniti, D.; Montgomery, D.; Muschielok, B.; Nandra, K.; Navarro, R.; Norberg, P.; Origlia, L.; Padilla, N.; Peacock, J.; Pedicini, F.; Pentericci, L.; Pragt, J.; Puech, M.; Randich, S.; Renzini, A.; Ryde, N.; Rodrigues, M.; Royer, F.; Saglia, R.; Sánchez, A.; Schnetler, H.; Sobral, D.; Speziali, R.; Todd, S.; Tolstoy, E.; Torres, M.; Venema, L.; Vitali, F.; Wegner, M.; Wells, M.; Wild, V.; Wright, G.
MOONS is a new conceptual design for a Multi-Object Optical and Near-infrared Spectrograph for the Very Large Telescope (VLT), selected by ESO for a Phase A study. The baseline design consists of ~1000 fibers deployable over a field of view of ~500 square arcmin, the largest patrol field offered by
Anomalous dynamics triggered by a non-convex equation of state in relativistic flows
Ibáñez, J. M.; Marquina, A.; Serna, S.; Aloy, M. A.
2018-05-01
The non-monotonicity of the local speed of sound in dense matter at baryon number densities much higher than the nuclear saturation density (n0 ≈ 0.16 fm^-3) suggests the possible existence of a non-convex thermodynamics which will lead to a non-convex dynamics. Here, we explore the rich and complex dynamics that an equation of state (EoS) with non-convex regions in the pressure-density plane may develop as a result of genuinely relativistic effects, without a classical counterpart. To this end, we have introduced a phenomenological EoS, the parameters of which can be restricted owing to causality and thermodynamic stability constraints. This EoS can be regarded as a toy model with which we may mimic realistic (and far more complex) EoSs of practical use in the realm of relativistic hydrodynamics.
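Loss of convexity of an EoS can be detected by the sign of the second derivative of pressure with respect to density. The toy barotropic EoS below is my own stand-in, not the paper's phenomenological EoS: it is causal in the sense that dp/drho > 0 everywhere, yet d^2p/drho^2 changes sign:

```python
def pressure(rho):
    """Toy barotropic EoS p(rho) = rho^3 - 1.5 rho^2 + rho.

    dp/drho = 3 rho^2 - 3 rho + 1 > 0 for all rho (sound speed real),
    but d^2p/drho^2 = 6 rho - 3 < 0 for rho < 0.5: a non-convex region.
    """
    return rho**3 - 1.5*rho**2 + rho

def nonconvex_intervals(rho_min=0.1, rho_max=3.0, n=2000):
    """Flag densities where the central-difference d^2p/drho^2 < 0.

    A non-convex p(rho) permits the anomalous wave structures
    (e.g. composite waves) whose dynamics the abstract explores.
    """
    h = (rho_max - rho_min) / n
    flagged = []
    for i in range(1, n):
        r = rho_min + i*h
        d2 = (pressure(r-h) - 2*pressure(r) + pressure(r+h)) / h**2
        if d2 < 0:
            flagged.append(r)
    return flagged

bad = nonconvex_intervals()   # densities in the non-convex region
```

For a realistic EoS the same scan would be run along isentropes, with the fundamental derivative of gas dynamics playing the role of the sign test.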
DEFF Research Database (Denmark)
Kafle, Bishoksan; Gallagher, John Patrick
2014-01-01
We present an approach to constrained Horn clause (CHC) verification combining three techniques: abstract interpretation over a domain of convex polyhedra, specialisation of the constraints in CHCs using abstract interpretation of query-answer transformed clauses, and refinement by splitting...... in conjunction with specialisation for propagating constraints it can frequently solve challenging verification problems. This is a contribution in itself, but refinement is needed when it fails, and the question of how to refine convex polyhedral analyses has not been studied much. We present a refinement...... technique based on interpolants derived from a counterexample trace; these are used to drive a property-based specialisation that splits predicates, leading in turn to more precise convex polyhedral analyses. The process of specialisation, analysis and splitting can be repeated, in a manner similar...
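The abstract-interpretation component above can be illustrated on the simplest convex-polyhedral domain, intervals (boxes). The sketch below, a toy of my own and not the paper's CHC machinery, infers a loop invariant for "x := init; while x < bound: x := x+1" using widening followed by one narrowing pass, assuming the loop is entered:

```python
def analyze_loop(init=0, bound=10):
    """Interval abstract interpretation of: x := init; while x < bound: x := x+1.

    Intervals are one-dimensional convex polyhedra; widening jumps an
    unstable bound to infinity to force convergence, and a narrowing
    pass recovers precision from the widened fixpoint.
    """
    INF = float("inf")
    inv = (init, init)                    # invariant candidate at loop head
    while True:
        lo, hi = inv
        # one abstract loop iteration: filter x < bound, then add 1
        body = (min(lo, bound-1)+1, min(hi, bound-1)+1)
        joined = (min(lo, body[0]), max(hi, body[1]))
        if joined == inv:
            break
        # widening: any bound that grew jumps to +/- infinity
        inv = (joined[0] if joined[0] >= lo else -INF,
               joined[1] if joined[1] <= hi else INF)
    # narrowing: re-run the abstract iteration once from the widened fixpoint
    lo, hi = inv
    body = (min(lo, bound-1)+1, min(hi, bound-1)+1)
    return (min(init, body[0]), max(init, body[1]))

invariant = analyze_loop()   # -> (0, 10): x stays in [0, 10] at the loop head
```

General convex polyhedra generalize this by tracking linear relations among several variables, which is where the refinement-by-splitting of the paper becomes necessary.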
Uniform estimate of a compact convex set by a ball in an arbitrary norm
International Nuclear Information System (INIS)
Dudov, S I; Zlatorunskaya, I V
2000-01-01
The problem of the best uniform approximation of a compact convex set by a ball with respect to an arbitrary norm in the Hausdorff metric corresponding to that norm is considered. The question is reduced to a convex programming problem, which can be studied by means of convex analysis. Necessary and sufficient conditions for the solubility of this problem are obtained and several properties of its solution are described. It is proved, in particular, that the centre of at least one ball of best approximation lies in the compact set under consideration; in addition, conditions ensuring that the centres of all balls of best approximation lie in this compact set and a condition for unique solubility are obtained
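For the Euclidean norm and a convex polygon, the quantities in this problem are easy to compute: with centre c inside the set, the Hausdorff distance to the ball B(c, r) is max(R(c) - r, r - rho(c)), where R(c) is the distance to the farthest vertex and rho(c) the distance to the nearest boundary point, so the optimal radius is (R + rho)/2 and one minimises (R - rho)/2 over c. The brute-force search below is my own sketch, not the paper's convex-analytic method:

```python
import math

def best_ball(vertices, grid=60, shrink=8):
    """Best uniform (Hausdorff) Euclidean-ball approximation of a convex
    polygon, by grid search over the centre with successive refinement."""
    def R(c):
        return max(math.dist(c, v) for v in vertices)

    def rho(c):
        # distance from c to the polygon boundary (min over edges)
        d, n = float("inf"), len(vertices)
        for i in range(n):
            (x1, y1), (x2, y2) = vertices[i], vertices[(i+1) % n]
            px, py = x2-x1, y2-y1
            t = max(0.0, min(1.0, ((c[0]-x1)*px + (c[1]-y1)*py)/(px*px+py*py)))
            d = min(d, math.dist(c, (x1+t*px, y1+t*py)))
        return d

    xs = [v[0] for v in vertices]; ys = [v[1] for v in vertices]
    cx, cy = (min(xs)+max(xs))/2, (min(ys)+max(ys))/2
    span = max(max(xs)-min(xs), max(ys)-min(ys))/2
    best = None
    for _ in range(shrink):                     # grid search + refinement
        for i in range(grid+1):
            for j in range(grid+1):
                c = (cx - span + 2*span*i/grid, cy - span + 2*span*j/grid)
                val = (R(c) - rho(c))/2
                if best is None or val < best[0]:
                    best = (val, c)
        cx, cy = best[1]
        span /= grid/4
    h, c = best
    return c, (R(c) + rho(c))/2, h             # centre, radius, Hausdorff dist

centre, radius, hdist = best_ball([(0, 0), (2, 0), (2, 2), (0, 2)])
```

For the 2x2 square the optimal centre is the centre of the square, consistent with the paper's result that the centre of a best ball lies in the compact set; the Hausdorff distance is (sqrt(2)-1)/2.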
Schur-Convexity for a Class of Symmetric Functions and Its Applications
Directory of Open Access Journals (Sweden)
Wei-Feng Xia
2009-01-01
Full Text Available For x=(x1,x2,…,xn)∈R+n, the symmetric function ϕn(x,r) is defined by ϕn(x,r)=ϕn(x1,x2,…,xn;r)=∏1≤i1<…
EVAPORATION FORM OF ICE CRYSTALS IN SUBSATURATED AIR AND THEIR EVAPORATION MECHANISM
ゴンダ, タケヒコ; セイ, タダノリ; Takehiko, GONDA; Tadanori, SEI
1987-01-01
The evaporation form and the evaporation mechanism of dendritic ice crystals grown in air of 1.0×10^5 Pa and at water saturation and polyhedral ice crystals grown in air of 4.0×10 Pa and at relatively low supersaturation are studied. In the case of dendritic ice crystals, the evaporation preferentially occurs in the convex parts of the crystal surfaces and in minute secondary branches. On the other hand, in the case of polyhedral ice crystals, the evaporation preferentially occurs in the pa...
A parallel Discrete Element Method to model collisions between non-convex particles
Directory of Open Access Journals (Sweden)
Rakotonirina Andriarimina Daniel
2017-01-01
Full Text Available In many dry granular and suspension flow configurations, particles can be highly non-spherical. It is now well established in the literature that particle shape affects the flow dynamics or the microstructure of the particles assembly in assorted ways as e.g. compacity of packed bed or heap, dilation under shear, resistance to shear, momentum transfer between translational and angular motions, ability to form arches and block the flow. In this talk, we suggest an accurate and efficient way to model collisions between particles of (almost) arbitrary shape. For that purpose, we develop a Discrete Element Method (DEM) combined with a soft particle contact model. The collision detection algorithm handles contacts between bodies of various shape and size. For non-convex bodies, our strategy is based on decomposing a non-convex body into a set of convex ones. Therefore, our novel method can be called “glued-convex method” (in the sense of clumping convex bodies together), as an extension of the popular “glued-spheres” method, and is implemented in our own granular dynamics code Grains3D. Since the whole problem is solved explicitly, our fully-MPI parallelized code Grains3D exhibits a very high scalability when dynamic load balancing is not required. In particular, simulations on up to a few thousands cores in configurations involving up to a few tens of millions of particles can readily be performed. We apply our enhanced numerical model to (i) the collapse of a granular column made of convex particles and (ii) the microstructure of a heap of non-convex particles in a cylindrical reactor.
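The soft-particle contact model mentioned above is commonly a linear spring-dashpot acting on the overlap between convex elements. The sketch below shows that normal-force law for two spherical elements of a glued body; the stiffness and damping values are illustrative, not Grains3D parameters:

```python
import math

def sphere_contact_force(p1, r1, p2, r2, v_rel, kn=1.0e5, gamma=50.0):
    """Linear spring-dashpot normal contact force between two spheres.

    Returns the force on particle 1; (0, 0, 0) if there is no overlap.
    v_rel is the velocity of particle 1 relative to particle 2.
    """
    d = [a - b for a, b in zip(p1, p2)]
    dist = math.sqrt(sum(x*x for x in d))
    overlap = r1 + r2 - dist
    if overlap <= 0.0:
        return (0.0, 0.0, 0.0)
    n = [x/dist for x in d]                  # unit normal, from 2 toward 1
    vn = sum(v*c for v, c in zip(v_rel, n))  # normal relative velocity
    fn = kn*overlap - gamma*vn               # elastic spring + viscous dashpot
    return tuple(fn*c for c in n)

# two unit spheres overlapping by 0.1, particle 1 approaching at 1 m/s
f = sphere_contact_force((0, 0, 0), 1.0, (1.9, 0, 0), 1.0, (1.0, 0, 0))
```

In a glued-convex body the total force on the composite is the sum of such pairwise contact forces over its convex elements, plus the resulting torques about the centre of mass.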
Optical design of a versatile FIRST high-resolution near-IR spectrograph
Zhao, Bo; Ge, Jian
2012-09-01
We report the updated optical design of the versatile FIRST high resolution near IR spectrograph, the Florida IR Silicon immersion grating spectromeTer (FIRST). This spectrograph uses a cross-dispersed echelle design with white pupils and also takes advantage of image slicing to increase the spectral resolution while maintaining the instrument throughput. An extremely high dispersion R1.4 (blaze angle 54.74°) silicon immersion grating with a 49 mm diameter pupil is used as the main disperser at 1.4μm-1.8μm to produce R=72,000, while an R4 echelle with the same pupil diameter produces R=60,000 at 0.8μm-1.35μm. Two cryogenic Volume Phase Holographic (VPH) gratings are used as cross-dispersers to allow simultaneous wavelength coverage of 0.8μm-1.8μm. The butterfly mirrors and dichroic beamsplitters make a compact folding system to record these two wavelength bands with a 2k×2k H2RG array in a single exposure. By inserting a mirror before the grating dispersers (the SIG and the echelle), this spectrograph becomes a very efficient integral field 3-D imaging spectrograph with R=2,000-4,000 at 0.8μm-1.8μm by coupling a 10x10 telescope fiber bundle with the spectrograph. Details about the optical design and performance are reported.
Method of convex rigid frames and applications in studies of multipartite quNit pure states
International Nuclear Information System (INIS)
Zhong Zaizhe
2005-01-01
In this letter, we suggest a method of convex rigid frames for the study of multipartite quNit pure states. We explain what convex rigid frames are and how the method is used. As applications, we use this method to solve some basic problems and give some new results (three theorems): first, the problem of the partial separability of multipartite quNit pure states and its geometric explanation; second, the problem of the classification of multipartite quNit pure states, giving a perfect explanation via the local unitary transformations; thirdly, we discuss the invariants of the classes and give a possible physical explanation. (letter to the editor)
Optimization of Transverse Oscillating Fields for Vector Velocity Estimation with Convex Arrays
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2013-01-01
A method for making Vector Flow Images using the transverse oscillation (TO) approach on a convex array is presented. The paper presents optimization schemes for TO fields for convex probes and evaluates their performance using Field II simulations and measurements using the SARUS experimental...... from 90 to 45 degrees in steps of 15 degrees. The optimization routine changes the lateral oscillation period lx to yield the best possible estimates based on the energy ratio between positive and negative spatial frequencies in the ultrasound field. The basic equation for lx gives 1.14 mm at 40 mm...
The canonical partial metric and the uniform convexity on normed spaces
Directory of Open Access Journals (Sweden)
S. Oltra
2005-10-01
Full Text Available In this paper we introduce the notion of canonical partial metric associated to a norm to study geometric properties of normed spaces. In particular, we characterize strict convexity and uniform convexity of normed spaces in terms of the canonical partial metric defined by its norm. We prove that these geometric properties can be considered, in this sense, as topological properties that appear when we compare the natural metric topology of the space with the non translation invariant topology induced by the canonical partial metric in the normed space.
License or entry decision for innovator in international duopoly with convex cost functions
Hattori, Masahiko; Tanaka, Yasuhito
2017-01-01
We consider a choice of options for a foreign innovating firm to license its new cost-reducing technology to a domestic incumbent firm or to enter the domestic market with or without license under convex cost functions. With convex cost functions the domestic market and the foreign market are not separated, and the results depend on the relative size of those markets. In a specific case with linear demand and quadratic cost, entry without license strategy is never the optimal strategy for the...
Crystal Nucleation Using Surface-Energy-Modified Glass Substrates.
Nordquist, Kyle A; Schaab, Kevin M; Sha, Jierui; Bond, Andrew H
2017-08-02
Systematic surface energy modifications to glass substrates can induce nucleation and improve crystallization outcomes for small molecule active pharmaceutical ingredients (APIs) and proteins. A comparatively broad probe for function is presented in which various APIs, proteins, organic solvents, aqueous media, surface energy motifs, crystallization methods, form factors, and flat and convex surface energy modifications were examined. Replicate studies (n ≥ 6) have demonstrated an average reduction in crystallization onset times of 52(4)% (alternatively 52 ± 4%) for acetylsalicylic acid from 91% isopropyl alcohol using two very different techniques: bulk cooling to 0 °C using flat surface energy modifications or microdomain cooling to 4 °C from the interior of a glass capillary having convex surface energy modifications that were immersed in the solution. For thaumatin and bovine pancreatic trypsin, a 32(2)% reduction in crystallization onset times was demonstrated in vapor diffusion experiments (n ≥ 15). Nucleation site arrays have been engineered onto form factors frequently used in crystallization screening, including microscope slides, vials, and 96- and 384-well high-throughput screening plates. Nucleation using surface energy modifications on the vessels that contain the solutes to be crystallized adds a layer of useful variables to crystallization studies without requiring significant changes to workflows or instrumentation.
Directory of Open Access Journals (Sweden)
Ghulam Farid
2017-10-01
Full Text Available The aim of this paper is to obtain some more general fractional integral inequalities of Fejér-Hadamard type for p-convex functions via Riemann-Liouville k-fractional integrals. In particular, fractional inequalities for p-convex functions via Riemann-Liouville fractional integrals are deduced.
International Nuclear Information System (INIS)
Gosteva, T.S.; Zablotskaya, G.R.; Ivanov, B.A.; Kolyubakin, S.A.; Chernobrovin, V.I.
1975-01-01
Specific features of a magnetic spectrograph with semicircular focusing are described; the spectrograph has been designed to study, using the REP-5 pulsed accelerator, the energy spectra of electrons with a current of 50 kA and a pulse duration of 20 ns in the energy range 0.2 to 3 MeV. The beam has been transported in a drift chamber where the air pressure varies from 10^-3 to 40 torr. The chamber is 50 cm long and 12 cm in diameter. The spectrograph vacuum chamber is made in the form of a plane rectangular box with a degassing fitting. The uniform magnetic field in the spectrograph gap is provided by permanent magnets (ferrite-barium plates). The collimator and the chamber walls on which the magnets are located are made of low-carbon electrotechnical steel. The diameters of the collimator entrance and exit windows are 2 and 0.2 mm, respectively. To screen the photofilm in the spectrograph chamber from x-radiation, there are three disks on the spectrograph flange on the side of the drift chamber; they are made of lead, steel, and aluminium. The steel disk, besides, screens the space in front of the collimator entrance window from the scattered magnetic field. During the experiments the pressure in the spectrograph chamber has varied from 7×10^-3 to 10^-1 torr. Electrons are registered using the RT-1 and RT-5 x-ray films 1×18 cm in size. The spectrograph described makes it possible to obtain a well-resolved electron spectrum during a pulse. The electron spectra obtained by means of the spectrograph at a pressure of 4×10^-1 torr in the drift chamber and a charge voltage of 3.2 MV in the line are shown [ru
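In a semicircular-focusing spectrograph the electron energy maps to the orbit radius through the magnetic rigidity relation p*c = 299.79 MeV per T·m of B·r; the film records the orbit diameter 2r. A short sketch over the abstract's 0.2-3 MeV range, with a hypothetical field value (the abstract does not give B):

```python
import math

MEC2 = 0.511          # electron rest energy [MeV]

def bend_radius_cm(T_mev, B_tesla):
    """Radius of the semicircular electron orbit in a uniform field.

    Relativistic momentum: p*c = sqrt(T*(T + 2*m_e*c^2)) in MeV, then
    r [m] = p*c [MeV] / (299.79 * B [T]) from the rigidity relation.
    """
    pc = math.sqrt(T_mev * (T_mev + 2*MEC2))
    return 100.0 * pc / (299.79 * B_tesla)

# hypothetical 0.05 T ferrite field, energies spanning the stated range
radii = {T: bend_radius_cm(T, 0.05) for T in (0.2, 1.0, 3.0)}
```

Because r grows monotonically with energy, position along the film is a direct, single-pulse record of the spectrum, which is the point made at the end of the abstract.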
DEFF Research Database (Denmark)
Wei, Lei; Khomtchenko, Elena; Alkeskjold, Thomas Tanggaard
2009-01-01
Thick photoresist coating for electrode patterning in an anisotropically etched V-groove is investigated for electrically controlled liquid crystal photonic bandgap fibre devices. The photoresist step coverage at the convex corners is compared with and without soft baking after photoresist spin...
Convexity of Energy-Like Functions: Theoretical Results and Applications to Power System Operations
Energy Technology Data Exchange (ETDEWEB)
Dvijotham, Krishnamurthy [California Inst. of Technology (CalTech), Pasadena, CA (United States); Low, Steven [California Inst. of Technology (CalTech), Pasadena, CA (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-01-12
Power systems are undergoing unprecedented transformations with increased adoption of renewables and distributed generation, as well as the adoption of demand response programs. All of these changes, while making the grid more responsive and potentially more efficient, pose significant challenges for power systems operators. Conventional operational paradigms are no longer sufficient as the power system may no longer have big dispatchable generators with sufficient positive and negative reserves. This increases the need for tools and algorithms that can efficiently predict safe regions of operation of the power system. In this paper, we study energy functions as a tool to design algorithms for various operational problems in power systems. These have a long history in power systems and have been primarily applied to transient stability problems. In this paper, we take a new look at power systems, focusing on an aspect that has previously received little attention: Convexity. We characterize the domain of voltage magnitudes and phases within which the energy function is convex in these variables. We show that this corresponds naturally with standard operational constraints imposed in power systems. We show that power flow equations can be solved using this approach, as long as the solution lies within the convexity domain. We outline various desirable properties of solutions in the convexity domain and present simple numerical illustrations supporting our results.
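The idea can be seen on a lossless two-bus system (my own toy, not the paper's formulation): the energy function E(theta) = -b*cos(theta) - P*theta has E''(theta) = b*cos(theta) > 0 exactly when |theta| < pi/2, matching the usual operational limit on angle differences, and minimizing E inside that convexity domain solves the power flow:

```python
import math

def solve_two_bus(P, b=1.0):
    """Power-flow solution of a lossless two-bus system via the energy
    function E(theta) = -b*cos(theta) - P*theta.

    P is the per-unit power injection, b the line susceptance. Newton's
    method on the convex E stays inside |theta| < pi/2 and converges to
    the stable solution theta* = asin(P/b).
    """
    if abs(P) >= b:
        raise ValueError("no solution: |P| must be < b")
    theta = 0.0
    for _ in range(100):
        g = b*math.sin(theta) - P        # E'(theta): power balance residual
        h = b*math.cos(theta)            # E''(theta) > 0 on the domain
        theta -= g/h
        if abs(g) < 1e-12:
            break
    return theta

theta = solve_two_bus(0.5)   # stable operating angle for P = 0.5 p.u.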
Perimeter generating functions for the mean-squared radius of gyration of convex polygons
International Nuclear Information System (INIS)
Jensen, Iwan
2005-01-01
We have derived long series expansions for the perimeter generating functions of the radius of gyration of various polygons with a convexity constraint. Using the series we numerically find simple (algebraic) exact solutions for the generating functions. In all cases the size exponent is ν = 1. (letter to the editor)
A Convex Variational Model for Restoring Blurred Images with Multiplicative Noise
DEFF Research Database (Denmark)
Dong, Yiqiu; Tieyong Zeng
2013-01-01
In this paper, a new variational model for restoring blurred images with multiplicative noise is proposed. Based on the statistical property of the noise, a quadratic penalty function technique is utilized in order to obtain a strictly convex model under a mild condition, which guarantees...
Modeling IrisCode and its variants as convex polyhedral cones and its security implications.
Kong, Adams Wai-Kin
2013-03-01
IrisCode, developed by Daugman, in 1993, is the most influential iris recognition algorithm. A thorough understanding of IrisCode is essential, because over 100 million persons have been enrolled by this algorithm and many biometric personal identification and template protection methods have been developed based on IrisCode. This paper indicates that a template produced by IrisCode or its variants is a convex polyhedral cone in a hyperspace. Its central ray, being a rough representation of the original biometric signal, can be computed by a simple algorithm, which can often be implemented in one Matlab command line. The central ray is an expected ray and also an optimal ray of an objective function on a group of distributions. This algorithm is derived from geometric properties of a convex polyhedral cone but does not rely on any prior knowledge (e.g., iris images). The experimental results show that biometric templates, including iris and palmprint templates, produced by different recognition methods can be matched through the central rays in their convex polyhedral cones and that templates protected by a method extended from IrisCode can be broken into. These experimental results indicate that, without a thorough security analysis, convex polyhedral cone templates cannot be assumed secure. Additionally, the simplicity of the algorithm implies that even junior hackers without knowledge of advanced image processing and biometric databases can still break into protected templates and reveal relationships among templates produced by different recognition methods.
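The central-ray idea can be illustrated with a toy model (random Gaussian rows stand in for the actual IrisCode Gabor filters; this is a simplification for intuition, not Daugman's algorithm or the paper's exact procedure): a bit-weighted sum of filter rows lies inside the template's convex polyhedral cone and reproduces the template when re-encoded.

```python
import numpy as np

rng = np.random.default_rng(0)
n_filters, dim = 64, 1024

# Hypothetical filter bank: random rows stand in for Gabor filters.
W = rng.standard_normal((n_filters, dim))

signal = rng.standard_normal(dim)
bits = np.sign(W @ signal)          # +/-1 template bits (toy "IrisCode")

# Central-ray style reconstruction: a bit-weighted sum of filter rows
# lies inside the convex cone of signals producing this template.
ray = W.T @ bits

recovered = np.sign(W @ ray)
print((recovered == bits).mean())   # close to 1.0: the ray reproduces the template
```

This is the security concern in miniature: the ray is a rough stand-in for the enrolled signal, recovered from the template alone.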
Unifying kinetic approach to phoretic forces and torques onto moving and rotating convex particles
Kröger, M.; Hütter, M.
2006-01-01
We derive general expressions and present several examples for the phoretic forces and torques acting on a translationally moving and rotating convex tracer particle, usually a submicrosized aerosol particle, assumed to be small compared to the mean free path of the surrounding nonequilibrium gas.
Rooij, van I.; Stege, U.; Schactman, A.
2003-01-01
Recently there has been growing interest among psychologists in human performance on the Euclidean traveling salesperson problem (E-TSP). A debate has been initiated on what strategy people use in solving visually presented E-TSP instances. The most prominent hypothesis is the convex-hull
On the convex hull of the simple integer recourse objective function
Klein Haneveld, Willem K.; Stougie, L.; van der Vlerk, Maarten H.
1995-01-01
We consider the objective function of a simple integer recourse problem with fixed technology matrix. Using properties of the expected value function, we prove a relation between the convex hull of this function and the expected value function of a continuous simple recourse program. We present an
On evolving deformation microstructures in non-convex partially damaged solids
Gurses, Ercan; Miehe, Christian
2011-01-01
. These microstructures can be resolved by use of relaxation techniques associated with the construction of convex hulls. We propose a particular relaxation method for partially damaged solids and investigate it in one- and multi-dimensional settings. To this end, we
Convex Bodies With Minimal Volume Product in R^2 --- A New Proof
Lin, Youjiang
2010-01-01
In this paper, a new proof of the following result is given: The product of the volumes of an origin-symmetric convex body $K$ in R^2 and of its polar body is minimal if and only if $K$ is a parallelogram.
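The statement can be checked numerically through support functions: since area(K°) = ½∫ h_K(φ)^(−2) dφ for a body containing the origin, the volume product of the square (a parallelogram) comes out to 8, below the disk's π². This is a numeric sketch of the claim, not part of any proof.

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 200_001)

def polar_area(h):
    # area(K°) = 1/2 * ∫ h_K(φ)^(-2) dφ   (K contains the origin)
    y = h(phi) ** -2.0
    d = phi[1] - phi[0]
    return 0.5 * d * (y.sum() - 0.5 * (y[0] + y[-1]))  # trapezoid rule

# Square K = [-1,1]^2: support function h(φ) = |cos φ| + |sin φ|, area 4.
sq = 4.0 * polar_area(lambda p: np.abs(np.cos(p)) + np.abs(np.sin(p)))

# Unit disk: h = 1, area π; its polar body is again the unit disk.
disk = np.pi * polar_area(lambda p: np.ones_like(p))

print(round(sq, 3), round(disk, 3))  # ~8.0 vs ~9.87 (= π²): the square is smaller
```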
Multiobjective optimization of classifiers by means of 3D convex-hull-based evolutionary algorithms
Zhao, J.; Basto, Fernandes V.; Jiao, L.; Yevseyeva, I.; Asep, Maulana A.; Li, R.; Bäck, T.H.W.; Tang, T.; Michael, Emmerich T. M.
2016-01-01
The receiver operating characteristic (ROC) and detection error tradeoff(DET) curves are frequently used in the machine learning community to analyze the performance of binary classifiers. Recently, the convex-hull-based multiobjective genetic programming algorithm was proposed and successfully
Deformation patterning driven by rate dependent non-convex strain gradient plasticity
Yalcinkaya, T.; Brekelmans, W.A.M.; Geers, M.G.D.
2011-01-01
A rate dependent strain gradient plasticity framework for the description of plastic slip patterning in a system with non-convex energetic hardening is presented. Both the displacement and the plastic slip fields are considered as primary variables. These fields are determined on a global level by
Parthood and Convexity as the Basic Notions of a Theory of Space
DEFF Research Database (Denmark)
Robering, Klaus
A deductive system of geometry is presented which is based on atomistic mereology ("mereology with points'') and the notion of convexity. The system is formulated in a liberal many-sorted logic which makes use of class-theoretic notions without however adopting any comprehension axioms. The geome...
Primal Recovery from Consensus-Based Dual Decomposition for Distributed Convex Optimization
Simonetto, A.; Jamali-Rad, H.
2015-01-01
Dual decomposition has been successfully employed in a variety of distributed convex optimization problems solved by a network of computing and communicating nodes. Often, when the cost function is separable but the constraints are coupled, the dual decomposition scheme involves local parallel
Directory of Open Access Journals (Sweden)
Xuewen Mu
2015-01-01
quadratic programming over second-order cones and a bounded set. At each iteration, we only need to compute the metric projection onto the second-order cones and the projection onto the bound set. The result of convergence is given. Numerical results demonstrate that our method is efficient for the convex quadratic second-order cone programming problems with bounded constraints.
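The metric projection onto a second-order cone that each iteration relies on has a well-known closed form (standard formula; variable names are ours, and the projection onto the bound set is just a componentwise clip):

```python
import numpy as np

def project_soc(x, t):
    """Euclidean projection of the point (x, t) onto the second-order cone
    {(x, t) : ||x||_2 <= t}, via the standard closed-form three-case formula."""
    nx = np.linalg.norm(x)
    if nx <= t:                      # already inside the cone
        return x, t
    if nx <= -t:                     # inside the polar cone: project to the origin
        return np.zeros_like(x), 0.0
    alpha = (nx + t) / 2.0           # otherwise: project to the cone's boundary
    return alpha * x / nx, alpha

x, t = project_soc(np.array([3.0, 4.0]), 0.0)
print(x, t)  # [1.5 2. ] 2.5
```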
A DEEP CUT ELLIPSOID ALGORITHM FOR CONVEX-PROGRAMMING - THEORY AND APPLICATIONS
FRENK, JBG; GROMICHO, J; ZHANG, S
1994-01-01
This paper proposes a deep cut version of the ellipsoid algorithm for solving a general class of continuous convex programming problems. In each step the algorithm does not require more computational effort to construct these deep cuts than its corresponding central cut version. Rules that prevent
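A minimal sketch of a deep-cut ellipsoid iteration for minimizing a smooth convex function in 2D, using the standard deep-cut update formulas rather than the paper's specific cut-selection rules (the test problem and all constants are ours):

```python
import numpy as np

def deep_cut_ellipsoid(f, grad, x, P, iters=200):
    """Minimize a convex f over R^n. (x, P) define the initial ellipsoid
    {z : (z - x)^T P^{-1} (z - x) <= 1}, assumed to contain the minimizer."""
    n = len(x)
    best_x, best_f = x.copy(), f(x)
    for _ in range(iters):
        fx, g = f(x), grad(x)
        gPg = np.sqrt(g @ P @ g)
        # Objective deep cut: every point better than the incumbent satisfies
        # g.(z - x) <= -(fx - best_f); alpha = 0 recovers the central cut.
        alpha = min(max(fx - best_f, 0.0) / gPg, 0.95)
        if fx < best_f:
            best_f, best_x = fx, x.copy()
        gt = g / gPg
        tau = (1 + n * alpha) / (n + 1)
        sigma = 2 * (1 + n * alpha) / ((n + 1) * (1 + alpha))
        delta = n * n * (1 - alpha * alpha) / (n * n - 1)
        Pg = P @ gt
        x = x - tau * Pg                          # new ellipsoid center
        P = delta * (P - sigma * np.outer(Pg, Pg))  # new shape matrix
    return best_x, best_f

f = lambda z: (z[0] - 1.0) ** 2 + (z[1] + 2.0) ** 2
grad = lambda z: np.array([2.0 * (z[0] - 1.0), 2.0 * (z[1] + 2.0)])
best_x, best_f = deep_cut_ellipsoid(f, grad, np.zeros(2), 25.0 * np.eye(2))
print(best_x, best_f)   # best_x ≈ [1, -2], best_f near 0
```

A deeper cut (larger alpha) shrinks the ellipsoid volume faster than the central cut at the same per-step cost, which is the point the abstract makes.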
On the rank 1 convexity of stored energy functions of physically linear stress-strain relations
Czech Academy of Sciences Publication Activity Database
Šilhavý, Miroslav; Bertram, A.; Böhlke, T.
2007-01-01
Roč. 86, č. 3 (2007), s. 235-243 ISSN 0374-3535 Institutional research plan: CEZ:AV0Z10190503 Keywords: generalized linear elastic laws * generalized strain measures * rank 1 convexity Subject RIV: BA - General Mathematics Impact factor: 0.743, year: 2007
Directory of Open Access Journals (Sweden)
Mengkun Zhu
2015-01-01
Full Text Available Some sharp estimates of coefficients, distortion, and growth for harmonic mappings with analytic parts convex or starlike functions of order β are obtained. We also give area estimates and covering theorems. Our main results generalise those of Klimek and Michalski.
On the Monotonicity and Log-Convexity of a Four-Parameter Homogeneous Mean
Directory of Open Access Journals (Sweden)
Yang Zhen-Hang
2008-01-01
Full Text Available Abstract A four-parameter homogeneous mean is defined by another approach. The criterion of its monotonicity and logarithmically convexity is presented, and three refined chains of inequalities for two-parameter mean values are deduced which contain many new and classical inequalities for means.
Convex order approximations in case of cash flows of mixed signs
Dhaene, J.; Goovaerts, M.J.; Vanmaele, M.; van Weert, K.
2012-01-01
In Van Weert et al. (2010), results are obtained showing that, when allowing some of the cash flows to be negative, convex order lower bound approximations can still be used to solve general investment problems in a context of provisioning or terminal wealth. In this paper, a correction and further
From a Nonlinear, Nonconvex Variational Problem to a Linear, Convex Formulation
International Nuclear Information System (INIS)
Egozcue, J.; Meziat, R.; Pedregal, P.
2002-01-01
We propose a general approach to deal with nonlinear, nonconvex variational problems based on a reformulation of the problem resulting in an optimization problem with linear cost functional and convex constraints. As a first step we explicitly explore these ideas to some one-dimensional variational problems and obtain specific conclusions of an analytical and numerical nature
Headache as a crucial symptom in the etiology of convexal subarachnoid hemorrhage.
Rico, María; Benavente, Lorena; Para, Marta; Santamarta, Elena; Pascual, Julio; Calleja, Sergio
2014-03-01
Convexal subarachnoid hemorrhage has been associated with different diseases, reversible cerebral vasoconstriction syndrome and cerebral amyloid angiopathy being the 2 main causes. To investigate whether headache at onset is determinant in identifying the underlying etiology for convexal subarachnoid hemorrhage. After searching in the database of our hospital, 24 patients were found with convexal subarachnoid hemorrhage in the last 10 years. The mean age of the sample was 69.5 years. We recorded data referring to demographics, symptoms and neuroimaging. Cerebral amyloid angiopathy patients accounted for 46% of the sample, 13% were diagnosed with reversible cerebral vasoconstriction syndrome, 16% with several other etiologies, and in 25%, the cause remained unknown. Mild headache was present only in 1 (9%) of the 11 cerebral amyloid angiopathy patients, while severe headache was the dominant feature in 86% of cases of the remaining etiologies. Headache is a key symptom allowing a presumptive etiological diagnosis of convexal subarachnoid hemorrhage. While the absence of headache suggests cerebral amyloid angiopathy as the more probable cause, severe headache obliges us to rule out other etiologies, such as reversible cerebral vasoconstriction syndrome. © 2013 American Headache Society.
Neuro-genetic hybrid approach for the solution of non-convex economic dispatch problem
International Nuclear Information System (INIS)
Malik, T.N.; Asar, A.U.
2009-01-01
ED (Economic Dispatch) is a non-convex constrained optimization problem, and is used for both on-line and off-line studies in power system operation. Conventionally, it is solved as a convex problem using optimization techniques by approximating the generator input/output characteristic curves as monotonically increasing, thus resulting in an inaccurate dispatch. The GA (Genetic Algorithm) has been used for the solution of this problem owing to its inherent ability to address convex and non-convex problems equally. This approach brings the solution to the global minimum region of the search space in a short time, but then takes longer to converge to near-optimal results. GA-based hybrid approaches are used to fine-tune the near-optimal results produced by the GA. This paper proposes an NGH (Neuro-Genetic Hybrid) approach to solve the economic dispatch problem with the valve-point effect. The proposed approach combines the GA with an ANN (Artificial Neural Network) using an SI (Swarm Intelligence) learning rule. The GA acts as a global optimizer and the neural network fine-tunes the GA results toward the desired targets. A standard three-machine test system has been used to validate the approach. Comparing the results with the GA and with an NGH model based on back-propagation learning, the proposed approach shows marked improvements, demonstrating the promise of the approach. (author)
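The valve-point effect that makes the dispatch cost non-convex is a rectified-sine ripple added to the quadratic fuel cost. A quick numerical check with made-up coefficients (not from the paper) shows Jensen's inequality failing on the curve, which is why convex solvers mis-handle it:

```python
import numpy as np

# Illustrative single-generator fuel-cost curve with valve-point effect:
#   F(P) = a + b*P + c*P^2 + |e * sin(f * (Pmin - P))|
# Coefficients below are invented; real ones come from unit data ($/h vs MW).
a, b, c, e, f, Pmin = 100.0, 2.0, 0.002, 100.0, 0.084, 50.0

def cost(P):
    return a + b * P + c * P ** 2 + np.abs(e * np.sin(f * (Pmin - P)))

# A function is non-convex iff the midpoint lies above the chord somewhere:
# scan a grid for a violation of Jensen's inequality.
P = np.linspace(Pmin, 500.0, 2001)
mid = cost((P[:-1] + P[1:]) / 2)
chord = (cost(P[:-1]) + cost(P[1:])) / 2
nonconvex = bool((mid > chord + 1e-9).any())
print(nonconvex)  # True: the dispatch cost with valve-point ripple is non-convex
```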
Extreme points of the convex set of joint probability distributions with ...
Indian Academy of Sciences (India)
Here we address the following problem: If G is a standard ... convex set of all joint probability distributions on the product Borel space (X1 ×X2, F1 ⊗. F2) which .... cannot be identically zero when X and Y vary in A1 and u and v vary in H2. Thus.
Mean-square performance of a convex combination of two adaptive filters
DEFF Research Database (Denmark)
Garcia, Jeronimo; Figueiras-Vidal, A.R.; Sayed, A.H.
2006-01-01
Combination approaches provide an interesting way to improve adaptive filter performance. In this paper, we study the mean-square performance of a convex combination of two transversal filters. The individual filters are independently adapted using their own error signals, while the combination i...
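The scheme the abstract studies can be sketched as follows: two LMS filters with different step sizes are adapted independently on their own errors, while a sigmoid-parameterized mixing weight is adapted by stochastic gradient on the combined error. The toy system-identification setup and all step sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 4, 5000
w_true = rng.standard_normal(d)          # unknown system to identify

w1 = np.zeros(d); w2 = np.zeros(d)       # fast and slow LMS filters
mu1, mu2, mu_a = 0.05, 0.005, 10.0       # step sizes (illustrative)
aa = 0.0                                  # mixing state: lam = sigmoid(aa)
err = []

for _ in range(N):
    x = rng.standard_normal(d)
    dn = w_true @ x + 0.01 * rng.standard_normal()   # desired + noise
    y1, y2 = w1 @ x, w2 @ x
    w1 += mu1 * (dn - y1) * x            # each filter adapts on its own error
    w2 += mu2 * (dn - y2) * x
    lam = 1.0 / (1.0 + np.exp(-aa))
    y = lam * y1 + (1.0 - lam) * y2      # convex combination of the two outputs
    e = dn - y
    aa += mu_a * e * (y1 - y2) * lam * (1.0 - lam)   # gradient step on the mixer
    aa = np.clip(aa, -4.0, 4.0)          # keep the sigmoid from saturating
    err.append(float(e * e))

print(np.mean(err[-500:]))               # small steady-state MSE, near the noise floor
```

The combination tracks whichever component filter is currently better, which is the performance gain the abstract analyzes.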
Directory of Open Access Journals (Sweden)
San-Yang Liu
2014-01-01
Full Text Available Two unified frameworks of some sufficient descent conjugate gradient methods are considered. Combined with the hyperplane projection method of Solodov and Svaiter, they are extended to solve convex constrained nonlinear monotone equations. Their global convergence is proven under some mild conditions. Numerical results illustrate that these methods are efficient and can be applied to solve large-scale nonsmooth equations.
Robust Nearfield Wideband Beamforming Design Based on Adaptive-Weighted Convex Optimization
Directory of Open Access Journals (Sweden)
Guo Ye-Cai
2017-01-01
Full Text Available Nearfield wideband beamformers for microphone arrays have wide applications in multichannel speech enhancement. The nearfield wideband beamformer design based on convex optimization is one of the typical representatives of robust approaches. However, in this approach, the coefficient of convex optimization is a constant, which does not use all the freedom provided by the weighting coefficient efficiently. Therefore, it is still necessary to further improve the performance. To solve this problem, we developed a robust nearfield wideband beamformer design approach based on adaptive-weighted convex optimization. The proposed approach defines an adaptive-weighted function by the adaptive array signal processing theory and adjusts its value flexibly, which has improved the beamforming performance. During each process of the adaptive updating of the weighting function, the convex optimization problem can be formulated as a SOCP (Second-Order Cone Program) problem, which could be solved efficiently using the well-established interior-point methods. This method is suitable for the case where the sound source is in the nearfield range, can work well in the presence of microphone mismatches, and is applicable to arbitrary array geometries. Several design examples are presented to verify the effectiveness of the proposed approach and the correctness of the theoretical analysis.
A Deep Cut Ellipsoid Algorithm for convex Programming: theory and Applications
Frenk, J.B.G.; Gromicho Dos Santos, J.A.; Zhang, S.
1994-01-01
This paper proposes a deep cut version of the ellipsoid algorithm for solving a general class of continuous convex programming problems. In each step the algorithm does not require more computational effort to construct these deep cuts than its corresponding central cut version. Rules that prevent
A deep cut ellipsoid algorithm for convex programming : Theory and applications
J.B.G. Frenk (Hans); J.A.S. Gromicho (Joaquim); S. Zhang (Shuzhong)
1994-01-01
textabstractThis paper proposes a deep cut version of the ellipsoid algorithm for solving a general class of continuous convex programming problems. In each step the algorithm does not require more computational effort to construct these deep cuts than its corresponding central cut version. Rules
Study on feed forward neural network convex optimization for LiFePO4 battery parameters
Liu, Xuepeng; Zhao, Dongmei
2017-08-01
Based on the LiFePO4 battery of modern facility-agriculture automatic walking equipment, the parameter identification of the LiFePO4 battery is analyzed. An improved method for the process model of the Li battery is proposed, and an on-line estimation algorithm is presented. The parameters of the battery are identified using a feed-forward neural network convex optimization algorithm.
An Efficient Algorithm to Calculate the Minkowski Sum of Convex 3D Polyhedra
Bekker, Henk; Roerdink, Jos B.T.M.
2001-01-01
A new method is presented to calculate the Minkowski sum of two convex polyhedra A and B in 3D. The polyhedra are represented as graphs (their slope diagrams), and these graphs are given edge attributes. From these attributed graphs the attributed graph of the Minkowski sum is constructed. This graph is then transformed into the Minkowski sum of A and B. The running
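The slope-diagram approach has a well-known 2D analogue: the Minkowski sum of two convex polygons is obtained by merging their edge vectors in angular order. A sketch of that standard 2D algorithm (not the paper's 3D construction) for CCW-oriented polygons:

```python
def minkowski_sum(P, Q):
    """Minkowski sum of convex polygons with CCW vertex order; the 2D
    analogue of the slope-diagram merge used for polyhedra in 3D."""
    def reorder(V):  # rotate so the first vertex is lowest, then leftmost
        i = min(range(len(V)), key=lambda k: (V[k][1], V[k][0]))
        return V[i:] + V[:i]
    P, Q = reorder(P), reorder(Q)
    P, Q = P + P[:2], Q + Q[:2]          # wrap-around sentinels
    out, i, j = [], 0, 0
    while i < len(P) - 2 or j < len(Q) - 2:
        out.append((P[i][0] + Q[j][0], P[i][1] + Q[j][1]))
        # Compare current edge directions via the cross product and
        # advance whichever polygon has the smaller-angle edge.
        cross = ((P[i+1][0] - P[i][0]) * (Q[j+1][1] - Q[j][1])
                 - (P[i+1][1] - P[i][1]) * (Q[j+1][0] - Q[j][0]))
        if cross >= 0 and i < len(P) - 2:
            i += 1
        if cross <= 0 and j < len(Q) - 2:
            j += 1
    return out

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(minkowski_sum(square, square))  # [(0, 0), (2, 0), (2, 2), (0, 2)]
```

As in the 3D method, each output edge comes from one input edge (or a merged pair of parallel edges), so the sum is built in linear time from the sorted edge sequences.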
The Lp-curvature images of convex bodies and Lp ...
Indian Academy of Sciences (India)
Associated with the Lp-curvature image defined by Lutwak, some inequalities for extended mixed Lp-affine surface areas of convex bodies and the support functions of Lp-projection bodies are established. As a natural extension of a result due to Lutwak, an Lp-type affine isoperimetric inequality, whose special cases are ...
On the Fermat-Lagrange principle for mixed smooth convex extremal problems
International Nuclear Information System (INIS)
Brinkhuis, Ya
2001-01-01
A simple geometric condition that can be attached to an extremal problem of a fairly general form included in a family of problems is indicated. This is used to demonstrate that the task of formulating a uniform condition for smooth convex problems can be satisfactorily accomplished. On the other hand, the necessity of this new condition of optimality is proved under certain technical assumptions
X-ray streak crystal spectography
International Nuclear Information System (INIS)
Kauffman, R.L.; Brown, T.; Medecki, H.
1983-01-01
We have built an x-ray streaked crystal spectrograph for making time-resolved x-ray spectral measurements. This instrument can access Bragg angles from 11° to 38° and x-ray spectra from 200 eV to greater than 10 keV. We have demonstrated resolving powers E/δE > 200 at 1 keV and time resolution less than 20 psec. A description of the instrument and an example of the data are given
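Bragg's law ties the quoted angle range to an energy range for a given crystal. A sketch assuming first-order diffraction from a LiF(200) crystal (d = 2.014 Å is a nominal handbook value; the abstract does not state which crystal covers which band, and softer energies require larger-spacing crystals):

```python
import math

# Bragg's law: n*lambda = 2*d*sin(theta). In first order the diffracted
# photon energy is E[keV] = 12.398 / (2 * d[Angstrom] * sin(theta)).
d = 2.014  # Angstrom, nominal LiF(200) 2d/2 spacing -- an assumption

def bragg_energy_keV(theta_deg, d_angstrom=d):
    return 12.398 / (2.0 * d_angstrom * math.sin(math.radians(theta_deg)))

# The quoted 11-38 degree Bragg-angle range maps, for LiF(200), to roughly:
print(round(bragg_energy_keV(38), 1), round(bragg_energy_keV(11), 1))  # 5.0 16.1
```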
A new corrective technique for adolescent idiopathic scoliosis (Ucar's convex rod rotation)
Directory of Open Access Journals (Sweden)
Bekir Yavuz Ucar
2014-01-01
Full Text Available Study Design: Prospective single-center study. Objective: To analyze the efficacy and safety of a new technique of global vertebral correction with convex rod rotation performed on patients with adolescent idiopathic scoliosis. Summary of Background Data: The surgical goal is to obtain optimal curve correction in scoliosis surgery. There are various correction techniques. This report describes a new technique of global vertebral correction with convex rod rotation. Materials and Methods: A total of 12 consecutive patients with Lenke type I adolescent idiopathic scoliosis, managed by the convex rod rotation technique between 2012 and 2013 and having more than 1 year of follow-up, were included. Mean age at the time of operation was 14.5 years (range = 13-17 years). The hospital charts were reviewed for demographic data. Measurements of curve magnitude and balance were made on 36-inch standing anteroposterior and lateral radiographs taken before surgery and at the most recent follow-up to assess deformity correction, spinal balance, and complications related to the instrumentation. Results: A preoperative coronal plane major curve of 62° (range = 50°-72°) with flexibility of less than 30% was corrected to 11.5° (range = 10°-14°), showing an 81% scoliosis correction at the final follow-up. Coronal imbalance was improved by 72% at the most recent follow-up assessment. No complications were found. Conclusion: The new technique of global vertebral correction with Ucar's convex rod rotation is an effective technique. This method is a vertebral rotation procedure from the convex side, and it allows screws to be placed easily on the concave side.
E parallel B energy-mass spectrograph for measurement of ions and neutral atoms
International Nuclear Information System (INIS)
Funsten, H.O.; McComas, D.J.; Scime, E.E.
1997-01-01
Real-time measurement of plasma composition and energy is an important diagnostic in fusion experiments. The Thomson parabola spectrograph described here utilizes an electric field parallel to a magnetic field (E parallel B) and a two-dimensional imaging detector to uniquely identify the energy-per-charge and mass-per-charge distributions of plasma ions. An ultrathin foil can be inserted in front of the E parallel B filter to convert neutral atoms to ions, which are subsequently analyzed using the E parallel B filter. Since helium exiting an ultrathin foil does not form a negative ion and hydrogen isotopes do, this spectrograph allows unique identification of tritium ions and neutrals even in the presence of a large background of ³He. copyright 1997 American Institute of Physics
Using an integral-field unit spectrograph to study radical species in cometary coma
Lewis, Benjamin; Pierce, Donna M.; Vaughan, Charles M.; Cochran, Anita
2015-01-01
We have observed several comets using an integral-field unit spectrograph (the George and Cynthia Mitchell Spectrograph) on the 2.7m Harlan J. Smith telescope at McDonald Observatory. Full-coma spectroscopic images were obtained for various radical species (C2, C3, CN, NH2). Various coma enhancements were used to identify and characterize coma morphological features. The azimuthal average profiles and the Haser model were used to determine production rates and possible parent molecules. Here, we present the work completed to date, and we compare our results to other comet taxonomic surveys. This work was funded by the National Science Foundation Graduate K-12 (GK-12) STEM Fellows program (Award No. DGE-0947419), NASA's Planetary Atmospheres program (Award No. NNX14AH18G), and the Fund for Astrophysical Research, Inc.
Rocket studies of solar corona and transition region. [X-Ray spectrometer/spectrograph telescope
Acton, L. W.; Bruner, E. C., Jr.; Brown, W. A.; Nobles, R. A.
1979-01-01
The XSST (X-Ray Spectrometer/Spectrograph Telescope) rocket payload launched by a Nike-boosted Black Brant was designed to provide high spectral resolution coronal soft X-ray line information on a spectrographic plate, as well as time-resolved photoelectric records of pre-selected lines and spectral regions. These spectral data are obtained from a 1 × 10 arc-second solar region defined by the paraboloidal telescope of the XSST. The transition region camera provided full disc images in selected spectral intervals originating in lower temperature zones than the emitting regions accessible to the XSST. A H-alpha camera system allowed referencing the measurements to the chromospheric temperatures and altitudes. Payload flight and recovery information is provided along with X-ray photoelectric and UV flight data, transition camera results and a summary of the anomalies encountered. Instrument mechanical stability and spectrometer pointing direction are also examined.
Energy Technology Data Exchange (ETDEWEB)
Capdevila, C; Roca, M
1966-07-01
A spectrographic method was developed to determine 23 elements in a wide range of concentrations; the method can be applied to metallic or refractory samples. Prior fusion with lithium tetraborate and germanium oxide is performed in order to avoid the influence of matrix composition and crystalline structure. Germanium oxide is also employed as the internal standard. The resulting beads are mixed with graphite powder (1:1) and excited in a 10-ampere direct-current arc. (Author) 12 refs.
The influence of calcium, magnesium, and sodium on the spectrographic analysis of natural waters
International Nuclear Information System (INIS)
Diaz Guerra, J. P.; Capdevilla, C.
1969-01-01
The influences of 1000 μg/ml of calcium and sodium and 300 μg/ml of magnesium on the spectrographic determination of Al, Ba, Cr, Fe, Li, Mn, Ni, Pb, Sr and Ti, minor constituents in natural waters, have been studied. In order to eliminate these influences, the elements Ga, In, La, Tl and Zn, as well as a mixture containing 30% Tl-70% In, have been tested as spectrochemical buffers. (Author) 7 refs
Spectrographic observations of solar microwave bursts in the 5.3-7.4 GHz range
International Nuclear Information System (INIS)
Kaverin, N.S.; Korshunov, A.I.; Shushunov, V.V.; Aurass, H.; Detlefs, H.; Hartmann, H.; Krueger, A.; Kurths, J.
1983-01-01
The first results of the Gorky-type microwave spectrograph at the Tremsdorf solar radio astronomy observatory are presented, obtained after reconstruction of the instrument to achieve higher time resolution in spectral observations. Two 5.3-7.4 GHz microwave burst spectral diagrams with 20 s time resolution are shown. Broad-band spectral structures of the microwave burst development have been observed. An explanation of a 'pseudo-drift' phenomenon due to individual peaks is given. (D.Gy.)
First observations from a CCD all-sky spectrograph at Barentsburg (Spitsbergen)
Directory of Open Access Journals (Sweden)
S. A. Chernouss
2008-05-01
Full Text Available A digital CCD all-sky spectrograph was made by the Polar Geophysical Institute (PGI) to support IPY activity in auroral research. The device was tested at the Barentsburg observatory of PGI during the winter season of 2005–2006. The spectrograph is based on a cooled CCD and a transmission grating. The main features of this spectrograph are: a wide field of view (~180°), a wide spectral range (380–740 nm), a spectral resolution of 0.6 nm, and a background level of about 100 R at 1-min exposure time. Several thousand spectra of nightglow and aurora were recorded during the observation season. It was possible to register both strong and weak auroral emissions. Spectra of aurora, including nitrogen and oxygen molecular and atomic emissions, as well as OH emissions of the nightglow, are shown. A comparison has been conducted of auroral spectra obtained by the film all-sky spectral camera C-180-S at Spitsbergen during the IGY with spectra obtained at Barentsburg during the last winter season. The relationship between the red (630.0 nm) and green (557.7 nm) auroral emissions shows that the green emission is dominant near the minimum of solar cycle activity (2005–2006). The opposite situation was observed during 1958–1959, at a maximum of solar cycle activity.
Opto-mechanical design of an image slicer for the GRIS spectrograph at GREGOR
Vega Reyes, N.; Esteves, M. A.; Sánchez-Capuchino, J.; Salaun, Y.; López, R. L.; Gracia, F.; Estrada Herrera, P.; Grivel, C.; Vaz Cedillo, J. J.; Collados, M.
2016-07-01
An image slicer has been proposed for the Integral Field Spectrograph [1] of the 4-m European Solar Telescope (EST) [2]. The image slicer for EST is called MuSICa (Multi-Slit Image slicer based on collimator-Camera) [3] and is a telecentric system with diffraction-limited optical quality, offering the possibility to obtain high-resolution integral field solar spectroscopy or spectro-polarimetry by coupling a polarimeter after the generated slit (or slits). Considering the technical complexity of the proposed Integral Field Unit (IFU), a prototype has been designed for the GRIS spectrograph at the GREGOR telescope at Teide Observatory (Tenerife), composed of the optical elements of the image slicer itself, a scanning system (to cover a larger field of view with sequential adjacent measurements) and an appropriate re-imaging system. All these subsystems are mounted on a bench specially designed to facilitate their alignment, integration and verification, and their easy installation in front of the spectrograph. This communication describes the opto-mechanical solution adopted to upgrade GRIS while ensuring repeatability between the observational modes, IFU and long-slit. Results from several tests which have been performed to validate the opto-mechanical prototypes are also presented.
Directory of Open Access Journals (Sweden)
Hassani Kamran
2011-05-01
Full Text Available Abstract Background: Although cardiac auscultation remains important to detect abnormal sounds and murmurs indicative of cardiac pathology, the application of electronic methods remains seldom used in everyday clinical practice. In this report we provide preliminary data showing how the phonocardiogram can be analyzed using color spectrographic techniques and discuss how such information may be of future value for noninvasive cardiac monitoring. Methods: We digitally recorded the phonocardiogram using a high-speed USB interface and the program GoldWave (http://www.goldwave.com) in 55 infants and adults with cardiac structural disease as well as from normal individuals and individuals with innocent murmurs. Color spectrographic analysis of the signal was performed using Spectrogram (Version 16) as well as custom MATLAB code. Results: Our preliminary data are presented as a series of seven cases. Conclusions: We expect the application of spectrographic techniques to phonocardiography to grow substantially as ongoing research demonstrates its utility in various clinical settings. Our evaluation of a simple, low-cost phonocardiographic recording and analysis system to assist in determining the characteristic features of heart murmurs shows promise in helping distinguish innocent systolic murmurs from pathological murmurs in children and is expected to be useful in other clinical settings as well.
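The kind of time-frequency display the report describes can be sketched with a minimal numpy-only spectrogram of a synthetic two-tone signal (a stand-in for a phonocardiogram; the sampling rate, tone frequencies, and window sizes are all invented for illustration):

```python
import numpy as np

# Synthetic "phonocardiogram": a low tone followed by a higher tone.
fs = 2000
t = np.arange(fs) / fs                        # 1 second of samples
sig = np.where(t < 0.5,
               np.sin(2 * np.pi * 100 * t),   # low-frequency sound
               np.sin(2 * np.pi * 300 * t))   # higher-frequency "murmur-like" sound

# Short-time Fourier transform: windowed frames -> magnitude spectra.
win, hop = 256, 128
frames = [sig[s:s + win] * np.hanning(win)
          for s in range(0, len(sig) - win, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1))    # time x frequency magnitudes

# Dominant frequency per frame -- the ridge a color spectrogram displays.
peak_hz = spec.argmax(axis=1) * fs / win
print(peak_hz[0], peak_hz[-1])  # ~100 Hz in early frames, ~300 Hz in late frames
```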
Development of micro-mirror slicer integral field unit for space-borne solar spectrographs
Suematsu, Yoshinori; Saito, Kosuke; Koyama, Masatsugu; Enokida, Yukiya; Okura, Yukinobu; Nakayasu, Tomoyasu; Sukegawa, Takashi
2017-12-01
We present an innovative optical design for an image slicer integral field unit (IFU) and a manufacturing method that overcomes optical limitations of metallic mirrors. Our IFU consists of a micro-mirror slicer of 45 arrayed, highly narrow, flat metallic mirrors and a pseudo-pupil-mirror array of off-axis conic aspheres forming three pseudo slits of re-arranged slicer images. A prototype IFU demonstrates that the final optical quality is sufficiently high for a visible-light spectrograph. Each slicer micro-mirror is 1.58 mm long and 30 μm wide, with surface roughness ≤1 nm rms and edge sharpness ≤0.1 μm. This IFU is compact and can be implemented in a multi-slit spectrograph without any moving mechanism or fore optics, in which one slit is real and the others are pseudo slits from the IFU. The IFU mirrors were deposited with a space-qualified, protected silver coating for high reflectivity in the visible and near-IR wavelength regions. These properties are well suited to a space-borne spectrograph such as the future Japanese solar space mission SOLAR-C. We present the optical design, performance of the prototype IFU, and space qualification tests of the silver coating.
Mass production of volume phase holographic gratings for the VIRUS spectrograph array
Chonis, Taylor S.; Frantz, Amy; Hill, Gary J.; Clemens, J. Christopher; Lee, Hanshin; Tuttle, Sarah E.; Adams, Joshua J.; Marshall, J. L.; DePoy, D. L.; Prochaska, Travis
2014-07-01
The Visible Integral-field Replicable Unit Spectrograph (VIRUS) is a baseline array of 150 copies of a simple, fiber-fed integral field spectrograph that will be deployed on the Hobby-Eberly Telescope (HET). VIRUS is the first optical astronomical instrument to be replicated on an industrial scale, and represents a relatively inexpensive solution for carrying out large-area spectroscopic surveys, such as the HET Dark Energy Experiment (HETDEX). Each spectrograph contains a volume phase holographic (VPH) grating with a 138 mm diameter clear aperture as its dispersing element, which the instrument utilizes in first order. A suite of 350 VPH gratings has been mass produced for VIRUS. Here, we present the design of the VIRUS VPH gratings and a discussion of their mass production. We additionally present the design and functionality of a custom apparatus that has been used to rapidly test the first-order diffraction efficiency of the gratings for various discrete wavelengths within the VIRUS spectral range. This device has been used to perform both in-situ tests to monitor the effects of adjustments to the production prescription as well as to carry out the final acceptance tests of the gratings' diffraction efficiency. Finally, we present the as-built performance results for the entire suite of VPH gratings.
Laboratory Testing and Performance Verification of the CHARIS Integral Field Spectrograph
Groff, Tyler D.; Chilcote, Jeffrey; Kasdin, N. Jeremy; Galvin, Michael; Loomis, Craig; Carr, Michael A.; Brandt, Timothy; Knapp, Gillian; Limbach, Mary Anne; Guyon, Olivier;
2016-01-01
The Coronagraphic High Angular Resolution Imaging Spectrograph (CHARIS) is an integral field spectrograph (IFS) that has been built for the Subaru telescope. CHARIS has two imaging modes: the high-resolution mode is R82, R69, and R82 in J, H, and K bands respectively, while the low-resolution discovery mode uses a second low-resolution prism with R19 spanning 1.15-2.37 microns (J+H+K bands). The discovery mode is meant to augment the low inner working angle of the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) system, which feeds CHARIS a coronagraphic image. The goal is to detect and characterize brown dwarfs and hot Jovian planets down to contrasts five orders of magnitude dimmer than their parent star at an inner working angle as low as 80 milliarcseconds. CHARIS constrains spectral crosstalk through several key aspects of the optical design. Additionally, the repeatability of alignment of certain optical components is critical to the calibrations required for the data pipeline. Specifically, the relative alignment of the lenslet array, prism, and detector must be highly stable and repeatable between imaging modes. We report on the measured repeatability and stability of these mechanisms, measurements of spectral crosstalk in the instrument, and the propagation of these errors through the data pipeline. Another key design feature of CHARIS is the prism, which pairs barium fluoride with Ohara L-BBH2 high-index glass. The dispersion of the prism is significantly more uniform than other glass choices, and the CHARIS prisms represent the first use of L-BBH2 as the high-index material in a NIR astronomical instrument. This material choice was key to the utility of the discovery mode, so significant effort was put into cryogenic characterization of the material. The final performance of the prism assemblies in their operating environment is described in detail. The spectrograph is going through final alignment and cryogenic cycling, and is being …
DEFF Research Database (Denmark)
Christensen, Claus H.; Schmidt, I.; Carlsson, A.
2005-01-01
A major factor governing the performance of catalytically active particles supported on a zeolite carrier is the degree of dispersion. It is shown that the introduction of noncrystallographic mesopores into zeolite single crystals (silicalite-1, ZSM-5) may increase the degree of particle dispersion. As representative examples, a metal (Pt), an alloy (PtSn), and a metal carbide (beta-Mo2C) were supported on conventional and mesoporous zeolite carriers, respectively, and the degree of particle dispersion was compared by TEM imaging. On conventional zeolites, the supported material aggregated on the outer surface …
Energy Technology Data Exchange (ETDEWEB)
Land, T A; Dylla-Spears, R; Thorsness, C B
2006-08-29
Large potassium dihydrogen phosphate (KDP) crystals are grown in large crystallizers to provide raw material for the manufacture of optical components for large laser systems. It is a challenge to grow crystals with sufficient mass and suitable geometric properties to allow large optical plates to be cut from them. In addition, KDP has long been the canonical solution crystal for the study of growth processes. To assist in the production of the crystals and the understanding of crystal growth phenomena, the growth habits of large KDP crystals have been analyzed, small-scale kinetic experiments have been performed, mass transfer rates in model systems have been measured, and computational fluid mechanics tools have been used to develop an engineering model of the crystal growth process. The model has been tested on its ability to simulate the growth of nine KDP boules that each weighed more than 200 kg.
Indian Academy of Sciences (India)
2018-05-18
Abstract: 4-Nitrobenzoic acid (4-NBA) single crystals were studied for their linear and nonlinear optical ... studies on the proper growth, linear and nonlinear optical ... between the optic axes and optic sign of the biaxial crystal.
Schomaker, Verner; Lingafelter, E. C.
1985-01-01
Discusses characteristics of crystal systems, comparing (in table format) crystal systems with lattice types, number of restrictions, nature of the restrictions, and other lattices that can accidentally show the same metrical symmetry. (JN)
International Nuclear Information System (INIS)
Vu Ngoc Phat; Jong Yeoul Park
1995-10-01
The paper studies a class of set-valued operators with emphasis on properties of their adjoints and the existence of eigenvalues and eigenvectors of infinite-dimensional convex closed set-valued operators. Sufficient conditions for the existence of eigenvalues and eigenvectors of set-valued convex closed operators are derived. These conditions specify possible features of control problems. The results are applied to some constrained control problems of infinite-dimensional systems described by discrete-time inclusions whose right-hand sides are convex closed set-valued functions. (author). 8 refs
Craft, David
2010-10-01
A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets. Copyright © 2009 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
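In two dimensions, the quality of such a sparse representation can be sketched as the worst-case distance from densely sampled Pareto points to the piecewise-linear surface spanned by convex combinations of the chosen anchor points. This is a simplified stand-in for the paper's evaluation method, with hypothetical data:

```python
def point_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy
    t = 0.0 if denom == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / denom))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def representation_error(samples, anchors):
    """Worst-case distance from sampled Pareto points to the piecewise-linear
    surface through consecutive anchor points (2-objective case)."""
    segs = list(zip(anchors, anchors[1:]))
    return max(min(point_segment_dist(p, a, b) for a, b in segs) for p in samples)

# A point on the segment between two anchors is represented exactly:
print(representation_error([(0.5, 0.5)], [(0.0, 1.0), (1.0, 0.0)]))  # 0.0
```

A sparsification strategy would then add anchors until this error drops below a tolerance, mirroring the paper's observation that the required anchor count grows slowly with the number of objectives.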
A Genealogy of Convex Solids Via Local and Global Bifurcations of Gradient Vector Fields
Domokos, Gábor; Holmes, Philip; Lángi, Zsolt
2016-12-01
Three-dimensional convex bodies can be classified in terms of the number and stability types of critical points on which they can balance at rest on a horizontal plane. For typical bodies, these are non-degenerate maxima, minima, and saddle points, the numbers of which provide a primary classification. Secondary and tertiary classifications use graphs to describe orbits connecting these critical points in the gradient vector field associated with each body. In previous work, it was shown that these classifications are complete in that no class is empty. Here, we construct 1- and 2-parameter families of convex bodies connecting members of adjacent primary and secondary classes and show that transitions between them can be realized by codimension 1 saddle-node and saddle-saddle (heteroclinic) bifurcations in the gradient vector fields. Our results indicate that all combinatorially possible transitions can be realized in physical shape evolution processes, e.g., by abrasion of sedimentary particles.
International Nuclear Information System (INIS)
Alabau-Boussouira, Fatiha
2005-01-01
This work is concerned with the stabilization of hyperbolic systems by a nonlinear feedback which can be localized on a part of the boundary or locally distributed. We show that general weighted integral inequalities together with convexity arguments allow us to produce a general semi-explicit formula which leads to decay rates of the energy in terms of the behavior of the nonlinear feedback close to the origin. This formula unifies, for instance, the cases where the feedback has polynomial growth at the origin with the cases where it goes exponentially fast to zero at the origin. We also give three other significant examples of nonpolynomial growth at the origin, and we prove the optimality of our results for the one-dimensional wave equation with nonlinear boundary dissipation. The key to obtaining our general energy-decay formula is the relation between convexity properties of an explicit function connected to the feedback and the dissipation of energy.
Measurement of laser welding pool geometry using a closed convex active contour model
International Nuclear Information System (INIS)
Zheng, Rui; Zhang, Pu; Duan, Aiqing; Xiao, Peng
2014-01-01
The purpose of this study was to develop a computer vision method to measure geometric parameters of the weld pool in a deep-penetration CO2 laser welding system. Accurate measurement requires removing a large amount of interference caused by spatter, arc light and plasma in order to extract the true weld pool contour. This paper introduces a closed convex active contour (CCAC) model derived from the active contour (snake) model, a more robust high-level vision method than traditional low-level vision methods. We improve on the active contour by incorporating the prior knowledge that the weld pool contour is almost a closed convex curve. An effective thresholding method and an improved greedy algorithm are also given to complement the CCAC model. With these, the interference can be effectively removed and the weld pool contour acquired and measured accurately and relatively quickly. (paper)
Parameter sensitivity study of a Field II multilayer transducer model on a convex transducer
DEFF Research Database (Denmark)
Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten
2009-01-01
A multilayer transducer model for predicting a transducer impulse response was developed in earlier work and combined with the Field II software. This development was tested on current, voltage, and intensity measurements on piezoceramic discs (Bæk et al., IUS 2008) and a convex 128-element ultrasound imaging transducer (Bæk et al., ICU 2009). The model benefits from its 1D simplicity and has been shown to give an amplitude error of around 1.7-2 dB. However, any prediction of amplitude, phase, and attenuation of pulses relies on the accuracy of manufacturer-supplied material characteristics, which may … a quantitatively calibrated model for a complete ultrasound system, which includes a sensitivity study as presented here. Statement of Contribution/Methods: The study alters 35 different model parameters which describe a 128-element convex transducer from BK Medical ApS. The changes are within ±20% of the values …
Convex relaxation of Optimal Power Flow in Distribution Feeders with embedded solar power
DEFF Research Database (Denmark)
Hermann, Alexander Niels August; Wu, Qiuwei; Huang, Shaojun
2016-01-01
There is an increasing interest in using Distributed Energy Resources (DER) directly coupled to end-user distribution feeders. This poses an array of challenges because most of today's distribution feeders are designed for unidirectional power flow. Therefore, when installing DERs such as solar panels with uncontrolled inverters, the upper limit of installable capacity is quickly reached in many of today's distribution feeders. This problem can often be mitigated by optimally controlling the voltage angles of inverters. However, the optimal power flow problem in its standard form is a large-scale non-convex optimization problem, and thus cannot be solved precisely; it is also computationally heavy and intractable for large systems. This paper examines the use of a convex relaxation using semidefinite programming to optimally control solar power inverters in a distribution grid in order …
Efficiency measurement with a non-convex free disposal hull technology
DEFF Research Database (Denmark)
Fukuyama, Hirofumi; Hougaard, Jens Leth; Sekitani, Kazuyuki
2016-01-01
We investigate the basic monotonicity properties of least-distance (in)efficiency measures on the class of non-convex FDH (free disposable hull) technologies. We show that any known FDH least-distance measure violates strong monotonicity over the strongly (Pareto-Koopmans) efficient frontier. Taking this result into account, we develop a new class of FDH least-distance measures that satisfy strong monotonicity and show that the developed (in)efficiency measurement framework has a natural profit interpretation.
Three-Dimensional Synthetic Aperture Focusing Using a Rocking Convex Array Transducer
DEFF Research Database (Denmark)
Andresen, Henrik; Nikolov, Svetoslav; Pedersen, Mads Møller
2010-01-01
Volumetric imaging can be performed using 1-D arrays in combination with mechanical motion. Outside the elevation focus of the array, the resolution and contrast quickly degrade compared with the lateral plane, because of the fixed transducer focus. This paper shows the feasibility of using synthetic aperture focusing for enhancing the elevation focus for a convex rocking array. The method uses a virtual source (VS) for defocused multi-element transmit, and another VS in the elevation focus point. This allows a direct time-of-flight to be calculated for a given 3-D point. To avoid artifacts and increase SNR at the elevation VS, a plane-wave VS approach has been implemented. Simulations and measurements using an experimental scanner with a convex rocking array show an average improvement in resolution of 26% and 33%, respectively. This improvement is also seen in in vivo measurements …
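The direct time-of-flight mentioned above is the core of virtual-source synthetic aperture focusing: the delay for a 3-D point is the propagation time from the transmit virtual source to the point and back to the receive virtual source. A minimal sketch, with an assumed speed of sound and made-up geometry (not the paper's transducer):

```python
import math

def tof_virtual_source(vs_tx, vs_rx, point, c=1540.0):
    """Two-way time of flight (s) from transmit virtual source vs_tx to a
    3-D point and back to receive virtual source vs_rx; coordinates in m,
    c is an assumed speed of sound in m/s."""
    return (math.dist(vs_tx, point) + math.dist(point, vs_rx)) / c

# Hypothetical example: both virtual sources at the origin, point 77 mm deep.
print(tof_virtual_source((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.077)))
```

In a beamformer this delay selects the sample contributed by each emission to a given voxel; sign conventions for points between the transducer and the virtual source are omitted here for brevity.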
Equilibrium prices supported by dual price functions in markets with non-convexities
International Nuclear Information System (INIS)
Bjoerndal, Mette; Joernsten, Kurt
2004-06-01
The issue of finding market clearing prices in markets with non-convexities has seen renewed interest due to the deregulation of the electricity sector. In the day-ahead electricity market, equilibrium prices are calculated based on bids from generators and consumers. In most of the existing markets, several generation technologies are present, some of which have considerable non-convexities, such as capacity limitations and large start-up costs. In this paper we present equilibrium prices composed of a commodity price and an uplift charge. The prices are based on the generation of a separating valid inequality that supports the optimal resource allocation. When the sub-problem obtained by fixing the integer variables to their optimal values possesses the integrality property, the generated prices are also supported by non-linear price functions that form the basis for integer programming duality. (Author)
Surface tension-induced high aspect-ratio PDMS micropillars with concave and convex lens tips
Li, Huawei
2013-04-01
This paper reports a novel method for the fabrication of 3-dimensional (3D) Polydimethylsiloxane (PDMS) micropillars with concave and convex lens tips in a one-step molding process, using a CO2 laser-machined Poly(methyl methacrylate) (PMMA) mold with through holes. The PDMS micropillars are 4 mm high and have an aspect ratio of 251. The micropillars are formed by capillary force drawing up PDMS into the through hole mold. The concave and convex lens tips of the PDMS cylindrical micropillars are induced by surface tension and are controllable by changing the surface wetting properties of the through holes in the PMMA mold. This technique eliminates the requirements of expensive and complicated facilities to prepare a 3D mold, and it provides a simple and rapid method to fabricate 3D PDMS micropillars with controllable dimensions and tip shapes. © 2013 IEEE.
The steady-state of the (Normalized) LMS is schur convex
Al-Hujaili, Khaled A.
2016-06-24
In this work, we demonstrate how the theory of majorization and schur-convexity can be used to assess the impact of input-spread on the Mean Squares Error (MSE) performance of adaptive filters. First, we show that the concept of majorization can be utilized to measure the spread in input-regressors and subsequently order the input-regressors according to their spread. Second, we prove that the MSE of the Least Mean Squares Error (LMS) and Normalized LMS (NLMS) algorithms are schur-convex, that is, the MSE of the LMS and the NLMS algorithms preserve the majorization order of the inputs which provide an analytical justification to why and how much the MSE performance of the LMS and the NLMS algorithms deteriorate as the spread in input increases. © 2016 IEEE.
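The majorization order and a Schur-convex function can be sketched concretely: x majorizes y when both have equal sums and the sorted partial sums of x dominate those of y, and any Schur-convex function (the sum of squares is a standard example, used here as a stand-in for the MSE expressions in the paper) preserves that order:

```python
def majorizes(x, y, tol=1e-12):
    """True if x majorizes y: equal totals, and partial sums of the
    decreasingly sorted x dominate those of y."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    if abs(sum(xs) - sum(ys)) > tol:
        return False
    cx = cy = 0.0
    for a, b in zip(xs, ys):
        cx += a; cy += b
        if cx < cy - tol:
            return False
    return True

def spread_metric(x):
    # Sum of squares is Schur-convex: it can only grow along the
    # majorization order, i.e. as the "spread" of x increases.
    return sum(v * v for v in x)

# (3,1,0) is more spread out than (2,1,1): it majorizes it, and the
# Schur-convex metric is correspondingly larger.
print(majorizes([3, 1, 0], [2, 1, 1]), spread_metric([3, 1, 0]), spread_metric([2, 1, 1]))
```

This mirrors the paper's argument at a toy scale: once the input-regressor spread is ordered by majorization, any Schur-convex performance measure deteriorates monotonically with spread.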
Surface tension-induced high aspect-ratio PDMS micropillars with concave and convex lens tips
Li, Huawei; Fan, Yiqiang; Yi, Ying; Foulds, Ian G.
2013-01-01
This paper reports a novel method for the fabrication of 3-dimensional (3D) Polydimethylsiloxane (PDMS) micropillars with concave and convex lens tips in a one-step molding process, using a CO2 laser-machined Poly(methyl methacrylate) (PMMA) mold with through holes. The PDMS micropillars are 4 mm high and have an aspect ratio of 251. The micropillars are formed by capillary force drawing up PDMS into the through hole mold. The concave and convex lens tips of the PDMS cylindrical micropillars are induced by surface tension and are controllable by changing the surface wetting properties of the through holes in the PMMA mold. This technique eliminates the requirements of expensive and complicated facilities to prepare a 3D mold, and it provides a simple and rapid method to fabricate 3D PDMS micropillars with controllable dimensions and tip shapes. © 2013 IEEE.
Tensor completion and low-n-rank tensor recovery via convex optimization
International Nuclear Information System (INIS)
Gandy, Silvia; Yamada, Isao; Recht, Benjamin
2011-01-01
In this paper we consider sparsity on a tensor level, as given by the n-rank of a tensor. In an important sparse-vector approximation problem (compressed sensing) and the low-rank matrix recovery problem, using a convex relaxation technique proved to be a valuable solution strategy. Here, we will adapt these techniques to the tensor setting. We use the n-rank of a tensor as a sparsity measure and consider the low-n-rank tensor recovery problem, i.e. the problem of finding the tensor of the lowest n-rank that fulfills some linear constraints. We introduce a tractable convex relaxation of the n-rank and propose efficient algorithms to solve the low-n-rank tensor recovery problem numerically. The algorithms are based on the Douglas–Rachford splitting technique and its dual variant, the alternating direction method of multipliers
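The building block inside Douglas-Rachford and ADMM iterations for such convex relaxations is a proximal shrinkage step; for the nuclear-norm surrogate it is applied to singular values, and in the vector case it reduces to soft thresholding. A minimal sketch of that operator (only the prox step, not the full recovery algorithm):

```python
def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1: shrink each entry toward zero
    by tau, clipping at zero. Applied to singular values, this is the
    singular-value thresholding step of nuclear-norm minimization."""
    return [max(abs(x) - tau, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]

# Entries smaller than tau are zeroed, promoting sparsity / low rank:
print(soft_threshold([3.0, -0.5, 1.0], 1.0))
```

In a full low-n-rank solver this step alternates with a projection enforcing the linear measurement constraints, once per mode unfolding of the tensor.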
Convex lattice polygons of fixed area with perimeter-dependent weights.
Rajesh, R; Dhar, Deepak
2005-01-01
We study fully convex polygons with a given area and variable perimeter length on square and hexagonal lattices. We attach a weight $t^m$ to a convex polygon of perimeter $m$ and show that the sum of weights of all polygons with a fixed area $s$ varies as $s^{-\theta_{\mathrm{conv}}} e^{K(t)\sqrt{s}}$ for large $s$ and $t$ less than a critical threshold $t_c$, where $K(t)$ is a $t$-dependent constant and $\theta_{\mathrm{conv}}$ is a critical exponent which does not change with $t$. Using heuristic arguments, we find that $\theta_{\mathrm{conv}}$ is $1/4$ for the square lattice, but $-1/4$ for the hexagonal lattice. The reason for this unexpected nonuniversality of $\theta_{\mathrm{conv}}$ is traced to the existence of sharp corners in the asymptotic shape of these polygons.
Reduction of shock induced noise in imperfectly expanded supersonic jets using convex optimization
Adhikari, Sam
2007-11-01
Imperfectly expanded jets generate screech noise. The imbalance between the back pressure and the exit pressure of an imperfectly expanded jet produces shock cells and expansion or compression waves from the nozzle, and the interaction of instability waves with the shock cells generates the screech sound. The mathematical model consists of the full Navier-Stokes equations in cylindrical coordinates with large-eddy-simulation turbulence modeling. Analytical and computational analysis of the three-dimensional helical effects provides a model that relates several parameters to shock cell patterns, screech frequency, and the distribution of shock generation locations. Convex optimization techniques minimize the shock cell patterns and the instability waves. The objective functions are (convex) quadratic and the constraint functions are affine; in these quadratic programs, minimization of the quadratic functions over polyhedra provides the optimal result. Various industry-standard methods such as regression analysis, distance between polyhedra, bounding variance, Markowitz optimization, and second-order cone programming are used for the quadratic optimization.
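A convex quadratic objective over a polyhedron of the simplest kind (a box) can be solved by projected gradient descent; this toy sketch illustrates the structure of such quadratic programs, not the jet-noise model itself (Q, c, and the bounds are made-up values):

```python
def solve_box_qp(Q, c, lo, hi, iters=5000, lr=0.1):
    """Projected gradient descent for min 0.5*x'Qx + c'x subject to
    lo <= x <= hi, with Q symmetric positive definite. Each step follows
    the negative gradient Qx + c, then projects back onto the box."""
    n = len(c)
    x = [0.0] * n
    for _ in range(iters):
        g = [sum(Q[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        x = [min(hi[i], max(lo[i], x[i] - lr * g[i])) for i in range(n)]
    return x

# Unconstrained minimizer of 0.5*x'Qx + c'x here is (1, 2); the box [0,1]^2
# clips the second coordinate, so the constrained optimum is (1, 1).
print(solve_box_qp([[2.0, 0.0], [0.0, 2.0]], [-2.0, -4.0], [0.0, 0.0], [1.0, 1.0]))
```

Industrial solvers replace this loop with interior-point or active-set methods, but the geometry — descend on the quadratic, stay inside the polyhedron — is the same.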
A Total Variation Model Based on the Strictly Convex Modification for Image Denoising
Directory of Open Access Journals (Sweden)
Boying Wu
2014-01-01
We propose a strictly convex functional in which the regular term consists of the total variation term and an adaptive logarithm-based convex modification term. We prove the existence and uniqueness of the minimizer for the proposed variational problem. The existence, uniqueness, and long-time behavior of the solution of the associated evolution system are also established. Finally, we present experimental results to illustrate the effectiveness of the model in noise reduction, with a comparison to the more classical methods of traditional total variation (TV), Perona-Malik (PM), and the more recent D-α-PM method. A further distinction from the other methods is that the parameters requiring manual tuning in the proposed algorithm are reduced to essentially one.
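The flavor of such variational denoising can be sketched in 1D with a smoothed (hence strictly convex and differentiable) total-variation term minimized by gradient descent. This is a generic TV surrogate for illustration, not the paper's logarithm-based modification; the step size, smoothing eps, and weight lam are assumptions:

```python
import math

def tv_denoise_1d(y, lam=0.5, eps=1e-2, iters=500, lr=0.05):
    """Gradient descent on F(x) = 0.5*||x - y||^2 + lam * sum_i
    sqrt((x[i+1]-x[i])^2 + eps), a smooth strictly convex surrogate of
    the ROF total-variation objective for a 1D signal y."""
    x = list(y)
    n = len(x)
    for _ in range(iters):
        g = [x[i] - y[i] for i in range(n)]          # fidelity gradient
        for i in range(n - 1):                       # smoothed-TV gradient
            d = x[i + 1] - x[i]
            w = lam * d / math.sqrt(d * d + eps)
            g[i] -= w
            g[i + 1] += w
        x = [x[i] - lr * g[i] for i in range(n)]
    return x

noisy = [0.0, 1.0, 0.0, 1.0, 0.0]
print(tv_denoise_1d(noisy))  # oscillations are damped toward a flatter signal
```

Because the fidelity term anchors x to y while the TV term penalizes jumps, the minimizer trades a small data misfit for a strictly smaller total variation.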
Botelho, Fabio
2014-01-01
This book introduces the basic concepts of real and functional analysis. It presents the fundamentals of the calculus of variations, convex analysis, duality, and optimization that are necessary to develop applications to physics and engineering problems. The book includes introductory and advanced concepts in measure and integration, as well as an introduction to Sobolev spaces. The problems presented are nonlinear, with non-convex variational formulation. Notably, the primal global minima may not be attained in some situations, in which cases the solution of the dual problem corresponds to an appropriate weak cluster point of minimizing sequences for the primal one. Indeed, the dual approach more readily facilitates numerical computations for some of the selected models. While intended primarily for applied mathematicians, the text will also be of interest to engineers, physicists, and other researchers in related fields.
Directory of Open Access Journals (Sweden)
Suresh Thenozhi
2012-01-01
An important objective of health monitoring systems for tall buildings is to diagnose the state of the building and to evaluate its possible damage. In this paper, we use our prototype to evaluate our data-mining approach to fault monitoring. Offset cancellation and high-pass filtering techniques are combined to solve common problems in the numerical integration of acceleration signals in real-time applications, improving integration accuracy compared with other numerical integrators. We then introduce a novel method for support vector machine (SVM) classification, called the convex-concave hull, using the Jarvis march method to determine the concave (non-convex) hull for the inseparable points. Finally, the vertices of the convex-concave hull are used for SVM training.
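The Jarvis march (gift-wrapping) step referred to above, in its standard convex-hull form, can be sketched as follows; the paper's concave extension is not reproduced here:

```python
def jarvis_march(points):
    """Gift-wrapping convex hull: returns hull vertices in counter-clockwise
    order, starting from the lowest-leftmost point. O(n*h) time."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    cross = lambda o, a, b: (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    start = pts[0]
    p = start
    while True:
        hull.append(p)
        q = pts[0] if pts[0] != p else pts[1]
        for r in pts:
            if r == p:
                continue
            c = cross(p, q, r)
            # Take r if it is more clockwise than q, or collinear but farther.
            if c < 0 or (c == 0 and
                         (r[0] - p[0]) ** 2 + (r[1] - p[1]) ** 2 >
                         (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2):
                q = r
        p = q
        if p == start:
            break
    return hull

# The interior point (0.5, 0.5) is discarded; the square's corners remain.
print(jarvis_march([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))
```

Training an SVM only on hull vertices, as the paper proposes, is attractive precisely because the hull discards interior points that cannot be support vectors of a linearly separable problem.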
The steady-state of the (Normalized) LMS is schur convex
Al-Hujaili, Khaled A.; Al-Naffouri, Tareq Y.; Moinuddin, Muhammad
2016-01-01
In this work, we demonstrate how the theory of majorization and schur-convexity can be used to assess the impact of input-spread on the Mean Squares Error (MSE) performance of adaptive filters. First, we show that the concept of majorization can be utilized to measure the spread in input-regressors and subsequently order the input-regressors according to their spread. Second, we prove that the MSE of the Least Mean Squares Error (LMS) and Normalized LMS (NLMS) algorithms are schur-convex, that is, the MSE of the LMS and the NLMS algorithms preserve the majorization order of the inputs which provide an analytical justification to why and how much the MSE performance of the LMS and the NLMS algorithms deteriorate as the spread in input increases. © 2016 IEEE.
Neural Network in Fixed Time for Collision Detection between Two Convex Polyhedra
M. Khouil; N. Saber; M. Mestari
2014-01-01
In this paper, a different architecture of a collision detection neural network (DCNN) is developed. This network enables us to solve, with a new approach, the problem of collision detection between two convex polyhedra in fixed time (O(1) time). We used two types of neurons, linear and threshold logic, which simplified the actual implementation of all the networks proposed. The study of collision detection is divided into two sections, the coll...
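The geometric predicate underlying convex-convex collision detection can be sketched in 2D with the classical separating-axis test (two convex bodies are disjoint iff some edge normal separates their projections). This illustrates the decision problem the DCNN solves, not the network itself:

```python
def project(poly, axis):
    """Interval of the polygon's projection onto the given axis direction."""
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def convex_collide(p1, p2):
    """Separating-axis test: two convex polygons (vertex lists) intersect
    iff no edge normal of either polygon separates their projections."""
    for poly in (p1, p2):
        n = len(poly)
        for i in range(n):
            ex = poly[(i + 1) % n][0] - poly[i][0]
            ey = poly[(i + 1) % n][1] - poly[i][1]
            axis = (-ey, ex)                  # normal to this edge
            lo1, hi1 = project(p1, axis)
            lo2, hi2 = project(p2, axis)
            if hi1 < lo2 or hi2 < lo1:
                return False                  # found a separating axis
    return True

sq = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(convex_collide(sq, [(2, 2), (3, 2), (3, 3), (2, 3)]))  # disjoint squares
```

Each axis test is a fixed sequence of dot products and comparisons, which is exactly the kind of arithmetic a network of linear and threshold-logic neurons can evaluate in parallel in constant time.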
Annuity factors, duration and convexity : insights from a financial engineering perspective
Ekern, Steinar
1998-01-01
This paper applies a unified and integrative financial engineering perspective to key derived concepts in traditional fixed income analysis, with the purpose of enhancing conceptual insights and motivating computational applications. The emphasis on annuity factors and their impact on duration and convexity differs from the focus prevailing in related discussions. By decomposing the cashflow streams of a coupon bond into different, specific, and clearly defined portfolios of component bonds w...
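The fixed-income quantities the paper decomposes can be computed directly from discounted cash flows; a minimal sketch for an annual-coupon bond (generic textbook formulas, not the paper's annuity-factor decomposition):

```python
def bond_metrics(face, coupon_rate, y, n):
    """Price, Macaulay duration (periods) and convexity of a bond paying an
    annual coupon coupon_rate*face for n periods at yield y per period."""
    c = face * coupon_rate
    cfs = [(t, c + (face if t == n else 0.0)) for t in range(1, n + 1)]
    pv = [cf / (1 + y) ** t for t, cf in cfs]
    price = sum(pv)
    duration = sum(t * v for (t, _), v in zip(cfs, pv)) / price
    convexity = sum(t * (t + 1) * v for (t, _), v in zip(cfs, pv)) / (price * (1 + y) ** 2)
    return price, duration, convexity

# Zero-coupon sanity check: duration of an n-period zero equals n exactly.
print(bond_metrics(100.0, 0.0, 0.05, 3))
```

For a coupon bond the coupon stream is an annuity, so — as the paper emphasizes — price, duration and convexity all decompose into an annuity-factor part plus a zero-coupon part.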
Report on the observation of IAEA international emergency response exercise ConvEx-3(2008)
International Nuclear Information System (INIS)
Yamamoto, Kazuya; Sumiya, Akihiro
2009-02-01
The International Atomic Energy Agency (IAEA) carried out a large-scale international emergency response exercise, designated ConvEx-3(2008), in conjunction with the national exercise of Mexico in July 2008. This review report summarizes two simultaneous observations of the exercise, in Mexico and at the IAEA headquarters. Mexico has established a very solid nuclear emergency response system based on that of the US, although only two BWR nuclear power units are in operation there. The Mexican nuclear emergency response system and the emergency response activities of the Incident and Emergency Centre at IAEA headquarters provided observers with knowledge helpful for future enhancement of the Japanese nuclear emergency response system, e.g. establishment of Emergency Action Levels, implementation of long-duration exercises, and enhancement of prompt protective actions. Japan established the Act on Special Measures Concerning Nuclear Emergency Preparedness and has developed its nuclear disaster prevention system since the JCO criticality accident in Tokai-mura. The next stage is to enhance the system from the viewpoint of preventing a nuclear disaster that affects neighboring countries, or one that arises in a neighboring country. ConvEx-3(2008) suggested key issues for nuclear disaster prevention involving neighboring countries, e.g. establishment of much wider environmental monitoring and of an international assistance system for responding to a foreign nuclear disaster. The observations of the IAEA ConvEx-3(2008) exercise described in this review report were funded by MEXT (Ministry of Education, Culture, Sports, Science and Technology). (author)
On Difference of Convex Optimization to Visualize Statistical Data and Dissimilarities
DEFF Research Database (Denmark)
Carrizosa, Emilio; Guerrero, Vanesa; Morales, Dolores Romero
2016-01-01
In this talk we address the problem of visualizing in a bounded region a set of individuals, which has attached a dissimilarity measure and a statistical value. This problem, which extends the standard Multidimensional Scaling Analysis, is written as a global optimization problem whose objective is the difference of two convex functions (DC). Suitable DC decompositions allow us to use the DCA algorithm in a very efficient way. Our algorithmic approach is used to visualize two real-world datasets.
F-SVM: Combination of Feature Transformation and SVM Learning via Convex Relaxation
Wu, Xiaohe; Zuo, Wangmeng; Zhu, Yuanyuan; Lin, Liang
2015-01-01
The generalization error bound of support vector machine (SVM) depends on the ratio of radius and margin, while standard SVM only considers the maximization of the margin but ignores the minimization of the radius. Several approaches have been proposed to integrate radius and margin for joint learning of feature transformation and SVM classifier. However, most of them either require the form of the transformation matrix to be diagonal, or are non-convex and computationally expensive. In this ...
Convex Hypersurfaces and $L^p$ Estimates for Schrödinger Equations
Zheng, Quan; Yao, Xiaohua; Fan, Da
2004-01-01
This paper is concerned with Schrödinger equations whose principal operators are homogeneous elliptic. When the corresponding level hypersurface is convex, we show the $L^p$-$L^q$ estimate for the solution operator in the free case. This estimate, combined with results on fractionally integrated groups, allows us to further obtain the $L^p$ estimate of solutions for initial data belonging to a dense subset of $L^p$ in the case of integrable potentials.
Highly efficient absorption of visible and near infrared light in convex gold and nickel grooves
DEFF Research Database (Denmark)
Eriksen, René Lynge; Beermann, Jonas; Søndergaard, Thomas
The realization of nonresonant light absorption with nanostructured metal surfaces, making practical use of nanofocusing of optical energy in tapered plasmonic waveguides, is one of the most fascinating and fundamental phenomena in plasmonics [1,2]. We recently realized broadband light absorption in gold via adiabatic nanofocusing of gap surface plasmon modes in well-defined geometries of ultra-sharp convex grooves, excited by scattering off subwavelength-sized wedges [3].
Geometry intuitive, discrete, and convex : a tribute to László Fejes Tóth
Böröczky, Károly; Tóth, Gábor; Pach, János
2013-01-01
The present volume is a collection of a dozen survey articles, dedicated to the memory of the famous Hungarian geometer, László Fejes Tóth, on the 99th anniversary of his birth. Each article reviews recent progress in an important field in intuitive, discrete, and convex geometry. The mathematical work and perspectives of all editors and most contributors of this volume were deeply influenced by László Fejes Tóth.
Image restoration by the method of convex projections: part 2 applications and numerical results.
Sezan, M I; Stark, H
1982-01-01
The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
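The core of the projection-onto-convex-sets (POCS) method is alternating projection: each known constraint is a closed convex set, and cycling through the projections converges to a point in their intersection. A minimal sketch with two toy constraint sets (an affine set and the nonnegative orthant — stand-ins for the spectral and support constraints used in image restoration):

```python
def project_hyperplane(x, target=1.0):
    """Projection onto the affine set {x : sum(x) = target}."""
    shift = (target - sum(x)) / len(x)
    return [v + shift for v in x]

def project_nonneg(x):
    """Projection onto the nonnegative orthant {x : x >= 0}."""
    return [max(v, 0.0) for v in x]

def pocs(x, iters=200):
    """Alternating projections onto two closed convex sets; converges to a
    point in their intersection when it is nonempty (von Neumann/POCS)."""
    for _ in range(iters):
        x = project_nonneg(project_hyperplane(x))
    return x

# Starting far outside both sets, the iterates settle into a nonnegative
# vector whose entries sum to 1.
print(pocs([-5.0, 2.0, 0.0]))
</```

In the restoration setting the two projections become "enforce the known Fourier-domain data" and "enforce the known spatial support", and a priori information enters simply by adding more convex sets to the cycle.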
On evolving deformation microstructures in non-convex partially damaged solids
Gurses, Ercan
2011-06-01
The paper outlines a relaxation method based on a particular isotropic microstructure evolution and applies it to the model problem of rate independent, partially damaged solids. The method uses an incremental variational formulation for standard dissipative materials. In an incremental setting at finite time steps, the formulation defines a quasi-hyperelastic stress potential. The existence of this potential allows a typical incremental boundary value problem of damage mechanics to be expressed in terms of a principle of minimum incremental work. Mathematical existence theorems of minimizers then induce a definition of the material stability in terms of the sequential weak lower semicontinuity of the incremental functional. As a consequence, the incremental material stability of standard dissipative solids may be defined in terms of weak convexity notions of the stress potential. Furthermore, the variational setting opens up the possibility to analyze the development of deformation microstructures in the post-critical range of unstable inelastic materials based on energy relaxation methods. In partially damaged solids, accumulated damage may yield non-convex stress potentials which indicate instability and formation of fine-scale microstructures. These microstructures can be resolved by use of relaxation techniques associated with the construction of convex hulls. We propose a particular relaxation method for partially damaged solids and investigate it in one- and multi-dimensional settings. To this end, we introduce a new isotropic microstructure which provides a simple approximation of the multi-dimensional rank-one convex hull. The development of those isotropic microstructures is investigated for homogeneous and inhomogeneous numerical simulations. © 2011 Elsevier Ltd. All rights reserved.
A Sequential Convex Semidefinite Programming Algorithm for Multiple-Load Free Material Optimization
Czech Academy of Sciences Publication Activity Database
Stingl, M.; Kočvara, Michal; Leugering, G.
2009-01-01
Roč. 20, č. 1 (2009), s. 130-155 ISSN 1052-6234 R&D Projects: GA AV ČR IAA1075402 Grant - others:commision EU(XE) EU-FP6-30717 Institutional research plan: CEZ:AV0Z10750506 Keywords : structural optimization * material optimization * semidefinite programming * sequential convex programming Subject RIV: BA - General Mathematics Impact factor: 1.429, year: 2009
International Nuclear Information System (INIS)
Koga, T.; Kasai, Y.; Dehesa, J.S.; Angulo, J.C.
1993-01-01
The electron-pair function h(u) of a finite many-electron system is not monotonic, but the related quantity h(u)/u^α, α>0, is not only monotonically decreasing from the origin but also convex for the values α ≥ α1 and α ≥ α2, respectively, as has been recently found. Here, it is first argued that this quantity is also logarithmically convex for any α ≥ α' with α' = max{−u² d²[ln h(u)]/du²}. Then this property is used to obtain a general inequality which involves three interelectronic moments ⟨u^t⟩. Particular cases of this inequality involve relevant characteristics of the system such as the number of electrons and the total electron-electron repulsion energy. Second, the logarithmic-convexity property of h(u) as well as the accuracy of this inequality are investigated by the optimum 20-term Hylleraas-type wave functions for two-electron atoms with nuclear charge Z=1, 2, 3, 5, and 10. It is found that (i) α' is much greater than α1, and (ii) the accuracy of the inequality which involves moments of contiguous orders oscillates between 62.4% and 96.7% according to the specific He-like atom and the moments involved. Finally, the importance of the logarithmic-convexity effects on the interelectronic moments relative to those coming from other monotonicity properties of h(u)/u^α is analyzed in the same numerical Hylleraas framework.
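The three-moment inequalities that log-convexity yields can be checked numerically. The sketch below uses a hypothetical stand-in pair density h(u) = u² e^(−u) (not a Hylleraas wave function) and verifies the Cauchy-Schwarz consequence ⟨u^(t−1)⟩⟨u^(t+1)⟩ ≥ ⟨u^t⟩², the simplest inequality involving moments of contiguous orders.

```python
import numpy as np

# Hypothetical stand-in for an electron-pair density: h(u) = u^2 * exp(-u)
u = np.linspace(0.0, 60.0, 60001)
h = u**2 * np.exp(-u)
dx = u[1] - u[0]

def integral(f):
    """Simple Riemann sum; accurate here because h decays fast."""
    return f.sum() * dx

def moment(t):
    """Normalized interelectronic moment <u^t>."""
    return integral(u**t * h) / integral(h)

# Log-convexity of the moment sequence (Cauchy-Schwarz):
# <u^(t-1)> <u^(t+1)> >= <u^t>^2
checks = [moment(t - 1) * moment(t + 1) >= moment(t) ** 2 for t in (1, 2, 3)]
```

For this density the moments are (t+2)!/2, so e.g. ⟨u⟩ = 3 and ⟨u⁰⟩⟨u²⟩ = 12 ≥ ⟨u⟩² = 9.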
A Combination Theorem for Convex Hyperbolic Manifolds, with Applications to Surfaces in 3-Manifolds
Baker, Mark; Cooper, Daryl
2005-01-01
We prove the convex combination theorem for hyperbolic n-manifolds. Applications are given both in high dimensions and in 3 dimensions. One consequence is that given two geometrically finite subgroups of a discrete group of isometries of hyperbolic n-space, satisfying a natural condition on their parabolic subgroups, there are finite index subgroups which generate a subgroup that is an amalgamated free product. Constructions of infinite volume hyperbolic n-manifolds are described by gluing lo...
Multilayer Spectral Graph Clustering via Convex Layer Aggregation: Theory and Algorithms
Chen, Pin-Yu; Hero, Alfred O.
2017-01-01
Multilayer graphs are commonly used for representing different relations between entities and handling heterogeneous data processing tasks. Non-standard multilayer graph clustering methods are needed for assigning clusters to a common multilayer node set and for combining information from each layer. This paper presents a multilayer spectral graph clustering (SGC) framework that performs convex layer aggregation. Under a multilayer signal plus noise model, we provide a phase transition analys...
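Convex layer aggregation can be illustrated on a toy two-layer graph: combine the per-layer adjacency matrices with simplex weights, then cluster using the sign of the Fiedler vector. This sketch is not the authors' SGC framework or phase-transition analysis; the planted two-block graph and the uniform weights are assumptions made for illustration.

```python
import numpy as np

def aggregate(layers, w):
    """Convex layer aggregation: A(w) = sum_l w_l * A_l, w on the simplex."""
    w = np.asarray(w, float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-12
    return sum(wl * Al for wl, Al in zip(w, layers))

def fiedler_partition(A):
    """2-way spectral clustering from the unnormalized Laplacian's Fiedler vector."""
    L = np.diag(A.sum(axis=1)) - A
    vals, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] >= 0).astype(int)   # sign split of the 2nd eigenvector

def block_adj(p_in, p_out):
    """Weighted 6-node graph with planted clusters {0,1,2} and {3,4,5}."""
    A = np.full((6, 6), p_out)
    A[:3, :3] = p_in
    A[3:, 3:] = p_in
    np.fill_diagonal(A, 0.0)
    return A

layers = [block_adj(1.0, 0.1), block_adj(0.8, 0.05)]
labels = fiedler_partition(aggregate(layers, [0.5, 0.5]))
```

With equal weights the aggregated graph keeps the planted two-block structure, so the Fiedler sign split recovers the clusters exactly.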
Dey, C.; Dey, S. K.
1983-01-01
An explicit finite difference scheme consisting of a predictor and a corrector has been developed and applied to solve some hyperbolic partial differential equations (PDEs). The corrector is a convex-type function which is applied at each time level and at each mesh point. It contains a parameter which may be estimated so that, for larger time steps, the algorithm remains stable and converges quickly to the steady-state solution. Some examples are given.
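A minimal sketch of such a predictor-corrector scheme, assuming linear advection u_t + a u_x = 0 with periodic boundaries as the model hyperbolic PDE (the paper's exact corrector function is not specified in the abstract): an upwind predictor followed by a convex combination with parameter theta.

```python
import numpy as np

def step(u, c, theta):
    """One predictor-corrector step for u_t + a u_x = 0 (periodic grid),
    with Courant number c = a*dt/dx and convex-combination parameter theta."""
    pred = u - c * (u - np.roll(u, 1))            # upwind predictor
    corr = pred - c * (pred - np.roll(pred, 1))   # re-applied as corrector
    return theta * pred + (1.0 - theta) * corr    # convex-type combination

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(-100.0 * (x - 0.5) ** 2)               # smooth initial pulse
mass0 = u.sum()
for _ in range(200):
    u = step(u, c=0.5, theta=0.5)
```

For 0 ≤ c ≤ 1 each stage is a convex combination of neighboring values, so the scheme is monotone: it conserves the discrete mass exactly and cannot create new extrema.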
Using Fisher Information Criteria for Chemical Sensor Selection via Convex Optimization Methods
2016-11-16
The mean response vector, ECD scale matrix, slack variables and their constraints are defined for the convex optimization. The objective function becomes ln(det(C(θ))) ≥ ln(det(F⁻¹(θ; s))) = −ln(det(F(θ; s))), where s are the slack variables.
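The D-optimal flavor of this selection problem can be sketched as maximizing ln det of a convex combination of per-sensor Fisher information matrices, the negative of the log-det covariance bound above. The rank-one matrices, the projected-gradient solver, and all parameters below are illustrative assumptions, not the report's formulation.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex {w >= 0, sum w = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

rng = np.random.default_rng(1)
G = rng.normal(size=(5, 3))                 # 5 candidate sensors, 3 parameters
Fk = np.einsum('ki,kj->kij', G, G)          # hypothetical rank-one Fisher matrices

def logdet_F(w, eps=1e-9):
    """log det of the aggregated Fisher information for weights w."""
    F = np.einsum('k,kij->ij', w, Fk) + eps * np.eye(3)
    return np.linalg.slogdet(F)[1]

w = np.full(5, 0.2)                         # start from uniform weights
best_w = w.copy()
for _ in range(300):                        # projected gradient ascent on log det
    F = np.einsum('k,kij->ij', w, Fk) + 1e-9 * np.eye(3)
    grad = np.einsum('ij,kji->k', np.linalg.inv(F), Fk)  # d logdet/dw_k = tr(F^-1 Fk)
    w = project_simplex(w + 0.05 * grad)
    if logdet_F(w) > logdet_F(best_w):
        best_w = w.copy()
```

Because log det is concave on positive definite matrices and the simplex is convex, this is a convex optimization problem in the weights.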
Coarse-convex-compactification approach to numerical solution of nonconvex variational problems
Czech Academy of Sciences Publication Activity Database
Meziat, R.; Roubíček, Tomáš; Patino, D.
2010-01-01
Roč. 31, č. 4 (2010), s. 460-488 ISSN 0163-0563 Grant - others:GA MŠk(CZ) LC06052 Program:LC Institutional research plan: CEZ:AV0Z20760514 Keywords : convex approximations * method of moments * relaxed variational problems Subject RIV: BA - General Mathematics Impact factor: 0.687, year: 2010 http://www.informaworld.com/smpp/content~db=all~content=a922886514~frm=titlelink
Monomial Crystals and Partition Crystals
Tingley, Peter
2010-04-01
Recently Fayers introduced a large family of combinatorial realizations of the fundamental crystal B(Λ0) for the affine Lie algebra ŝl_n, where the vertices are indexed by certain partitions. He showed that special cases of this construction agree with the Misra-Miwa realization and with Berg's ladder crystal. Here we show that another special case is naturally isomorphic to a realization using Nakajima's monomial crystal.
Directory of Open Access Journals (Sweden)
Kazuyuki Aihara
2011-04-01
Full Text Available The classical information-theoretic measures such as the entropy and the mutual information (MI are widely applicable to many areas in science and engineering. Csiszar generalized the entropy and the MI by using the convex functions. Recently, we proposed the grid occupancy (GO and the quasientropy (QE as measures of independence. The QE explicitly includes a convex function in its definition, while the expectation of GO is a subclass of QE. In this paper, we study the effect of different convex functions on GO, QE, and Csiszar’s generalized mutual information (GMI. A quality factor (QF is proposed to quantify the sharpness of their minima. Using the QF, it is shown that these measures can have sharper minima than the classical MI. Besides, a recursive algorithm for computing GMI, which is a generalization of Fraser and Swinney’s algorithm for computing MI, is proposed. Moreover, we apply GO, QE, and GMI to chaotic time series analysis. It is shown that these measures are good criteria for determining the optimum delay in strange attractor reconstruction.
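A Csiszár-type generalized mutual information is straightforward to compute for a discrete joint distribution: apply a convex f with f(1) = 0 to the ratio between the joint and the product of its marginals. The two convex functions and the toy distributions below are illustrative choices, not those studied in the paper.

```python
import numpy as np

def csiszar_gmi(P, f):
    """Csiszar-type generalized MI: sum_ij Px_i Py_j f(P_ij / (Px_i Py_j))."""
    P = P / P.sum()
    Px, Py = P.sum(axis=1), P.sum(axis=0)
    Q = np.outer(Px, Py)                 # product of marginals
    mask = Q > 0
    return float(np.sum(Q[mask] * f(P[mask] / Q[mask])))

f_kl = lambda t: t * np.log(np.maximum(t, 1e-300))   # recovers the classical MI
f_tv = lambda t: 0.5 * np.abs(t - 1.0)               # total-variation-type convex f

indep = np.outer([0.3, 0.7], [0.6, 0.4])   # exactly independent joint distribution
dep = np.array([[0.45, 0.05],
                [0.05, 0.45]])             # strongly dependent joint distribution
```

For strictly convex f with f(1) = 0, the measure is zero exactly at independence and positive otherwise, which is what makes it usable as an independence criterion.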
Keshavarzi, Alireza; Noori, Lila Khaje
2010-12-01
River bed scouring is a major environmental problem for fish and aquatic habitat resources. In this study, to prevent the river bed and banks from scouring, different types of bed sills including convex, concave and linear patterns were installed in a movable channel bed in a laboratory flume. The bed sills were tested with nine different arrangements and under different flow conditions. To find the most effective bed sill pattern, the scouring depth was measured downstream of the bed sill over a long experimental duration. The scour depth was measured at the middle and at the end of each experimental test for different ratios of the arch radius to the channel width (r/w). The experimental results indicated that the convex pattern with r/w=0.35 produced the minimum bed scouring depth at the center line, whereas the concave pattern with r/w=0.23 produced the minimum scour depth at the banks. Therefore, the convex pattern was the most effective configuration for prevention of scouring at the center line of the river, while the concave pattern was very effective in preventing scouring at the river banks. These findings are suggested for use in practical applications.
Nezir, Veysel; Mustafa, Nizami
2017-04-01
In 2008, P.K. Lin provided the first example of a nonreflexive space that can be renormed to have the fixed point property for nonexpansive mappings. This space was l1, the Banach space of absolutely summable sequences, and researchers aim to generalize this to c0, the Banach space of null sequences. Before P.K. Lin's intriguing result, in 1979, Goebel and Kuczumow showed that there is a large class of non-weak* compact closed, bounded, convex subsets of l1 with the fixed point property for nonexpansive mappings. P.K. Lin was then inspired by Goebel and Kuczumow's ideas in obtaining his result. Similarly to P.K. Lin's study, Hernández-Linares worked on L1 and, in his Ph.D. thesis supervised by Maria Japón, showed that L1 can be renormed to have the fixed point property for affine nonexpansive mappings. Related questions for c0 have since been considered by researchers. Recently, Nezir constructed several equivalent norms on c0 and showed that there are non-weakly compact closed, bounded, convex subsets of c0 with the fixed point property for affine nonexpansive mappings. In this study, we construct a family of equivalent norms containing those developed by Nezir and show that there exists a large class of non-weakly compact closed, bounded, convex subsets of c0 with the fixed point property for affine nonexpansive mappings.
Convex-based void filling method for CAD-based Monte Carlo geometry modeling
International Nuclear Information System (INIS)
Yu, Shengpeng; Cheng, Mengyun; Song, Jing; Long, Pengcheng; Hu, Liqin
2015-01-01
Highlights: • We present a new void filling method named CVF for CAD-based MC geometry modeling. • We describe convex-based void description and quality-based space subdivision. • The results showed improvements provided by CVF for both modeling and MC calculation efficiency. - Abstract: CAD-based automatic geometry modeling tools have been widely applied to generate Monte Carlo (MC) calculation geometry for complex systems according to CAD models. Automatic void filling is one of the main functions of CAD-based MC geometry modeling tools, because the void space between parts in CAD models is traditionally not modeled, while MC codes such as MCNP need all the problem space to be described. A dedicated void filling method, named Convex-based Void Filling (CVF), is proposed in this study for efficient void filling and concise void descriptions. The method subdivides all the problem space into disjoint regions using Quality-based Subdivision (QS) and describes the void space in each region with complementary descriptions of the convex volumes intersecting that region. It has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested on the International Thermonuclear Experimental Reactor (ITER) Alite model. The results showed that the new method reduced both automatic modeling time and MC calculation time.
Towards reproducible experimental studies for non-convex polyhedral shaped particles
Wilke, Daniel N.; Pizette, Patrick; Govender, Nicolin; Abriak, Nor-Edine
2017-06-01
The packing density and flat bottomed hopper discharge of non-convex polyhedral particles are investigated in a systematic experimental study. The motivation for this study is two-fold. Firstly, to establish an approach to deliver quality experimental particle packing data for non-convex polyhedral particles that can be used for characterization and validation purposes of discrete element codes. Secondly, to make the reproducibility of experimental setups as convenient and readily available as possible using affordable and accessible technology. The primary technology for this study is fused deposition modeling, used to 3D print polylactic acid (PLA) particles using readily available 3D printer technology. A total of 8000 biodegradable particles were printed, 1000 white particles and 1000 black particles for each of the four particle types considered in this study. Reproducibility is one benefit of using fused deposition modeling to print particles, but an extremely important additional benefit is that specific particle properties can be explicitly controlled. As an example, in this study the volume fraction of each particle can be controlled, i.e. the effective particle density can be adjusted. In this study the particle volume decreases drastically as the non-convexity is increased; however, all printed white particles in this study have the same mass to within 2% of each other.
A two-layer recurrent neural network for nonsmooth convex optimization problems.
Qin, Sitian; Xue, Xiaoping
2015-06-01
In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1-norm minimization problems.
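A discrete-time analogue of the network's behavior — reach the equality-feasible region, stay there, and decrease the nonsmooth objective — is projected subgradient descent for L1-norm minimization under linear equality constraints. This sketch is not the authors' two-layer recurrent model; the problem data below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 8))                    # underdetermined linear system
x_true = np.zeros(8)
x_true[[1, 5]] = [1.5, -2.0]                   # hypothetical sparse ground truth
b = A @ x_true

Ap = A.T @ np.linalg.inv(A @ A.T)              # for projection onto {x : Ax = b}
def project_affine(x):
    """Euclidean projection onto the equality-feasible region."""
    return x - Ap @ (A @ x - b)

x = project_affine(np.zeros(8))                # least-norm feasible starting point
f0 = np.abs(x).sum()
best = f0
for k in range(1, 3001):                       # projected subgradient on ||x||_1
    x = project_affine(x - (0.5 / k) * np.sign(x))
    best = min(best, np.abs(x).sum())
```

Every iterate after the first projection is feasible (the "stays there thereafter" property), and with diminishing step sizes the best objective value converges to the constrained minimum.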
Derivative-free generation and interpolation of convex Pareto optimal IMRT plans
Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk
2006-12-01
In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
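For two convex objectives, individual Pareto optimal points can be generated by weighted-sum scalarization, the basic ingredient that Sandwich-type algorithms refine with piecewise-linear upper and lower bounds. The toy quadratic objectives below are stand-ins for the clinical criteria; this is not the paper's derivative-free algorithm.

```python
import numpy as np

f1 = lambda x: (x - 1.0) ** 2    # toy objective 1 (stand-in for dose heterogeneity)
f2 = lambda x: (x + 1.0) ** 2    # toy objective 2 (stand-in for organ dose)

def pareto_point(w):
    """Minimizer of w*f1 + (1-w)*f2 for these quadratics.
    Stationarity: w*(x-1) + (1-w)*(x+1) = 0  =>  x = 2w - 1."""
    x = 2.0 * w - 1.0
    return f1(x), f2(x)

weights = np.linspace(0.05, 0.95, 10)
front = np.array([pareto_point(w) for w in weights])   # sampled Pareto frontier
```

For these objectives the frontier satisfies sqrt(f1) + sqrt(f2) = 2, a convex curve; sweeping the weight traces the trade-off monotonically, and connecting sampled points piecewise-linearly gives the kind of frontier approximation a Sandwich algorithm tightens.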
Derivative-free generation and interpolation of convex Pareto optimal IMRT plans
International Nuclear Information System (INIS)
Hoffmann, Aswin L; Siem, Alex Y D; Hertog, Dick den; Kaanders, Johannes H A M; Huizenga, Henk
2006-01-01
In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning
Towards reproducible experimental studies for non-convex polyhedral shaped particles
Directory of Open Access Journals (Sweden)
Wilke Daniel N.
2017-01-01
Full Text Available The packing density and flat bottomed hopper discharge of non-convex polyhedral particles are investigated in a systematic experimental study. The motivation for this study is two-fold. Firstly, to establish an approach to deliver quality experimental particle packing data for non-convex polyhedral particles that can be used for characterization and validation purposes of discrete element codes. Secondly, to make the reproducibility of experimental setups as convenient and readily available as possible using affordable and accessible technology. The primary technology for this study is fused deposition modeling, used to 3D print polylactic acid (PLA) particles using readily available 3D printer technology. A total of 8000 biodegradable particles were printed, 1000 white particles and 1000 black particles for each of the four particle types considered in this study. Reproducibility is one benefit of using fused deposition modeling to print particles, but an extremely important additional benefit is that specific particle properties can be explicitly controlled. As an example, in this study the volume fraction of each particle can be controlled, i.e. the effective particle density can be adjusted. In this study the particle volume decreases drastically as the non-convexity is increased; however, all printed white particles in this study have the same mass to within 2% of each other.
Novel method of finding extreme edges in a convex set of N-dimension vectors
Hu, Chia-Lun J.
2001-11-01
As we published in the last few years, for a binary neural network pattern recognition system to learn a given mapping {Um → Vm, m=1 to M}, where Um is an N-dimension analog (pattern) vector and Vm is a P-bit binary (classification) vector, the if-and-only-if (IFF) condition that this network can learn this mapping is that each i-set in {Ymi, m=1 to M} (where Ymi ≡ VmiUm and Vmi = +1 or −1 is the i-th bit of Vm; i=1 to P, so there are P sets included here) is POSITIVELY, LINEARLY, INDEPENDENT, or PLI. We have shown that this PLI condition is MORE GENERAL than the convexity condition applied to a set of N-vectors. In the design of old learning machines, we know that if a set of N-dimension analog vectors forms a convex set, and if the machine can learn the boundary vectors (or extreme edges) of this set, then it can definitely learn the inside vectors contained in this POLYHEDRON CONE. This paper reports a new method and new algorithm to find the boundary vectors of a convex set of N-D analog vectors.
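The PLI condition can be tested directly as a linear-programming feasibility problem: a set of vectors is positively linearly independent iff no nonzero nonnegative combination of them vanishes. The sketch below uses scipy.optimize.linprog for the feasibility test; this is an assumption of the illustration, not the author's extreme-edge algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def is_pli(Y):
    """Positive linear independence test for the rows of Y.
    PLI  <=>  there is no c >= 0, c != 0, with Y^T c = 0.
    Normalizing sum(c) = 1 turns this into an LP feasibility problem."""
    M, N = Y.shape                       # M vectors of dimension N
    A_eq = np.vstack([Y.T, np.ones(M)])  # Y^T c = 0  and  sum(c) = 1
    b_eq = np.append(np.zeros(N), 1.0)
    res = linprog(np.zeros(M), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * M, method="highs")
    return not res.success               # infeasible LP  =>  the set is PLI

pli_set = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # all in an open half-plane
not_pli = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])  # 0 in the positive span
```

A set lying strictly inside an open half-space through the origin is always PLI, which is why the convexity (polyhedral-cone) condition mentioned above is a special case.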
Schein, Stan; Gayed, James Maurice
2014-02-25
The three known classes of convex polyhedron with equal edge lengths and polyhedral symmetry--tetrahedral, octahedral, and icosahedral--are the 5 Platonic polyhedra, the 13 Archimedean polyhedra--including the truncated icosahedron or soccer ball--and the 2 rhombic polyhedra reported by Johannes Kepler in 1611. (Some carbon fullerenes, inorganic cages, icosahedral viruses, geodesic structures, and protein complexes resemble these fundamental shapes.) Here we add a fourth class, "Goldberg polyhedra," which are also convex and equilateral. We begin by decorating each of the triangular facets of a tetrahedron, an octahedron, or an icosahedron with the T vertices and connecting edges of a "Goldberg triangle." We obtain the unique set of internal angles in each planar face of each polyhedron by solving a system of n equations and n variables, where the equations set the dihedral angle discrepancy about different types of edge to zero, and the variables are a subset of the internal angles in 6gons. Like the faces in Kepler's rhombic polyhedra, the 6gon faces in Goldberg polyhedra are equilateral and planar but not equiangular. We show that there is just a single tetrahedral Goldberg polyhedron, a single octahedral one, and a systematic, countable infinity of icosahedral ones, one for each Goldberg triangle. Unlike carbon fullerenes and faceted viruses, the icosahedral Goldberg polyhedra are nearly spherical. The reasoning and techniques presented here will enable discovery of still more classes of convex equilateral polyhedra with polyhedral symmetry.
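The counting behind the "systematic, countable infinity" of icosahedral cases follows standard Goldberg-triangle combinatorics: T = h² + hk + k², giving 10T + 2 faces, 20T vertices and 30T edges. These counts are standard background sketched here for orientation, not formulas taken from the paper.

```python
def goldberg_counts(h, k):
    """Combinatorics of the icosahedral Goldberg polyhedron GP(h, k)."""
    T = h * h + h * k + k * k       # triangulation number of the Goldberg triangle
    faces = 10 * T + 2              # 12 pentagons + 10*(T - 1) hexagons
    vertices = 20 * T
    edges = 30 * T
    return T, faces, vertices, edges

# GP(1, 1) is the truncated icosahedron (the soccer ball): T = 3
T, F, V, E = goldberg_counts(1, 1)
```

Every (h, k) satisfies Euler's formula F + V − E = 2, and each distinct Goldberg triangle yields one icosahedral polyhedron in the countable family.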
Fan, Yiqiang; Li, Huawei; Foulds, Ian G.
2013-01-01
This paper reports a new technique for fabricating polystyrene microlenses with both convex and concave profiles that are integrated in a polymer-based microfluidic system. The polystyrene microlenses, or microlens array, are fabricated using the free
Becerril, S.; Mirabet, E.; Lizon, J. L.; Abril, M.; Cárdenas, C.; Ferro, I.; Morales, R.; Pérez, D.; Ramón, A.; Sánchez-Carrasco, M. A.; Quirrenbach, A.; Amado, P.; Ribas, I.; Reiners, A.; Caballero, J. A.; Seifert, W.; Herranz, J.
2016-07-01
CARMENES is the new high-resolution, high-stability spectrograph built for the 3.5 m telescope at the Calar Alto Observatory (CAHA, Almería, Spain) by a consortium of German and Spanish institutions. The instrument is composed of two separate spectrographs: a VIS channel (550-1050 nm) and a NIR channel (950-1700 nm). The institution responsible for the NIR-channel spectrograph is the Instituto de Astrofísica de Andalucía (IAA-CSIC). It has been manufactured, assembled, integrated and verified in the last two years, delivered in fall 2015 and commissioned in December 2015. One of the most challenging systems in this cryogenic channel is the Cooling System. Owing to the highly demanding stability requirements, it is one of the core systems providing outstanding stability to the channel. At the edge of the state of the art, the Cooling System provides the cold mass (~1 ton) with thermal stability better than a few hundredths of a degree within 24 hours (goal: 0.01 K/day). The present paper describes the Assembly, Integration and Verification (AIV) phase of the CARMENES NIR-channel Cooling System implemented at IAA-CSIC and its later installation at the CAHA 3.5 m telescope, with the most relevant highlights shown in terms of thermal performance. The CARMENES NIR-channel Cooling System has been implemented by IAA-CSIC through a very fruitful collaboration with the cryo-vacuum department of ESO (European Southern Observatory), with Jean-Louis Lizon as its head and main collaborator. The present work sets an important precedent for cryogenic systems for future large instrumentation in astrophysics, such as that of the E-ELT (European Extremely Large Telescope).
France, Kevin; Hoadley, Keri; Fleming, Brian T.; Kane, Robert; Nell, Nicholas; Beasley, Matthew; Green, James C.
2016-03-01
NASA’s suborbital program provides an opportunity to conduct unique science experiments above Earth’s atmosphere and is a pipeline for the technology and personnel essential to future space astrophysics, heliophysics, and atmospheric science missions. In this paper, we describe three astronomy payloads developed (or in development) by the Ultraviolet Rocket Group at the University of Colorado. These far-ultraviolet (UV) (100-160nm) spectrographic instruments are used to study a range of scientific topics, from gas in the interstellar medium (accessing diagnostics of material spanning five orders of magnitude in temperature in a single observation) to the energetic radiation environment of nearby exoplanetary systems. The three instruments, Suborbital Local Interstellar Cloud Experiment (SLICE), Colorado High-resolution Echelle Stellar Spectrograph (CHESS), and Suborbital Imaging Spectrograph for Transition region Irradiance from Nearby Exoplanet host stars (SISTINE) form a progression of instrument designs and component-level technology maturation. SLICE is a pathfinder instrument for the development of new data handling, storage, and telemetry techniques. CHESS and SISTINE are testbeds for technology and instrument design enabling high-resolution (R>105) point source spectroscopy and high throughput imaging spectroscopy, respectively, in support of future Explorer, Probe, and Flagship-class missions. The CHESS and SISTINE payloads support the development and flight testing of large-format photon-counting detectors and advanced optical coatings: NASA’s top two technology priorities for enabling a future flagship observatory (e.g. the LUVOIR Surveyor concept) that offers factors of ˜50-100 gain in UV spectroscopy capability over the Hubble Space Telescope. We present the design, component level laboratory characterization, and flight results for these instruments.
Optimization of a space spectrograph main frame and frequency response analysis of the frame
Zhang, Xin-yu; Chen, Zhi-yuan; Yang, Shi-mo
2009-07-01
A space spectrograph main structure is optimized and examined in order to satisfy the space operational needs. The space spectrograph will be transported into its operational orbit by the launch vehicle and will undergo a dynamic environment during the spacecraft injection period. Unexpected shocks may cause degradation of observation accuracy and even equipment damage. The main frame is one of the most important parts because its mechanical performance has great influence on the operational life of the spectrograph, the accuracy of observation, etc. For reasons of cost reduction and stability, lower weight and higher structural stiffness of the frame are simultaneously required. Structural optimization was conducted considering the initial design modal analysis results. The fundamental modal frequency rose by 10.34% while the overall weight fell by 8.63% compared to the initial design. The purpose of this study is to analyze the mechanical properties of the new main frame design and verify whether it can satisfy strict optical demands under the dynamic impact during spacecraft injection. To realize and forecast the frequency response characteristics of the main structure in the mechanical environment experiment, dynamic analysis of the structure was performed simulating impulse loads from the bottom base. Frequency response analysis (FRA) of the frame was then performed using the FEA software MSC.PATRAN/NASTRAN. Results of shock response spectrum (SRS) responses from the base excitations are given. Stress and acceleration dynamic responses of essential positions during the spacecraft injection course were also calculated, and the spectrometer structure design was examined considering stiffness/strength demands. In this simulation, the maximum stresses of the Cesic material in the two acceleration application cases are 45.1 and 74.1 MPa, respectively; both are less than the yield strength. As demonstrated by the simulation, the strength reservation of the frame is
PEPSI, the High-Resolution Optical-IR Spectrograph for the LBT
Andersen, Michael; Strassmeier, Klaus; Hoffman, Axel; Woche, Manfred; Spano, Paolo
PEPSI is a high-resolution, fibre-fed optical-IR polarimetric echelle spectrograph for the Large Binocular Telescope (LBT). PEPSI utilizes the two 8.4 m LBT apertures to simultaneously record four polarization states at a resolution of 120,000. The extension of the coverage towards the IR is mainly motivated by the larger Zeeman splitting of IR lines, which would allow the study of weaker/fainter magnetic structures on stars. The two optical arms, which also have an integral-light mode with R up to 300,000, are under construction, while the IR arm is being designed.
Semiquantitative spectrographic analysis of nuclear interest minerals and of various products
International Nuclear Information System (INIS)
Alvarez Gonzalez, F.; Roca Adell, M.; Fernandez Cellini, R.
1958-01-01
Because of the great number of samples of various kinds, minerals for the most part, received in the Chemical Division for complete analysis, a rapid spectrographic method has been developed. It permits the semiquantitative determination of the following elements: Al, As, Ag, Au, B, Be, Bi, Ca, Cd, Ce, Co, Cr, Cu, Fe, Ga, Ge, Hf, Hg, In, K, La, Li, Mg, Mn, Mo, Na, Nb, P, Pb, Pt, Sb, Si, Sn, Sr, Ta, Ti, V, W, Y, Zn and Zr. (Author) 14 refs
Energy Technology Data Exchange (ETDEWEB)
Roca, M
1967-07-01
A spectrographic method of analysis has been developed for uranium-molybdenum alloys containing up to 10% Mo. The carrier distillation technique, with gallium oxide and graphite as carriers, is used for the semiquantitative determination of Al, Cr, Fe, Ni and Si, involving the conversion of the samples into oxides. As a consequence of the study of the influence of molybdenum on the line intensities, it suffices to prepare only one set of standards with 0.6% MoO{sub 3}. Total-burning excitation is used for calcium, employing two sets of standards with 0.6 and 7.5% MoO{sub 3}. (Author) 5 refs.
Optical emission spectrographic analysis of thulium oxide for rare earth impurities
International Nuclear Information System (INIS)
Chandola, L.C.; Khanna, P.P.; Dixit, V.C.
1988-01-01
An optical emission spectrographic method has been developed for the analysis of high purity thulium oxide to determine the rare earth elements Er, Yb, Lu and Y. A 1200 groove/mm grating blazed at 3300 Å is used to record the spectrum on Kodak SA-1 photographic plates after excitation of the graphite-sample (1:1) mixture in a DC arc. The determination range is 0.008 per cent to 0.1 per cent and the relative standard deviation is 17.6 per cent. (author). 15 refs., 5 tables, 5 figs