Ultrabroadband optical chirp linearization for precision metrology applications.
Roos, Peter A; Reibel, Randy R; Berg, Trenton; Kaylor, Brant; Barber, Zeb W; Babbitt, Wm Randall
2009-12-01
We demonstrate precise linearization of ultrabroadband laser frequency chirps via a fiber-based self-heterodyne technique to enable extremely high-resolution, frequency-modulated cw laser radar (LADAR) and a wide range of other metrology applications. Our frequency chirps cover bandwidths up to nearly 5 THz with frequency errors as low as 170 kHz relative to linearity. We show that this performance enables 31 µm transform-limited LADAR range resolution (FWHM) and 86 nm range precision over a 1.5 m range baseline. Much longer range baselines are possible but are limited by atmospheric turbulence and fiber dispersion.
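The quoted 31 µm resolution follows from the textbook FMCW relation ΔR = c/(2B), where B is the chirp bandwidth. A minimal check, assuming B ≈ 4.8 THz (the abstract says only "nearly 5 THz", so this value is an assumption):

```python
# Transform-limited range resolution of a linear-FM (chirped) LADAR.
c = 299_792_458.0          # speed of light (m/s)
B = 4.8e12                 # optical chirp bandwidth (Hz), assumed value

delta_R = c / (2 * B)      # textbook FMCW range resolution (m)
print(f"range resolution ~ {delta_R * 1e6:.1f} um")   # ~31 um, consistent with the abstract
```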
Sfermion Precision Measurements at a Linear Collider
Freitas, A.; Ananthanarayan, B.; Bartl, A.; Blair, G.A.; Blochinger, C.; Boos, E.; Brandenburg, A.; Datta, A.; Djouadi, A.; Fraas, H.; Guasch, J.; Hesselbach, S.; Hidaka, K.; Hollik, W.; Kernreiter, T.; Maniatis, M.; von Manteuffel, A.; Martyn, H.U.; Miller, D.J.; Moortgat-Pick, Gudrid A.; Muhlleitner, M.; Nauenberg, U.; Kluge, Hannelies; Porod, W.; Sola, J.; Sopczak, A.; Stahl, A.; Weber, M.M.; Zerwas, P.M.
2002-01-01
At future e+e− linear colliders, the event rates and clean signals of scalar fermion production, in particular for the scalar leptons, allow very precise measurements of their masses and couplings and the determination of their quantum numbers. Various methods are proposed for extracting these parameters from the data at the sfermion thresholds and in the continuum. At the same time, NLO radiative corrections and non-zero width effects have been calculated in order to match the experimental accuracy. The substantial mixing expected for the third-generation sfermions opens up additional opportunities. Techniques are presented for determining potential CP-violating phases and for extracting tan β from the stau sector, in particular at high values. The consequences of possible large mass differences in the stop and sbottom system are explored in dedicated analyses.
Precise linear gating circuit on integrated microcircuits
Butskii, V.V.; Vetokhin, S.S.; Reznikov, I.V.
A precise linear gating circuit built on four integrated microcircuits is described, and its basic schematic is given. The circuit consists of two input differential cascades whose common load is a pair of current followers with low input and high output resistance. The follower outputs are connected to a high-ohmic dynamic load formed by a current source, which permits high amplification (>1000) in a single cascade. Nonlinearity is below 0.1% over the input-signal amplitude range of −10 to +10 V. The rise time of an output signal with 10 V amplitude is 100 ns. Attenuation of the input signal with the gate closed is 60 dB. The circuit described is used in a device intended for processing scintillation-sensor signals.
Sfermion precision measurements at a linear collider
Freitas, A.; Ananthanarayan, B.; Bartl, A.; Blair, G.; Bloechinger, C.; Boos, E.; Brandenburg, A.; Datta, A.; Djouadi, A.; Fraas, H.; Guasch, J.; Hesselbach, S.; Hidaka, K.; Hollik, W.; Kernreiter, T.; Maniatis, M.; Manteuffel, A. von; Martyn, H.-U.; Miller, D.J.; Moortgat-Pick, G.; Muehlleitner, M.; Nauenberg, U.; Nowak, H.; Porod, W.; Sola, J.; Sopczak, A.; Stahl, A.; Weber, M.M.; Zerwas, P.M.
2003-01-01
At prospective e+e− linear colliders, the large cross sections and clean signals of scalar fermion production, in particular for the scalar leptons, allow very precise measurements of their masses and couplings and the determination of their quantum numbers. Various methods are proposed for extracting these parameters from the data at the sfermion thresholds and in the continuum. At the same time, NLO radiative corrections and non-zero width effects have been calculated in order to match the experimental accuracy. The substantial mixing expected in the third generation opens up additional opportunities. Techniques are presented for determining potential CP-violating phases and for extracting tan β from the stau sector, in particular at high values. The consequences of possible large mass differences in the stop and sbottom system are explored in dedicated analyses.
Sfermion precision measurements at a linear collider
Freitas, A.
2003-01-01
At future e+e− linear colliders, the event rates and clean signals of scalar fermion production, in particular for the scalar leptons, allow very precise measurements of their masses and couplings and the determination of their quantum numbers. Various methods are proposed for extracting these parameters from the data at the sfermion thresholds and in the continuum. At the same time, NLO radiative corrections and non-zero width effects have been calculated in order to match the experimental accuracy. The substantial mixing expected for the third-generation sfermions opens up additional opportunities. Techniques are presented for determining potential CP-violating phases and for extracting tan β from the stau sector, in particular at high values. The consequences of possible large mass differences in the stop and sbottom system are explored in dedicated analyses.
Verification of Linear (In)Dependence in Finite Precision Arithmetic
Rohn, Jiří
2014-01-01
Vol. 8, No. 3–4 (2014), pp. 323–328. ISSN 1661-8289. Institutional support: RVO:67985807. Keywords: linear dependence; linear independence; pseudoinverse matrix; finite precision arithmetic; verification; MATLAB file. Subject RIV: BA - General Mathematics
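The verified pseudoinverse-based algorithm itself is not reproduced in the record above. As a generic illustration of why testing linear (in)dependence in floating point requires a tolerance, here is an SVD rank check in Python; the tolerance choice mirrors the default of `numpy.linalg.matrix_rank` and is not the paper's method:

```python
import numpy as np

def numerically_independent(A, tol=None):
    """Tolerance-based check of linear independence of the columns of A.

    Generic SVD rank test, not the verified pseudoinverse-based
    algorithm of the paper above.
    """
    A = np.asarray(A, dtype=float)
    s = np.linalg.svd(A, compute_uv=False)
    if tol is None:  # default tolerance used by numpy.linalg.matrix_rank
        tol = s.max() * max(A.shape) * np.finfo(float).eps
    rank = int((s > tol).sum())
    return rank == A.shape[1]

B_mat = np.array([[1.0, 2.0], [2.0, 4.0]])   # exactly dependent columns
print(numerically_independent(B_mat))        # False
print(numerically_independent(np.eye(3)))    # True
```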
Fundamental limits of scintillation detector timing precision
Derenzo, Stephen E; Choong, Woon-Seng; Moses, William W
2014-01-01
In this paper we review the primary factors that affect the timing precision of a scintillation detector. Monte Carlo calculations were performed to explore the dependence of the timing precision on the number of photoelectrons, the scintillator decay and rise times, the depth-of-interaction uncertainty, the time dispersion of the optical photons (modeled as an exponential decay), the photodetector rise time and transit-time jitter, the leading-edge trigger level, and electronic noise. The Monte Carlo code was used to estimate the practical limits on the timing precision for an energy deposition of 511 keV in 3 mm × 3 mm × 30 mm Lu2SiO5:Ce and LaBr3:Ce crystals. The calculated timing precisions are consistent with the best experimental literature values. We then calculated the timing precision for 820 cases that sampled scintillator rise times from 0 to 1.0 ns, photon dispersion times from 0 to 0.2 ns, photodetector time jitters from 0 to 0.5 ns fwhm, and A from 10 to 10 000 photoelectrons per ns decay time. Since the timing precision R was found to depend on A^(−1/2) more than any other factor, we tabulated the parameter B, where R = B·A^(−1/2). An empirical analytical formula was found that fits the tabulated values of B with an rms deviation of 2.2% of the value of B. The theoretical lower bound of the timing precision was calculated for the example of 0.5 ns rise time, 0.1 ns photon dispersion, and 0.2 ns fwhm photodetector time jitter. The lower bound was at most 15% lower than leading-edge timing discrimination for A from 10 to 10 000 photoelectrons per ns. A timing precision of 8 ps fwhm should be possible for an energy deposition of 511 keV using currently available photodetectors if a theoretically possible scintillator were developed that could produce 10 000 photoelectrons per ns.
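The abstract's empirical relation R = B·A^(−1/2) is easy to sketch. The value of B below is purely illustrative (the paper tabulates B for 820 parameter combinations); what the sketch shows is the square-root scaling in the photoelectron rate A:

```python
def timing_fwhm(A, B=0.1):
    """Timing precision R = B * A**-0.5 (the abstract's empirical form).

    A: photoelectrons per ns of scintillator decay time.
    B: tabulated in the paper; the value here is purely illustrative.
    """
    return B * A ** -0.5

# Quadrupling the photoelectron rate halves the timing jitter:
r1, r4 = timing_fwhm(100), timing_fwhm(400)
print(r1 / r4)   # 2.0
```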
Precision measurements of linear scattering density using muon tomography
Åström, E.; Bonomi, G.; Calliari, I.; Calvini, P.; Checchia, P.; Donzella, A.; Faraci, E.; Forsberg, F.; Gonella, F.; Hu, X.; Klinger, J.; Sundqvist Ökvist, L.; Pagano, D.; Rigoni, A.; Ramous, E.; Urbani, M.; Vanini, S.; Zenoni, A.; Zumerle, G.
2016-07-01
We demonstrate that muon tomography can be used to precisely measure the properties of various materials. The materials considered were extracted from an experimental blast furnace and include carbon (coke) and iron oxides, for which the linear scattering density relative to the mass density has been measured with an absolute precision of 10%. We report the procedures used to obtain this precision and discuss the expected performance of the technique when applied to heavier materials. The results do not depend on the specific type of material considered and can therefore be extended to any application.
Polarimetry at a Future Linear Collider - How Precise?
Woods, Michael B
2000-01-01
At a future linear collider, a polarized electron beam will play an important role in interpreting new physics signals. Backgrounds to a new physics reaction can be reduced by choice of the electron polarization state. The origin of a new physics reaction can be clarified by measuring its polarization dependence. This paper examines some options for polarimetry with an emphasis on physics issues that motivate how precise the polarization determination needs to be. In addition to Compton polarimetry, the possibility of using Standard Model asymmetries, such as the asymmetry in forward W-pairs, as a polarimeter is considered. Both e+e− and e−e− collider modes are considered.
Possible limits of plasma linear colliders
Zimmermann, F.
2017-07-01
Plasma linear colliders have been proposed as next- or next-next-generation energy-frontier machines for high-energy physics. I investigate possible fundamental limits on the energy and luminosity of this type of collider, considering acceleration, multiple scattering off plasma ions, intrabeam scattering, bremsstrahlung, and betatron radiation. The question of energy efficiency is also addressed.
Diffusive limits for linear transport equations
Pomraning, G.C.
1992-01-01
The authors show that the Hilbert and Chapman-Enskog asymptotic treatments that reduce the nonlinear Boltzmann equation to the Euler and Navier-Stokes fluid equations have analogs in linear transport theory. In this linear setting, these fluid limits are described by diffusion equations, involving familiar and less familiar diffusion coefficients. Because of the linearity, one can carry out explicitly the initial and boundary layer analyses required to obtain asymptotically consistent initial and boundary conditions for the diffusion equations. In particular, the effects of boundary curvature and of boundary-condition variation along the surface can be included in the boundary layer analysis. A brief review of heuristic (nonasymptotic) derivations of diffusion descriptions is also included in our discussion.
Beam-intensity limitations in linear accelerators
Jameson, R.A.
1981-01-01
Recent demand for high-intensity beams of various particles has renewed interest in the investigation of beam-current and beam-quality limits in linear RF and induction accelerators and beam-transport channels. Previous theoretical work is reviewed, and new work on beam matching and stability is outlined. There is a real need for extending the theory to handle the time evolution of beam emittance; some present work toward this goal is described. The role of physical constraints in channel intensity limitation is emphasized. Work on optimizing channel performance, particularly at low particle velocities, has resulted in major technological advances. The opportunities for combining such channels into arrays are discussed.
Space-charge limits in linear accelerators
Wangler, T.P.
1980-12-01
This report presents equations that allow an approximate evaluation of the limiting beam current for a large class of radio-frequency linear accelerators, which use quadrupole strong focusing. Included are the Alvarez, the Wideroe, and the radio-frequency quadrupole linacs. The limiting-current formulas are presented for both the longitudinal and the transverse degrees of freedom by assuming that the average space-charge force in the beam bunch arises from a uniformly distributed charge within an azimuthally symmetric three-dimensional ellipsoid. The Mathieu equation is obtained as an approximate, but general, form for the transverse equation of motion. The smooth-approximation method is used to obtain a solution and an expression for the transverse current limit. The form of the current-limit formulas for different linac constraints is discussed
A linear actuator for precision positioning of dual objects
Peng, Yuxin; Cao, Jie; Guo, Zhao; Yu, Haoyong
2015-01-01
In this paper, a linear actuator for precision positioning of dual objects is proposed based on a double friction drive principle using a single piezoelectric element (PZT). The linear actuator consists of an electromagnet and a permanent magnet, which are connected by the PZT. The electromagnet serves as object 1, and another object (object 2) is attached to the permanent magnet by the magnetic force. For positioning the dual objects independently, two different friction drive modes can be alternated by an on-off control of the electromagnet. When the electromagnet releases from the guide way, it can be driven by the impact friction force generated by the PZT. Otherwise, when the electromagnet clamps on the guide way and remains stationary, object 2 can be driven based on the principle of smooth impact friction drive. A prototype was designed and constructed, and experiments were carried out to test the basic performance of the actuator. It has been verified that with a compact size of 31 mm (L) × 12 mm (W) × 8 mm (H), the two objects can achieve long strokes on the order of several millimeters and high resolutions of several tens of nanometers. Since the proposed actuator allows independent movement of two objects by a single PZT, the actuator has the potential to be constructed compactly.
Accuracy Limitations in Optical Linear Algebra Processors
Batsell, Stephen Gordon
1990-01-01
One of the limiting factors in applying optical linear algebra processors (OLAPs) to real-world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective which would include noise generated in the multiplication and addition operations, noise from spatial variations across arrays, and from crosstalk. In this dissertation, we propose a second-order statistical model for an OLAP which incorporates all these system noise sources. We now apply this knowledge to determining upper and lower bounds on the achievable accuracy. This is accomplished by first translating the standard definition of accuracy used in electronic digital processors to analog optical processors. We then employ our second-order statistical model. Having determined a general accuracy equation, we consider limiting cases such as for ideal and noisy components. From the ideal case, we find the fundamental limitations on improving analog processor accuracy. From the noisy case, we determine the practical limitations based on both device and system noise sources. These bounds allow system trade-offs to be made both in the choice of architecture and in individual components in such a way as to maximize the accuracy of the processor. Finally, by determining the fundamental limitations, we show the system engineer when the accuracy desired can be achieved from hardware or architecture improvements and when it must come from signal pre-processing and/or post-processing techniques.
Metric preheating and limitations of linearized gravity
Bassett, Bruce A.; Tamburini, Fabrizio; Kaiser, David I.; Maartens, Roy
1999-01-01
During the preheating era after inflation, resonant amplification of quantum field fluctuations takes place. Recently it has become clear that this must be accompanied by resonant amplification of scalar metric fluctuations, since the two are united by Einstein's equations. Furthermore, this 'metric preheating' enhances particle production, and leads to gravitational rescattering effects even at linear order. In multi-field models with strong preheating (q >> 1), metric perturbations are driven non-linear, with the strongest amplification typically on super-Hubble scales (k → 0). This amplification is causal, being due to the super-Hubble coherence of the inflaton condensate, and is accompanied by resonant growth of entropy perturbations. The amplification invalidates the use of the linearized Einstein field equations, irrespective of the amount of fine-tuning of the initial conditions. This has serious implications on all scales, from large-angle cosmic microwave background (CMB) anisotropies to primordial black holes. We investigate the (q, k) parameter space in a two-field model, and introduce the time to non-linearity, t_nl, as the timescale for the breakdown of the linearized Einstein equations. t_nl is a robust indicator of resonance behavior, showing the fine structure in q and k that one expects from a quasi-Floquet system, and we argue that t_nl is a suitable generalization of the static Floquet index in an expanding universe. Backreaction effects are expected to shut down the linear resonances, but cannot remove the existing amplification, which threatens the viability of strong preheating when confronted with the CMB. Mode-mode coupling and turbulence tend to re-establish scale invariance, but this process is limited by causality and for small k the primordial scale invariance of the spectrum may be destroyed. We discuss ways to escape the above conclusions, including secondary phases of inflation and preheating solely to fermions. The exclusion principle ...
Fast and precise luminosity measurement at the International Linear Collider
Journal of physics, No. 6, December 2007, pp. 1151–1157.
Fast and precise luminosity measurement at the International Linear Collider
The detectors of the ILC will feature a calorimeter system in the very forward region. The system comprises mainly two electromagnetic calorimeters: LumiCal, which is dedicated to the measurement of the absolute luminosity with highest precision and BeamCal, which uses the energy deposition from beamstrahlung pairs ...
Nonlinear dynamics between linear and impact limits
Pilipchuk, Valery N; Wriggers, Peter
2010-01-01
This book examines nonlinear dynamic analyses based on the existence of strongly nonlinear but simple counterparts to the linear models and tools, and discusses possible applications to periodic elastic structures with non-smooth or discontinuous characteristics.
Noise limitations in optical linear algebra processors.
Batsell, S G; Jong, T L; Walkup, J F; Krile, T F
1990-05-10
A general statistical noise model is presented for optical linear algebra processors. A statistical analysis which includes device noise, the multiplication process, and the addition operation is undertaken. We focus on those processes which are architecturally independent. Finally, experimental results which verify the analytical predictions are also presented.
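As a hedged illustration of the kind of system noise model described above, the sketch below simulates an analog matrix-vector product with multiplicative (multiplication-process) noise and additive (detection) noise, then estimates the resulting relative output error by Monte Carlo. All noise levels are assumed for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_matvec(A, x, sigma_mult=0.01, sigma_add=0.001):
    """Analog matrix-vector product with per-multiplier gain noise and
    additive detector noise. Noise levels are illustrative assumptions."""
    gains = 1.0 + sigma_mult * rng.standard_normal(A.shape)
    y = (A * gains) @ x                       # noisy multiplications, exact sums
    return y + sigma_add * rng.standard_normal(y.shape)

A = rng.uniform(0, 1, size=(32, 32))
x = rng.uniform(0, 1, size=32)
exact = A @ x
errs = [np.abs(noisy_matvec(A, x) - exact).max() / exact.max() for _ in range(200)]
print(f"median relative error ~ {np.median(errs):.3f}")
```

A model like this separates architecture-independent noise (the per-multiplier gains) from detector noise, which is the distinction the abstract emphasizes.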
Accuracy, precision, and lower detection limits (a deficit reduction approach)
Bishop, C.T.
1993-01-01
The evaluation of the accuracy, precision and lower detection limits of the determination of trace radionuclides in environmental samples can become quite sophisticated and time consuming. This in turn could add significant cost to the analyses being performed. In the present method, a "deficit reduction approach" has been taken to keep costs low but at the same time provide defensible data. In order to measure the accuracy of a particular method, reference samples are measured over the time period that the actual samples are being analyzed. Using a Lotus spreadsheet, data are compiled and an average accuracy is computed. If pairs of reference samples are analyzed, then precision can also be evaluated from the duplicate data sets. The standard deviation can be calculated if the reference concentrations of the duplicates are all in the same general range. Laboratory blanks are used to estimate the lower detection limits. The lower detection limit is calculated as 4.65 times the standard deviation of a set of blank determinations made over a given period of time. A Lotus spreadsheet is again used to compile data, and LDLs over different periods of time can be compared.
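The spreadsheet recipe in the abstract maps directly onto a few lines of code. The QC data below are invented for illustration; only the formulas (mean recovery against a reference, standard deviation of duplicate results, LDL = 4.65 × standard deviation of the blanks) come from the abstract:

```python
import statistics as st

# Hypothetical QC data (made up for illustration; units arbitrary)
reference_known = 10.0
reference_measured = [9.8, 10.3, 9.9, 10.1, 10.2]
duplicate_pairs = [(9.9, 10.1), (10.2, 9.8), (10.0, 10.3)]
blanks = [0.02, -0.01, 0.03, 0.00, 0.01, -0.02]

# Accuracy: average recovery relative to the reference value
accuracy = st.mean(reference_measured) / reference_known
# Precision: std dev of the duplicate results (same concentration range)
precision = st.stdev([v for pair in duplicate_pairs for v in pair])
# Lower detection limit per the abstract: 4.65 x std dev of the blanks
ldl = 4.65 * st.stdev(blanks)

print(f"accuracy {accuracy:.3f}, precision {precision:.3f}, LDL {ldl:.3f}")
```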
Toward Precision Top Quark Measurements in e+e− Collisions at Linear Colliders
Van Der Kolk, Naomi
2017-01-01
Linear lepton colliders offer an excellent environment for precision measurements of the top quark. An overview is given of the current prospects on the measurement of the top quark mass, rare top quark decays and top quark couplings at the International Linear Collider (ILC) and the Compact Linear Collider (CLIC).
Super-linear Precision in Simple Neural Population Codes
Schwab, David; Fiete, Ila
2015-03-01
A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arises because it gives, through the Cramer-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well-known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
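For independent Poisson-spiking neurons with tuning curves f_i(s), the Fisher information is I(s) = Σ_i f_i′(s)²/f_i(s), and the Cramer-Rao bound mentioned above says any unbiased estimator has MSE ≥ 1/I(s). A small numerical sketch with Gaussian tuning curves; all parameter values are illustrative:

```python
import numpy as np

def fisher_info(s, centers, width, gain):
    """Fisher information of independent Poisson neurons with Gaussian
    tuning curves: I(s) = sum_i f_i'(s)^2 / f_i(s)."""
    f = gain * np.exp(-((s - centers) ** 2) / (2 * width ** 2))   # rates f_i(s)
    fprime = f * (centers - s) / width ** 2                        # df_i/ds
    return np.sum(fprime ** 2 / f)

centers = np.linspace(0, 10, 21)       # preferred stimuli of the population
I = fisher_info(5.0, centers, width=1.0, gain=20.0)
crlb = 1.0 / I                         # Cramer-Rao bound on unbiased MSE
print(f"FI = {I:.1f}, CRLB = {crlb:.2e}")
```

Note that I(s) scales linearly with the gain, so doubling the spike count halves the bound on the MSE; the abstract's point is that minimizing MSE directly can pick different tuning widths than maximizing I(s).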
High Precision Survey and Alignment of Large Linear Accelerators
Prenting, J
2004-01-01
For the future linear accelerator TESLA the demanded accuracy for the alignment of the components is 0.5 mm horizontal and 0.2 mm vertical, both on each 600 m section. Other accelerators require similar accuracies. These demands can not be fulfilled with open-air geodetic methods, mainly because of refraction. Therefore the RTRS (Rapid Tunnel Reference Surveyor), a measurement train performing overlapping multipoint alignment on a reference network is being developed. Two refraction-free realizations of this concept are being developed at the moment: the first one (GeLiS) measures the horizontal co-ordinates using stretched wires, combined with photogrammetric split-image sensors in a distance measurement configuration. In areas of the tunnel where the accelerator is following the earth curvature GeLiS measures the height using a new hydrostatic leveling system. The second concept (LiCAS) is based on laser straightness monitors (LSM) combined with frequency scanning interferometry (FSI) in an evacuated system...
Linear versus non-linear structural information limit in high-resolution transmission electron microscopy
Van Aert, S.; Chen, J.H.; Van Dyck, D.
2010-01-01
A widely used performance criterion in high-resolution transmission electron microscopy (HRTEM) is the information limit. It corresponds to the inverse of the maximum spatial object frequency that is linearly transmitted with sufficient intensity from the exit plane of the object to the image plane and is limited due to partial temporal coherence. In practice, the information limit is often measured from a diffractogram or from Young's fringes assuming a weak phase object scattering beyond the inverse of the information limit. However, for an aberration corrected electron microscope, with an information limit in the sub-angstrom range, weak phase objects are no longer applicable since they do not scatter sufficiently in this range. Therefore, one relies on more strongly scattering objects such as crystals of heavy atoms observed along a low index zone axis. In that case, dynamical scattering becomes important such that the non-linear and linear interaction may be equally important. The non-linear interaction may then set the experimental cut-off frequency observed in a diffractogram. The goal of this paper is to quantify both the linear and the non-linear information transfer in terms of closed form analytical expressions. Whereas the cut-off frequency set by the linear transfer can be directly related with the attainable resolution, information from the non-linear transfer can only be extracted using quantitative, model-based methods. In contrast to the historic definition of the information limit depending on microscope parameters only, the expressions derived in this paper explicitly incorporate their dependence on the structure parameters as well. In order to emphasize this dependence and to distinguish from the usual information limit, the expressions derived for the inverse cut-off frequencies will be referred to as the linear and non-linear structural information limit. The present findings confirm the well-known result that partial temporal coherence has
Design of a linear-motion dual-stage actuation system for precision control
Dong, W; Tang, J; ElDeeb, Y
2009-01-01
Actuators with high linear-motion speed, high positioning resolution and a long motion stroke are needed in many precision machining systems. In some current systems, voice coil motors (VCMs) are implemented for servo control. While the voice coil motors may provide the long motion stroke needed in many applications, the main obstacle that hinders the improvement of the machining accuracy and efficiency is their limited bandwidth. To fundamentally solve this issue, we propose to develop a dual-stage actuation system that consists of a voice coil motor that covers the coarse motion, and a piezoelectric stack actuator that induces the fine motion, thus enhancing the positioning accuracy. The focus of this present research is the mechatronics design and synthesis of the new actuation system. In particular, a flexure hinge based mechanism is developed to provide a motion guide and preload to the piezoelectric stack actuator that is serially connected to the voice coil motor. This mechanism is built upon parallel plane flexure hinges. A series of numerical and experimental studies are carried out to facilitate the system design and the model identification. The effectiveness of the proposed system is demonstrated through open-loop studies and preliminary closed-loop control practice. While the primary goal of this particular design is aimed at enhancing optical lens machining, the concept and approach outlined are generic and can be extended to a variety of applications
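The core idea of a dual-stage system, a long coarse stroke from the voice coil motor plus a fine correction from the piezo stack, can be sketched as a simple command allocation. The resolution and stroke numbers below are hypothetical, not the paper's hardware values, and this is not the paper's controller:

```python
def allocate(target_um, vcm_resolution_um=5.0, pzt_stroke_um=10.0):
    """Split a position command between a coarse voice coil motor (VCM)
    and a fine piezoelectric stack (PZT). Illustrative allocation only.

    The VCM moves in coarse steps; the PZT corrects the residual,
    provided the residual fits within half the piezo stroke."""
    coarse = round(target_um / vcm_resolution_um) * vcm_resolution_um
    fine = target_um - coarse
    if abs(fine) > pzt_stroke_um / 2:
        raise ValueError("residual exceeds piezo stroke")
    return coarse, fine

coarse, fine = allocate(123.4)
print(coarse, fine)
```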
Positioning of the rf potential minimum line of a linear Paul trap with micrometer precision
Herskind, Peter Fønss; Dantan, Aurélien; Albert, Magnus
2009-01-01
We demonstrate a general technique to achieve a precise radial displacement of the nodal line of the radiofrequency (rf) field in a linear Paul trap. The technique relies on the selective adjustment of the load capacitance of the trap electrodes, achieved through the addition of capacitors to the basic resonant rf circuit used to drive the trap. Displacements of up to ~100 µm with micrometer precision are measured using a combination of fluorescence images of ion Coulomb crystals and coherent coupling of such crystals to a mode of an optical cavity. The displacements are made without measurable...
Improved measurement linearity and precision for AMCW time-of-flight range imaging cameras.
Payne, Andrew D; Dorrington, Adrian A; Cree, Michael J; Carnegie, Dale A
2010-08-10
Time-of-flight range imaging systems utilizing the amplitude modulated continuous wave (AMCW) technique often suffer from measurement nonlinearity due to the presence of aliased harmonics within the amplitude modulation signals. Typically a calibration is performed to correct these errors. We demonstrate an alternative phase encoding approach that attenuates the harmonics during the sampling process, thereby improving measurement linearity in the raw measurements. This mitigates the need to measure the system's response or calibrate for environmental changes. In conjunction with improved linearity, we demonstrate that measurement precision can also be increased by reducing the duty cycle of the amplitude modulated illumination source (while maintaining overall illumination power).
voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.
Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K
2014-02-03
New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.
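The voom idea can be caricatured in a few lines: transform counts to log2-CPM, fit a mean-variance trend across genes, and use inverse predicted variances as observation weights. The toy below uses a polynomial trend where limma's voom uses lowess, and omits the rest of the limma pipeline, so it sketches the concept only:

```python
import numpy as np

rng = np.random.default_rng(1)

def voom_like_weights(counts, lib_sizes):
    """Toy version of the voom idea: log2-CPM transform, a mean-variance
    trend fitted across genes, inverse predicted variances as weights.
    Not limma's implementation (which fits the trend with lowess)."""
    logcpm = np.log2((counts + 0.5) / (lib_sizes + 1.0) * 1e6)
    mean = logcpm.mean(axis=1)                      # per-gene mean log2-CPM
    sqrt_sd = np.sqrt(logcpm.std(axis=1, ddof=1))   # voom models sqrt of sd
    trend = np.poly1d(np.polyfit(mean, sqrt_sd, 2))
    pred_sqrt_sd = np.maximum(trend(logcpm), 0.05)  # floor avoids blow-up
    return 1.0 / pred_sqrt_sd ** 4                  # weight = 1 / sd^2

counts = rng.poisson(rng.uniform(1, 200, size=(500, 1)) * np.ones((500, 6)))
libs = counts.sum(axis=0)                           # library sizes per sample
w = voom_like_weights(counts, libs)
print(w.shape)   # one weight per observation: (500, 6)
```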
Low-energy limit of the extended Linear Sigma Model
Divotgey, Florian [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Kovacs, Peter [Wigner Research Center for Physics, Hungarian Academy of Sciences, Institute for Particle and Nuclear Physics, Budapest (Hungary); GSI Helmholtzzentrum fuer Schwerionenforschung, ExtreMe Matter Institute, Darmstadt (Germany); Giacosa, Francesco [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Jan-Kochanowski University, Institute of Physics, Kielce (Poland); Rischke, Dirk H. [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); University of Science and Technology of China, Interdisciplinary Center for Theoretical Study and Department of Modern Physics, Hefei, Anhui (China)
2018-01-15
The extended Linear Sigma Model is an effective hadronic model based on the linear realization of chiral symmetry SU(N_f)_L × SU(N_f)_R, with (pseudo)scalar and (axial-)vector mesons as degrees of freedom. In this paper, we study the low-energy limit of the extended Linear Sigma Model (eLSM) for N_f = flavors by integrating out all fields except for the pions, the (pseudo-)Nambu-Goldstone bosons of chiral symmetry breaking. The resulting low-energy effective action is identical to that of Chiral Perturbation Theory (ChPT) after choosing a representative for the coset space generated by chiral symmetry breaking and expanding it in powers of (derivatives of) the pion fields. The tree-level values of the coupling constants of the effective low-energy action agree remarkably well with those of ChPT.
Stabilisation and precision pointing quadrupole magnets in the Compact Linear Collider (CLIC)
Janssens, Stef; Linde, Frank; van den Brand, Jo; Bertolini, Alessandro; Artoos, Kurt
This thesis describes the research done to provide stabilisation and precision positioning for the main beam quadrupole magnets of the Compact Linear Collider (CLIC). The introduction describes why new particle accelerators are needed to further the knowledge of our universe and why they are linear. A proposed future accelerator is the Compact Linear Collider (CLIC), which consists of a novel two-beam accelerator concept. Due to its linearity and the subsequent single pass at the interaction point, this new accelerator requires a very small beam size at the interaction point in order to increase collision effectiveness. One of the technological challenges in obtaining these small beam sizes is to keep the quadrupole magnets aligned and stable to within 1.5 nm integrated root mean square (r.m.s.) in the vertical and 5 nm integrated r.m.s. in the lateral direction. Additionally, there is a proposal to create an intentional offset (max. 50 nm every 20 ms with a precision of ±1 nm) for several quadrupole ma...
The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.
Pang, Haotian; Liu, Han; Vanderbei, Robert
2014-02-01
We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1-Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
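As a rough illustration of the CLIME estimator the package implements, the sketch below solves one column of the CLIME linear program with SciPy's generic LP solver rather than fastclime's parametric simplex (the regularization value `lam=0.1` and the toy data are arbitrary choices for this example):

```python
import numpy as np
from scipy.optimize import linprog

def clime_column(S, i, lam):
    """Solve one CLIME column:  min ||b||_1  s.t.  ||S b - e_i||_inf <= lam.
    Splitting b = u - v with u, v >= 0 turns this into a standard-form LP."""
    p = S.shape[0]
    e = np.zeros(p); e[i] = 1.0
    c = np.ones(2 * p)                 # objective: sum(u) + sum(v) = ||b||_1
    A = np.hstack([S, -S])             # S b = S u - S v
    A_ub = np.vstack([A, -A])          # elementwise |S b - e_i| <= lam
    b_ub = np.concatenate([lam + e, lam - e])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v

# Toy example: with well-conditioned Gaussian data, the estimate approximates
# the true inverse covariance (here the identity).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
S = np.cov(X, rowvar=False)
Omega = np.column_stack([clime_column(S, i, lam=0.1) for i in range(4)])
```

Unlike fastclime, this solves a single regularization value; the parametric simplex traces the entire piecewise-linear path in one sweep.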
General Linearized Theory of Quantum Fluctuations around Arbitrary Limit Cycles.
Navarrete-Benlloch, Carlos; Weiss, Talitha; Walter, Stefan; de Valcárcel, Germán J
2017-09-29
The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.
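The classical test bed named in the abstract can be reproduced in a few lines: the Van der Pol oscillator spirals onto a limit cycle of amplitude approximately 2 for small nonlinearity, and it is around this time-dependent long-time solution that the paper's scheme linearizes. A minimal numerical sketch (parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical Van der Pol oscillator:  x'' - mu*(1 - x^2)*x' + x = 0.
# For small mu the trajectory relaxes onto a limit cycle of amplitude ~2,
# a time-dependent attractor that standard linearization cannot handle.
mu = 0.1
f = lambda t, y: [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]
sol = solve_ivp(f, (0.0, 200.0), [0.1, 0.0], max_step=0.05)

# Amplitude after transients have died out
amplitude = np.max(np.abs(sol.y[0][sol.t > 150]))
print(round(amplitude, 1))   # → 2.0
```

The quantum treatment in the paper adds Gaussian fluctuations around this orbit, tied to the broken time-translation symmetry; the snippet only exhibits the underlying classical cycle.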
From linear optical quantum computing to Heisenberg-limited interferometry
Lee, Hwang; Kok, Pieter; Williams, Colin P; Dowling, Jonathan P
2004-01-01
The working principles of linear optical quantum computing are based on photodetection, namely, projective measurements. The use of photodetection can provide efficient nonlinear interactions between photons at the single-photon level, which is otherwise technically problematic. We report an application of such a technique to prepare quantum correlations as an important resource for Heisenberg-limited optical interferometry, where the sensitivity of phase measurements can be improved beyond the usual shot-noise limit. Furthermore, using such nonlinearities, optical quantum non-demolition measurements can now be carried out easily at the single-photon level.
Dongxu Ren
2016-04-01
A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect that periodically superposes the light intensity from different locations of pitches in the mask to produce a consistent energy distribution at a specific wavelength, so that the accuracy of a linear scale can be improved by averaging the pitch over different step distances. The method's theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, the static positioning error, and the lithography lens focal plane alignment error; these affect pitch uniformity less than in the common splicing process for linear scale projection lithography. Analysis confirmed that increasing the number of repeated exposures of a single stripe improves accuracy, as does adjusting the exposure spacing to achieve a set proportion of black and white stripes. The experimental results confirm that the multi-repeated photolithography method readily achieves a pitch accuracy of 43 nm at any 10 locations over 1 m, with a whole-length accuracy of the linear scale below 1 µm/m.
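The averaging effect described above can be sketched numerically: a periodic (sine-wave) pitch error on the mask cancels when repeated exposures sample it at offsets that uniformly span one error period. All values below are illustrative, not the paper's:

```python
import numpy as np

# Pitch-error model: each exposure of a stripe sees the same sine-wave pitch
# error on the mask, shifted by a different step offset.  Averaging K exposures
# whose offsets uniformly sample one error period cancels the periodic part.
period = 2.0e-6          # 2-um sine-wave error period (from the abstract)
amp = 0.05e-6            # hypothetical error amplitude
K = 8                    # number of repeated exposures of a single stripe

x = np.linspace(0.0, 1.0e-3, 1000)            # positions along the scale
offsets = period * np.arange(K) / K           # uniformly spaced step distances
errors = [amp * np.sin(2 * np.pi * (x + d) / period) for d in offsets]
avg_error = np.mean(errors, axis=0)

print(np.max(np.abs(avg_error)))   # many orders of magnitude below 'amp'
```

Equally spaced phases of a sine sum to zero exactly, which is the idealized core of the "average homogenization effect"; the paper's error models add the non-periodic contributions this sketch omits.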
Regularized semiclassical limits: Linear flows with infinite Lyapunov exponents
Athanassoulis, Agissilaos; Katsaounis, Theodoros; Kyza, Irene
2016-01-01
Semiclassical asymptotics for Schrödinger equations with non-smooth potentials give rise to ill-posed formal semiclassical limits. These problems have attracted a lot of attention in the last few years, as a proxy for the treatment of eigenvalue crossings, i.e. general systems. It has recently been shown that the semiclassical limit for conical singularities is in fact well-posed, as long as the Wigner measure (WM) stays away from singular saddle points. In this work we develop a family of refined semiclassical estimates, and use them to derive regularized transport equations for saddle points with infinite Lyapunov exponents, extending the aforementioned recent results. In the process we answer a related question posed by P.L. Lions and T. Paul in 1993. If we consider more singular potentials, our rigorous estimates break down. To investigate whether conical saddle points, such as -|x|, admit a regularized transport asymptotic approximation, we employ a numerical solver based on a posteriori error control. Thus rigorous upper bounds for the asymptotic error in concrete problems are generated. In particular, specific phenomena which render invalid any regularized transport for -|x| are identified and quantified. In that sense our rigorous results are sharp. Finally, we use our findings to formulate a precise conjecture for the condition under which conical saddle points admit a regularized transport solution for the WM. © 2016 International Press.
Improvements in RIMS Isotopic Precision: Application to in situ atom-limited analyses
Levine, J.; Stephan, T.; Savina, M.; Pellin, M.
2009-01-01
Resonance ionization mass spectrometry (RIMS) offers high sensitivity and elemental selectivity in microanalysis, but the isotopic precision attainable by this technique has been limited. Here we report instrumental modifications to improve the precision of RIMS isotope ratio measurements. Special attention must be paid to eliminating pulse-to-pulse variations in the time-of-flight mass spectrometer through which the photoions travel, and resonant excitation schemes must be chosen such that the resonance transitions can be substantially power-broadened to cover the isotope shifts. We report resonance ionization measurements of chromium isotope ratios with statistics-limited precision better than 1%.
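The "statistics-limited" precision mentioned above is set by Poisson counting statistics; a minimal sketch of that limit for an isotope ratio (the count totals are hypothetical, not the paper's):

```python
import math

def ratio_precision(n_major, n_minor):
    """Poisson counting-statistics limit on an isotope-ratio measurement:
    relative standard deviation of R = n_minor / n_major for independent
    counts, via error propagation on two Poisson variables."""
    return math.sqrt(1.0 / n_major + 1.0 / n_minor)

# Hypothetical numbers: 1e6 counts of the major isotope, 1e5 of the minor.
rel = ratio_precision(1_000_000, 100_000)
print(f"{100 * rel:.2f}%")   # → 0.33%
```

The minor-isotope count dominates the error budget, which is why atom-limited samples push instruments toward higher useful yield rather than longer counting.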
Linear perspective limitations on virtual reality and realistic displays
Temme, Leonard A.
2007-04-01
The visual images of the natural world, with their immediate intuitive appeal, seem like the logical gold standard for evaluating displays. After all, if photorealistic displays look increasingly like the real world, what could be better? Part of the shortcoming of this intuitive appeal is its naivete. Realism itself is full of potential illusions that we do not notice because, most of the time, realism is good enough for our everyday tasks. But when confronted with tasks that go beyond those for which our visual system has evolved, we may be blindsided. If we survive, blind to our erroneous perceptions and oblivious to our good fortune at having survived, we will be no wiser next time. Realistic displays depend on linear perspective (LP), the mathematical mapping of three dimensions onto two. Although LP is a seductively elegant system that predicts results with defined mathematical procedures, artists do not stick to those procedures, not because they are math-phobic but because LP procedures, if followed explicitly, produce ugly, limited, and distorted images. If artists bother with formal LP procedures at all, they invariably temper the renderings by eye. The present paper discusses LP assumptions, limitations, and distortions, and provides examples of kluges that cover some of these shortcomings. It is important to consider the limitations of LP so that neither naive assumptions nor the seductive power of LP guides our thinking or expectations unrealistically as we consider its possible uses in advanced visual displays.
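The mathematics the abstract refers to is compact enough to sketch: LP is the pinhole mapping x' = f·X/Z, and one of its systematic distortions is that an object subtending a fixed visual angle stretches as sec²(θ) off-axis (the "marginal distortion" that artists quietly correct by eye):

```python
import math

# Linear perspective maps a 3-D point (X, Y, Z) onto the picture plane at
# focal distance f:  x' = f*X/Z,  y' = f*Y/Z.
def project(X, Y, Z, f=1.0):
    return f * X / Z, f * Y / Z

# Marginal distortion: the image extent of a small object subtending a fixed
# visual angle grows like the derivative of tan(theta), i.e. sec^2(theta).
d = 1e-4
for angle_deg in (0, 30, 60):
    th = math.radians(angle_deg)
    stretch = (math.tan(th + d) - math.tan(th)) / d
    print(angle_deg, round(stretch, 2))
```

At 60 degrees off-axis the stretch factor is already 4, which is why strictly-followed LP renderings of wide fields of view look distorted.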
Loss-induced limits to phase measurement precision with maximally entangled states
Rubin, Mark A.; Kaushik, Sumanth
2007-01-01
The presence of loss limits the precision of an approach to phase measurement using maximally entangled states, also referred to as NOON states. A calculation using a simple beam-splitter model of loss shows that, for all nonzero values L of the loss, phase measurement precision degrades with increasing number N of entangled photons for N sufficiently large. For L above a critical value of approximately 0.785, phase measurement precision degrades with increasing N for all values of N. For L near zero, phase measurement precision improves with increasing N down to a limiting precision of approximately 1.018L radians, attained at N approximately equal to 2.218/L, and degrades as N increases beyond this value. Phase measurement precision with multiple measurements and a fixed total number of photons N_T is also examined. For L above a critical value of approximately 0.586, the ratio of phase measurement precision attainable with NOON states to that attainable by conventional methods using unentangled coherent states degrades with increasing N, the number of entangled photons employed in a single measurement, for all values of N. For L near zero this ratio is optimized by using approximately N = 1.279/L entangled photons in each measurement, yielding a precision of approximately 1.340√(L/N_T) radians.
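The existence of an optimal N can be seen in a deliberately simplified loss model (an assumption for illustration, not the paper's full beam-splitter calculation): if a NOON state only contributes when all N photons survive, the ideal Heisenberg precision 1/N is degraded by the survival amplitude, and an optimum appears near N ~ 2/L:

```python
import math

def noon_precision(N, L):
    """Toy loss model: an N-photon NOON state contributes only if all N
    photons survive (probability (1-L)**N), degrading the ideal 1/N
    Heisenberg phase precision by the surviving amplitude."""
    return 1.0 / (N * (1.0 - L) ** (N / 2.0))

L = 0.01
best_N = min(range(1, 1000), key=lambda n: noon_precision(n, L))
# The optimum sits near N ~ 2/L, reproducing the paper's qualitative scaling;
# the exact constants (2.218/L, 1.018L) come from the full calculation.
print(best_N)
```

Beyond the optimum, the exponential loss penalty overwhelms the 1/N gain, which is the mechanism behind the degradation for large N described above.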
Lyerly Herbert K
2008-03-01
Background: Single-cell assays of immune function are increasingly used to monitor T cell responses in immunotherapy clinical trials. Standardization and validation of such assays are therefore important to interpretation of the clinical trial data. Here we assess the levels of intra-assay, inter-assay, and inter-operator precision, as well as linearity, of CD8+ T cell IFNγ-based ELISPOT and cytokine flow cytometry (CFC) assays, as well as tetramer assays. Results: Precision was measured in cryopreserved PBMC with a low, medium, or high response level to a CMV pp65 peptide or peptide mixture. Intra-assay precision was assessed using 6 replicates per assay; inter-assay precision was assessed by performing 8 assays on different days; and inter-operator precision was assessed using 3 different operators working on the same day. Percent CV values ranged from 4% to 133% depending upon the assay and response level. Linearity was measured by diluting PBMC from a high responder into PBMC from a non-responder, and yielded R2 values from 0.85 to 0.99 depending upon the assay and antigen. Conclusion: These data provide target values for precision and linearity of single-cell assays for those wishing to validate these assays in their own laboratories. They also allow comparison of the precision and linearity of ELISPOT, CFC, and tetramer assays across a range of response levels. There was a trend toward tetramer assays showing the highest precision, followed closely by CFC, and then ELISPOT, while all three assays had similar linearity. These findings are contingent upon the use of optimized protocols for each assay.
Effect of the new carbon fiber board of Elekta Precise linear accelerator on the radiation dose
Gan Jiaying; Hu Yinxiang; Luo Yuanqiang; Hong Wei; Wang Zhiyong; Lu Bing; Jin Feng
2012-01-01
Objective: To investigate the dosimetric influence of the pure carbon fiber treatment tabletop of the new Elekta Precise linear accelerator in radiotherapy. Methods: The source-axis distance (SAD) technique was employed for the measurement. Two groups of fields were set up, both SAD opposed portals (one passing through the tabletop, the other not). A PTW electrometer and a 0.6 cm³ Farmer ionization chamber were used for the comparison measurement. Dose attenuation was then calculated for the main table board, the extended body board, the extended board for head, neck and shoulders, and the joints of these boards. Results: At 6 MV, the dose attenuations were: 1.4% - 7.2% at the main treatment table board; 2.8% - 38.7%, 1.4% - 30.1%, 1.5% - 20.8% and 1.4% - 11.2%, respectively, at distances of 1, 4, 7 and 8 cm from the joint of the main table board; 0.5% - 5.0% at the extended body board; 4.7% - 15.4% at a distance of 1 cm from the joint of the extended body board; 0.5% - 3.3% at the neck position of the extended board for head, neck and shoulders; 5.3% - 16.7% at the shoulder positions; and 6.8% - 30.4% at the joint between the extended boards and the main table board. Conclusions: The dose attenuations of the new linear accelerator's pure carbon fiber treatment tabletop vary with location; considerably higher attenuations are observed at the table board joints than elsewhere. (authors)
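The attenuation figures above come from paired opposed-portal readings; a trivial sketch of the calculation (the electrometer readings are hypothetical, not values from the paper):

```python
def attenuation_percent(reading_open, reading_through):
    """Tabletop attenuation from opposed-portal measurements: the fractional
    drop in ionization-chamber signal when the beam passes through the
    couch top, expressed as a percentage."""
    return 100.0 * (1.0 - reading_through / reading_open)

# Hypothetical readings (nC): open field vs. beam through the tabletop.
print(round(attenuation_percent(10.00, 9.28), 1))   # → 7.2
```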
LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor
Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram
2007-09-01
Implementing the sum-product algorithm in an FPGA with an embedded processor invites a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs computer arithmetic with significantly less precision than the standard (e.g. integer, floating-point) operations of general-purpose processors. Using synthesis, targeting a 3,168 LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help characterize the full capacity and performance of an FPGA-based coprocessor.
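The precision/coding-gain tradeoff described above can be sketched on a single sum-product update: the exact check-node "tanh rule" compared against the same rule fed with uniformly quantized LLRs (the bit width and clip level below are illustrative stand-ins for the FPGA's reduced precision, not the paper's design):

```python
import math

def check_node_exact(llrs):
    """Exact sum-product check-node update (tanh rule) over the LLRs of the
    other edges incident on a check node."""
    prod = 1.0
    for l in llrs:
        prod *= math.tanh(l / 2.0)
    return 2.0 * math.atanh(prod)

def quantize(llr, bits=4, clip=8.0):
    """Uniform fixed-point quantization with saturation: a simple model of
    limited-precision message storage."""
    step = 2.0 * clip / (2 ** bits - 1)
    return max(-clip, min(clip, round(llr / step) * step))

llrs = [1.7, -0.4, 2.9]
exact = check_node_exact(llrs)
approx = check_node_exact([quantize(l) for l in llrs])
print(abs(exact - approx))   # per-update error introduced by quantization
```

Iterated over many decoding rounds, such per-update errors accumulate into the coding-gain loss the abstract reports, while the narrower datapath is what buys the speed.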
Precise numerical results for limit cycles in the quantum three-body problem
Mohr, R.F.; Furnstahl, R.J.; Hammer, H.-W.; Perry, R.J.; Wilson, K.G.
2006-01-01
The study of the three-body problem with short-range attractive two-body forces has a rich history going back to the 1930s. Recent applications of effective field theory methods to atomic and nuclear physics have produced a much improved understanding of this problem, and we elucidate some of the issues using renormalization group ideas applied to precise nonperturbative calculations. These calculations provide 11-12 digits of precision for the binding energies in the infinite cutoff limit. The method starts with this limit as an approximation to an effective theory and allows cutoff dependence to be systematically computed as an expansion in powers of inverse cutoffs and logarithms of the cutoff. Renormalization of three-body bound states requires a short-range three-body interaction, with a coupling that is governed by a precisely mapped limit cycle of the renormalization group. Additional three-body irrelevant interactions must be determined to control subleading dependence on the cutoff, and this control is essential for an effective field theory since the continuum limit is not likely to match physical systems (e.g., few-nucleon bound and scattering states at low energy). Leading order calculations precise to 11-12 digits allow clear identification of subleading corrections, but these corrections have not been computed.
Employing Theories Far beyond Their Limits - Linear Dichroism Theory.
Mayerhöfer, Thomas G
2018-05-15
Using linearly polarized light, it is possible in the case of ordered structures, such as stretched polymers or single crystals, to determine the orientation of the transition moments of electronic and vibrational transitions. This not only helps to resolve overlapping bands, but also to assign the symmetry species of the transitions and to elucidate the structure. To perform the spectral evaluation quantitatively, an approach sometimes called "Linear Dichroism Theory" is very often used. This approach links the relative orientation of the transition moment and polarization direction to the quantity absorbance. This linkage is highly questionable for several reasons. First of all, absorbance is a quantity that is by its definition not compatible with Maxwell's equations. Furthermore, absorbance seems not to be the quantity which is generally compatible with linear dichroism theory. In addition, linear dichroism theory disregards that it is not only the angle between transition moment and polarization direction, but also the angle between sample surface and transition moment, that influences band shape and intensity. Accordingly, the often-invoked "magic angle" has never existed, and the orientation distribution influences spectra to a much higher degree than if linear dichroism theory held strictly. A last point that is completely ignored by linear dichroism theory is the fact that partially oriented or randomly oriented samples usually consist of ordered domains. It is their size relative to the wavelength of light that can also greatly influence a spectrum. All these findings can help to elucidate orientation to a much higher degree by optical methods than currently thought possible by the users of linear dichroism theory. Hence, it is the goal of this contribution to point out these shortcomings of linear dichroism theory to its users, to stimulate efforts to overcome the long-lasting stagnation of this important field. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA
Kimura, Akihide; Gao, Wei; Lijiang, Zeng
2010-01-01
This paper presents measurement of the X-directional position and the Z-directional out-of-straightness of a precision linear air-bearing stage with a two-degree-of-freedom (two-DOF) linear encoder, which is an optical displacement sensor for simultaneous measurement of the two-DOF displacements. The two-DOF linear encoder is composed of a reflective-type one-axis scale grating and an optical sensor head. A reference grating is placed perpendicular to the scale grating in the optical sensor head. Two-DOF displacements can be obtained from interference signals generated by the ±1 order diffracted beams from two gratings. A prototype two-DOF linear encoder employing the scale grating with the grating period of approximately 1.67 µm measured the X-directional position and the Z-directional out-of-straightness of the linear air-bearing stage
An electron beam linear scanning mode for industrial limited-angle nano-computed tomography
Wang, Chengxiang; Zeng, Li; Yu, Wei; Zhang, Lingli; Guo, Yumeng; Gong, Changcheng
2018-01-01
Nano-computed tomography (nano-CT) is a high-spatial-resolution, non-destructive research technique that utilizes X-rays to study the inner structure of small objects; it has been widely applied in biomedical research, electronic technology, geology, material sciences, etc. High resolution imaging with a traditional nano-CT scanning model requires very high mechanical precision and stability of the object manipulator, which are difficult to achieve when the scanned object is continuously rotated. To reduce the scanning time and attain stable, high resolution imaging in industrial non-destructive testing, we study an electron beam linear scanning mode for a nano-CT system that avoids the mechanical vibration and object movement caused by continuous rotation of the object. Furthermore, to further reduce the scanning time and to study how small the scanning range can be made while retaining acceptable spatial resolution, an alternating iterative algorithm based on ℓ0 minimization is applied to the limited-angle nano-CT reconstruction problem with the electron beam linear scanning mode. The experimental results confirm the feasibility of the electron beam linear scanning mode of the nano-CT system.
Lo, Ching F.
1999-01-01
The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the method has been demonstrated by applying it to a set of wind tunnel test data in constructing a response surface and estimating its precision intervals.
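For the linear-regression backbone, the confidence and prediction intervals mentioned above have a standard textbook construction; a minimal sketch for simple linear regression (the paper combines such intervals with neural-network response surfaces, which this example does not attempt):

```python
import numpy as np
from scipy import stats

def precision_intervals(x, y, x0, alpha=0.05):
    """Confidence interval (for the mean response) and prediction interval
    (for a new observation) of a simple linear regression, evaluated at x0."""
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)
    resid = y - (b0 + b1 * x)
    s = np.sqrt(np.sum(resid ** 2) / (n - 2))      # residual standard error
    sxx = np.sum((x - x.mean()) ** 2)
    t = stats.t.ppf(1 - alpha / 2, n - 2)
    half_mean = t * s * np.sqrt(1 / n + (x0 - x.mean()) ** 2 / sxx)
    half_pred = t * s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / sxx)
    y0 = b0 + b1 * x0
    return (y0 - half_mean, y0 + half_mean), (y0 - half_pred, y0 + half_pred)

# Synthetic data (illustrative): y = 2x + 1 plus noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=50)
conf, pred = precision_intervals(x, y, 5.0)
print(conf[0] > pred[0] and conf[1] < pred[1])   # prediction interval is wider
```

The prediction interval always encloses the confidence interval, since it adds the irreducible observation noise to the uncertainty of the fitted mean.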
Trace element analysis by EPMA in geosciences: detection limit, precision and accuracy
Batanova, V. G.; Sobolev, A. V.; Magnin, V.
2018-01-01
Use of the electron probe microanalyser (EPMA) for trace element analysis has increased over the last decade, mainly because of improved stability of spectrometers and the electron column when operated at high probe current; the development of new large-area crystal monochromators and ultra-high count rate spectrometers; full integration of energy-dispersive / wavelength-dispersive X-ray spectrometry (EDS/WDS) signals; and the development of powerful software packages. For phases that are stable under a dense electron beam, the detection limit and precision can be brought down to the ppm level by using high acceleration voltage and beam current combined with long counting time. Data on 10 elements (Na, Al, P, Ca, Ti, Cr, Mn, Co, Ni, Zn) in olivine obtained on a JEOL JXA-8230 microprobe with tungsten filament show that the detection limit decreases in proportion to the inverse square root of counting time and probe current. For all elements equal to or heavier than phosphorus (Z = 15), the detection limit decreases with increasing accelerating voltage. The analytical precision for minor and trace elements analysed in olivine at 25 kV accelerating voltage and 900 nA beam current is 4 - 18 ppm (2 standard deviations of repeated measurements of the olivine reference sample) and is similar to the detection limit of the corresponding elements. Accurate trace element analysis requires careful estimation of the background, and consideration of sample damage under the beam and of secondary fluorescence from phase boundaries. The development and use of matrix reference samples with well-characterised trace elements of interest are important for monitoring and improving accuracy. An evaluation of the accuracy of trace element analyses in olivine has been made by comparing EPMA data for new reference samples with data obtained by different in-situ and bulk analytical methods in six different laboratories worldwide. For all elements, the measured concentrations in the olivine reference sample
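The inverse-square-root scaling of the detection limit follows directly from counting statistics: background counts grow with dose (probe current × counting time), so the smallest detectable signal above background shrinks as 1/√(t·I). A minimal sketch (the constant `k` is an arbitrary placeholder for instrument- and matrix-dependent factors):

```python
import math

def detection_limit(t, i_beam, k=1.0):
    """Counting-statistics model of the EPMA detection limit: background
    counts scale with dose (t * I), so DL ~ k / sqrt(t * I).
    k lumps together instrument and matrix factors (hypothetical here)."""
    return k / math.sqrt(t * i_beam)

# Quadrupling the dose (4x counting time at the same current) halves the DL:
print(detection_limit(10, 900) / detection_limit(40, 900))   # → 2.0
```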
Dayananda, S.; Kinhikar, R.A.; Saju, Sherley; Deshpande, D.D.; Jalali, R.; Sarin, R.; Shrivastava, S.K.; Dinshaw, K.A.
2003-01-01
Stereotactic Radiosurgery (SRS) is an advance in precision radiotherapy in which a stereotactically guided, localized high dose is delivered to the lesion (target) in a single fraction while sparing the surrounding normal tissue. Radiosurgery has been used to treat a variety of benign and malignant lesions, as well as functional disorders of the brain, such as arteriovenous malformation (AVM), acoustic neuroma, solitary primary brain tumor, single metastasis, pituitary adenoma, etc.
Aeroelastic Limit-Cycle Oscillations resulting from Aerodynamic Non-Linearities
van Rooij, A.C.L.M.
2017-01-01
Aerodynamic non-linearities, such as shock waves, boundary layer separation or boundary layer transition, may cause an amplitude limitation of the oscillations induced by the fluid flow around a structure. These aeroelastic limit-cycle oscillations (LCOs) resulting from aerodynamic non-linearities
Lu Li; Yang Yiren
2009-01-01
The responses and limit cycle flutter of a plate-type structure with cubic stiffness in viscous flow were studied. The continuous system was discretized using the Galerkin method. The equivalent linearization concept was applied to predict the ranges of limit cycle flutter velocities. The coupled map of flutter amplitude, equivalent linear stiffness, and critical velocity was used to analyze the stability of the limit cycle flutter. The theoretical results agree well with the results of numerical integration, which indicates that the equivalent linearization concept is applicable to the analysis of limit cycle flutter of plate-type structures. (authors)
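The equivalent linearization step for a cubic stiffness has a classical closed form: for a restoring force k·x + α·x³ under harmonic motion x = A·sin(ωt), projecting the nonlinear force onto the fundamental gives an equivalent stiffness k + (3/4)·α·A². A short numerical check of that formula (values of k, α, A are arbitrary):

```python
import numpy as np

def equivalent_stiffness(k, alpha, A, n=10000):
    """Equivalent linearization of f(x) = k*x + alpha*x**3 for x = A*sin(wt):
    the Fourier projection of the nonlinear force onto the fundamental
    harmonic, divided by the amplitude A."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x = A * np.sin(t)
    f = k * x + alpha * x ** 3
    return (2.0 / n) * np.sum(f * np.sin(t)) / A

k, alpha, A = 1.0, 0.5, 2.0
print(round(equivalent_stiffness(k, alpha, A), 4))   # → 2.5  (= k + 0.75*alpha*A**2)
```

Because the equivalent stiffness rises with amplitude, each flutter amplitude maps to its own critical velocity, which is exactly the amplitude-stiffness-velocity coupling the abstract analyzes.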
High Precision Piezoelectric Linear Motors for Operations at Cryogenic Temperatures and Vacuum
Wong, D.; Carman, G.; Stam, M.; Bar-Cohen, Y.; Sen, A.; Henry, P.; Bearman, G.; Moacanin, J.
1995-01-01
The Jet Propulsion Laboratory evaluated the use of an electromechanical device for optically positioning a mirror system during the pre-project phase of the Pluto-Fast-Flyby (PFF) mission. The device under consideration was a piezoelectric-driven linear motor, actuated by a time-varying electric field, which induces displacements ranging from submicrons to millimeters with positioning accuracy within nanometers. Using a control package, the mirror system provides image motion compensation and mosaicking capabilities. While this device offers unique advantages, there were concerns pertaining to its operational capabilities for the PFF mission, including irradiation effects and thermal concerns. A literature study indicated that irradiation effects will not significantly impact the linear motor's operational characteristics. Thermal concerns, on the other hand, necessitated an in-depth study.
A METHOD FOR SELF-CALIBRATION IN SATELLITE WITH HIGH PRECISION OF SPACE LINEAR ARRAY CAMERA
W. Liu
2016-06-01
At present, the on-orbit calibration of the geometric parameters of a space surveying camera is usually processed with data from a ground calibration field after capturing the images. The entire process is complicated and lengthy and cannot monitor and calibrate the geometric parameters in real time. On the basis of a large number of on-orbit calibrations, we found that owing to the influence of many factors, e.g., weather, it is often difficult to capture images of the ground calibration field; thus, regular calibration using field data cannot be ensured. This article proposes a real-time self-calibration method for a space linear array camera on a satellite using the optical autocollimation principle. A collimating light source and a small matrix-array CCD device are installed inside the load system of the satellite, sharing the same light path as the linear array camera. By extracting the location changes of the cross marks on the matrix-array CCD, the real-time variations in the focal length and angle parameters of the linear array camera can be determined. The on-orbit status of the camera is rapidly obtained using this method. On the one hand, the camera's parameter changes can be tracked accurately and its attitude adjusted in a timely manner to ensure optimal photography; on the other hand, self-calibration of the camera aboard the satellite can be realized quickly, which improves the efficiency and reliability of photogrammetric processing.
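The autocollimation geometry behind the method is compact: a mirror tilt θ deflects the returned beam by 2θ, so the cross-mark image on the focal-plane CCD shifts by d = 2·f·θ, and inverting this gives the angular change from the measured spot shift. A minimal sketch (the numbers are hypothetical, not from the article):

```python
def tilt_from_spot_shift(shift_um, focal_mm):
    """Optical autocollimation: a mirror tilt theta deflects the returned
    beam by 2*theta, moving the focal-plane spot by d = 2*f*theta.
    Returns the tilt in arcseconds for a measured CCD spot shift."""
    theta_rad = (shift_um * 1e-6) / (2.0 * focal_mm * 1e-3)
    return theta_rad * 206265.0   # radians -> arcseconds

# Hypothetical: a 1-um spot shift with a 500-mm collimator focal length.
print(round(tilt_from_spot_shift(1.0, 500.0), 2))   # → 0.21 arcsec
```

The factor of 2 from reflection doubles the sensitivity, which is what makes sub-arcsecond monitoring of the camera's angular parameters feasible with a small matrix-array CCD.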
Influence of a high vacuum on the precise positioning using an ultrasonic linear motor.
Kim, Wan-Soo; Lee, Dong-Jin; Lee, Sun-Kyu
2011-01-01
This paper presents an investigation of an ultrasonic linear motor stage for use in a high vacuum environment. The slider table is driven by a hybrid bolt-clamped Langevin-type ultrasonic linear motor, which is excited at the different natural frequencies of its lateral and longitudinal modes. In general, friction behavior in a vacuum differs from that at atmospheric pressure, and this difference significantly affects the performance of the ultrasonic linear motor. In this paper, to consistently provide stable, high output power in a high vacuum, frequency matching was conducted. Moreover, to achieve fine control performance in the vacuum environment, a modified nominal characteristic trajectory following control method was adopted. Finally, the stage was operated under high vacuum conditions, and the operating performance was investigated and compared with that of a conventional PI compensator. As a result, robust positioning with nanometer-level accuracy was accomplished under high vacuum conditions.
Precise and fast beam energy measurement at the international linear collider
Viti, Michele
2010-02-01
The International Linear Collider (ILC) is an electron-positron collider with a center-of-mass energy between 200 and 500 GeV and a peak luminosity of 2 × 10^34 cm^-2 s^-1. For the physics program at this machine, excellent bunch-by-bunch control of the beam energy is mandatory, and several techniques are foreseen at the ILC to meet this requirement. Energy spectrometers upstream and downstream of the electron/positron interaction point were proposed, and the present default option for the upstream spectrometer is a beam-position-monitor-based (BPM-based) spectrometer. In 2006/2007, a prototype of such a device was commissioned at the End Station A beam line at the Stanford Linear Accelerator Center (SLAC) in order to study its performance and reliability. In addition, a novel method based on laser Compton backscattering has been proposed, since, as proved at the Large Electron-Positron Collider (LEP) and the Stanford Linear Collider (SLC), complementary methods are necessary to cross-check the results of the BPM-based spectrometer. In this thesis, an overview of the experiment at End Station A is given, with emphasis on the performance of the magnets in the chicane and first energy-resolution estimates. The novel Compton backscattering method is also discussed in detail and found to be very promising: it has the potential to bring the beam energy resolution well below the requirement of ΔE_b/E_b = 10^-4.
Ru, Changhai; Chen, Liguo; Shao, Bing; Rong, Weibin; Sun, Lining
2008-01-01
Piezoelectric actuators have traditionally been driven by voltage amplifiers. When driven at large voltages these actuators exhibit a significant amount of distortion, known as hysteresis, which may reduce the stability robustness of the system in feedback control applications. Piezoelectric transducers are known to exhibit less hysteresis when driven with current or charge rather than voltage. Despite this advantage, such methods have found little practical application due to the poor low-frequency response of present current and charge driver designs. In this paper, a new piezoelectric amplifier based on current switching is presented which can reduce hysteresis. Special circuits and a hybrid control algorithm realize quick and precise positioning. Experimental results demonstrate that the amplifier can be used for dynamic and static applications and that low-frequency bandwidths can also be achieved.
Precision, accuracy and linearity of radiometer EML 105 whole blood metabolite biosensors.
Cobbaert, C; Morales, C; van Fessem, M; Kemperman, H
1999-11-01
The analytical performance of a new whole blood glucose and lactate electrode system (EML 105 analyser, Radiometer Medical A/S, Copenhagen, Denmark) was evaluated. Between-day coefficients of variation were determined for glucose and lactate. Recoveries of glucose were 100 +/- 10% using either aqueous or protein-based standards. Recoveries of lactate depended on the matrix, being underestimated in aqueous standards (approximately -10%) and 95-100% in standards containing 40 g/L albumin at lactate concentrations of 15 and 30 mmol/L. However, recoveries were high (up to 180%) at low lactate concentrations in protein-based standards. Carry-over, investigated according to National Committee for Clinical Laboratory Standards (NCCLS) guideline EP10-T2, was negligible (alpha = 0.01). Glucose and lactate biosensors equipped with new membranes were linear up to 60 and 30 mmol/L, respectively. However, linearity fell with daily use as membrane lifetime increased. We conclude that the Radiometer metabolite biosensor results are reproducible and do not suffer from specimen-related carry-over. However, lactate recovery depends on the protein content and the lactate concentration.
Observation of a current-limited double layer in a linear turbulent-heating device
Inuzuka, H.; Torii, Y.; Nagatsu, M.; Tsukishima, T.
1985-01-01
Time- and space-resolved measurements of strong double layers (DLs) have been carried out for the first time on a linear turbulent-heating device, together with measurements of fluctuation spectra and precise current measurements. A stable strong DL is formed even when the electric current through the DL is less than the so-called Bohm value. Discussion of the formation and decay processes is given, indicating a transition from an ion-acoustic DL to a monotonic DL.
Chaaba, Ali; Aboussaleh, Mohamed; Bousshine, Lahbib; Boudaia, El Hassan
2011-01-01
Limit analysis approaches are widely used to analyze metalworking processes; however, they apply only to perfectly plastic materials and, more recently, to isotropic hardening materials, excluding any kind of kinematic hardening. In the present work, using the Implicit Standard Materials concept, a sequential limit analysis approach, and the finite element method, our objective is to extend limit analysis to include linear and nonlinear kinematic strain hardening. Because this plastic flow rule is non-associative, the Implicit Standard Materials concept is adopted as a framework for non-standard plasticity modeling. The sequential limit analysis procedure, which treats plastic behavior with nonlinear kinematic strain hardening as a succession of perfectly plastic behaviors with yield surfaces updated after each sequence of limit analysis and geometry updating, is applied. A standard kinematic finite element method together with a regularization approach is used to perform two large-compression (cold forging) cases under plane strain and axisymmetric conditions.
Limitations on the precision of 238U/235U measurements and implications for environmental monitoring
Russ III, G.P.
1997-01-01
The ability to determine the isotopic composition of uranium in environmental samples is an important component of the International Atomic Energy Agency's (IAEA) safeguards program, and variations in the isotopic ratio 238U/235U provide the most direct evidence of isotopic enrichment activities. The interpretation of observed variations in 238U/235U depends on the ability to distinguish enrichment from instrumental biases and any variations occurring in the environment but not related to enrichment activities. Instrumental biases that have historically limited the accuracy of 238U/235U determinations can be eliminated by the use of the 233U/236U double-spike technique. With this technique, it is possible to determine the 238U/235U in samples to an accuracy equal to the precision of the measurement, ca. 0.1% for a few tens of nanograms of uranium. Given an accurate determination of 238U/235U, positive identification of enrichment activities depends on the observed value being outside the range of 238U/235U expected as a result of natural or environmental variations. Analyses of a suite of soil samples showed no variation beyond 0.2% in 238U/235U.
Wu, Bing; Zhao, Yinghe; Nan, Haiyan; Yang, Ziyi; Zhang, Yuhan; Zhao, Huijuan; He, Daowei; Jiang, Zonglin; Liu, Xiaolong; Li, Yun; Shi, Yi; Ni, Zhenhua; Wang, Jinlan; Xu, Jian-Bin; Wang, Xinran
2016-06-08
Precise assembly of semiconductor heterojunctions is the key to realize many optoelectronic devices. By exploiting the strong and tunable van der Waals (vdW) forces between graphene and organic small molecules, we demonstrate layer-by-layer epitaxy of ultrathin organic semiconductors and heterostructures with unprecedented precision with well-defined number of layers and self-limited characteristics. We further demonstrate organic p-n heterojunctions with molecularly flat interface, which exhibit excellent rectifying behavior and photovoltaic responses. The self-limited organic molecular beam epitaxy (SLOMBE) is generically applicable for many layered small-molecule semiconductors and may lead to advanced organic optoelectronic devices beyond bulk heterojunctions.
Burgués, Javier; Jiménez-Soto, Juan Manuel; Marco, Santiago
2018-07-12
The limit of detection (LOD) is a key figure of merit in chemical sensing. However, the estimation of this figure of merit is hindered by the non-linear calibration curves characteristic of semiconductor gas sensor technologies such as metal oxide (MOX), gasFET or thermoelectric sensors. Additionally, chemical sensors suffer from cross-sensitivities and temporal stability problems. The application of the International Union of Pure and Applied Chemistry (IUPAC) recommendations for univariate LOD estimation to non-linear semiconductor gas sensors is not straightforward due to the strong statistical requirements of the IUPAC methodology (linearity, homoscedasticity, normality). Here, we propose a methodological approach to LOD estimation through linearized calibration models. As an example, the methodology is applied to the detection of low concentrations of carbon monoxide using MOX gas sensors in a scenario where the main source of error is the presence of uncontrolled levels of humidity. Copyright © 2018 Elsevier B.V. All rights reserved.
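The IUPAC-style univariate LOD that the abstract builds on can be sketched for the linear (or linearized) case. This is a minimal illustration of the 3.3·σ/slope convention only, with made-up calibration numbers; it does not reproduce the paper's linearization of a MOX response curve.

```python
import numpy as np

# Illustrative linear calibration of signal vs. CO concentration (ppm);
# these numbers are assumptions for the sketch, not data from the paper.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
signal = np.array([0.9, 1.8, 4.1, 7.9, 16.2])

slope, intercept = np.polyfit(conc, signal, 1)
resid = signal - (intercept + slope * conc)
s_res = np.sqrt(resid @ resid / (len(conc) - 2))  # residual std dev

# IUPAC-style univariate LOD for a linear, homoscedastic calibration.
lod = 3.3 * s_res / slope
print(f"LOD ~ {lod:.2f} ppm")
```

With a non-linear sensor response, the same formula is applied only after a linearizing transform of the calibration data, which is the situation the paper addresses.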
Resolution limits of migration and linearized waveform inversion images in a lossy medium
Schuster, Gerard T.; Dutta, Gaurav; Li, Jing
2017-01-01
The vertical- and horizontal-resolution limits Δx_lossy and Δz_lossy of post-stack migration and linearized waveform inversion images are derived for lossy data in the far-field approximation. Unlike the horizontal resolution limit Δx ∝ λz/L in a lossless medium, which worsens linearly with depth z, Δx_lossy ∝ z²/QL worsens quadratically with depth for a medium with small Q values. Here, Q is the quality factor, λ is the effective wavelength, L is the recording aperture, and loss in the resolution formulae is accounted for by replacing λ with z/Q. In contrast, the lossy vertical-resolution limit Δz_lossy only worsens linearly in depth compared to Δz ∝ λ for a lossless medium. For both the causal and acausal Q models, the resolution limits are linearly proportional to 1/Q for small Q. These theoretical predictions are validated with migration images computed from lossy data.
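The contrasting depth scalings quoted above (lossless Δx ∝ λz/L versus lossy Δx_lossy ∝ z²/QL) can be checked numerically. The parameter values below are illustrative assumptions, not values from the paper.

```python
# Illustrative parameters; proportionality constants are set to 1, so only
# the depth scalings (linear vs. quadratic) are meaningful here.
lam = 30.0    # effective wavelength (m)
L = 2000.0    # recording aperture (m)
Q = 20.0      # quality factor (small Q = strong attenuation)

def dx_lossless(z):
    # Horizontal limit in a lossless medium: linear in depth z.
    return lam * z / L

def dx_lossy(z):
    # Lossy limit, obtained by replacing lambda with z/Q: quadratic in z.
    return z ** 2 / (Q * L)

for z in (1000.0, 2000.0):
    print(z, dx_lossless(z), dx_lossy(z))
```

Doubling the depth doubles the lossless limit but quadruples the lossy one, which is the quadratic worsening the abstract describes.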
Limiting precision in differential equation solvers. II Sources of trouble and starting a code
Shampine, L.F.
1978-01-01
The reasons a class of codes for solving ordinary differential equations might want to use an extremely small step size are investigated. For this class the likelihood of precision difficulties is evaluated and remedies are examined. The investigation suggests a way of automatically selecting an initial step size which should be reliably on scale.
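One widely used heuristic for picking an on-scale initial step compares the size of the solution with the size of its derivative. This is a hedged sketch in the spirit of the discussion, not Shampine's actual algorithm; the 1% factor and fallback value are assumptions.

```python
import numpy as np

def initial_step(f, t0, y0, rtol=1e-6, atol=1e-9):
    """Pick h so that one Euler step moves y by roughly 1% of its scale."""
    scale = atol + np.abs(y0) * rtol
    d0 = np.linalg.norm(y0 / scale)        # size of the state
    d1 = np.linalg.norm(f(t0, y0) / scale) # size of the derivative
    if d0 < 1e-5 or d1 < 1e-5:
        # State or derivative nearly zero: fall back to a cautious guess.
        return 1e-6
    return 0.01 * d0 / d1

# Example: y' = -50*y, y(0) = 1 -- rapid decay demands a small first step.
h = initial_step(lambda t, y: -50.0 * y, 0.0, np.array([1.0]))
print(h)
```

For this problem the scales cancel and the heuristic returns 0.01/50, i.e. a step inversely proportional to the local rate of change, which is exactly the "on scale" behavior discussed above.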
A precise measurement of the left-right asymmetry of Z Boson production at the SLAC linear collider
1994-09-01
We present a precise measurement of the left-right cross section asymmetry of Z boson production (A_LR) observed in 1993 data at the SLAC Linear Collider. The A_LR experiment provides a direct measure of the effective weak mixing angle through the initial state couplings of the electron to the Z. During the 1993 run of the SLC, the SLD detector recorded 49,392 Z events produced by the collision of longitudinally polarized electrons on unpolarized positrons at a center-of-mass energy of 91.26 GeV. A Compton polarimeter measured the luminosity-weighted electron polarization to be (63.4 ± 1.3)%. A_LR was measured to be 0.1617 ± 0.0071(stat.) ± 0.0033(syst.), which determines the effective weak mixing angle to be sin²θ_W^eff = 0.2292 ± 0.0009(stat.) ± 0.0004(syst.). This measurement of A_LR is incompatible at the level of two standard deviations with the value predicted by a fit of several other electroweak measurements to the Standard Model.
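At tree level the left-right asymmetry relates to the mixing angle via A_LR = 2x/(1 + x²) with x = 1 − 4 sin²θ_W^eff. The bisection below inverts this leading-order relation for the quoted A_LR = 0.1617; since the published sin²θ_W^eff = 0.2292 includes further corrections, only rough agreement should be expected.

```python
def a_lr(sin2th):
    # Tree-level Z-pole asymmetry: A_LR = 2x/(1+x^2), x = 1 - 4*sin^2(theta).
    x = 1.0 - 4.0 * sin2th
    return 2.0 * x / (1.0 + x * x)

# A_LR decreases as sin2th grows on [0.20, 0.25]; bisect for A_LR = 0.1617.
lo, hi = 0.20, 0.25
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if a_lr(mid) > 0.1617:
        lo = mid
    else:
        hi = mid
sin2th = 0.5 * (lo + hi)
print(f"sin^2(theta_W,eff) ~ {sin2th:.4f}")
```

The root comes out near 0.230, within a few parts per thousand of the quoted value, illustrating why A_LR is such a sensitive probe of the mixing angle.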
Yu, Xiangzhi; Gillmer, Steven R.; Woody, Shane C.; Ellis, Jonathan D.
2016-01-01
A compact, fiber-coupled, six degree-of-freedom measurement system which enables fast, accurate calibration, and error mapping of precision linear stages is presented. The novel design has the advantages of simplicity, compactness, and relatively low cost. This proposed sensor can simultaneously measure displacement, two straightness errors, and changes in pitch, yaw, and roll using a single optical beam traveling between the measurement system and a small target. The optical configuration of the system and the working principle for all degrees-of-freedom are presented along with the influence and compensation of crosstalk motions in roll and straightness measurements. Several comparison experiments are conducted to investigate the feasibility and performance of the proposed system in each degree-of-freedom independently. Comparison experiments to a commercial interferometer demonstrate error standard deviations of 0.33 μm in straightness, 0.14 μrad in pitch, 0.44 μrad in yaw, and 45.8 μrad in roll.
Moortgat-Pick, Gudrid
2010-12-01
The main goal of new physics searches at a future Linear Collider is the precise determination of the underlying new physics model. The physics potential of the ILC as well as the multi-TeV option collider CLIC have to be optimized with regard to expected results from the LHC. The exploitation of spin effects plays a crucial role in this regard. After a short status report of the Linear Collider design and physics requirements, this article explains fundamentals in polarization and provides an overview of the impact of these spin effects in electroweak precision physics. (orig.)
Size effects in non-linear heat conduction with flux-limited behaviors
Li, Shu-Nan; Cao, Bing-Yang
2017-11-01
Size effects are discussed for several non-linear heat conduction models with flux-limited behaviors, including the phonon hydrodynamic, Lagrange multiplier, hierarchy moment, nonlinear phonon hydrodynamic, tempered diffusion, thermon gas and generalized nonlinear models. For the phonon hydrodynamic, Lagrange multiplier and tempered diffusion models, heat flux will not exist in problems with sufficiently small scale. The existence of heat flux needs the sizes of heat conduction larger than their corresponding critical sizes, which are determined by the physical properties and boundary temperatures. The critical sizes can be regarded as the theoretical limits of the applicable ranges for these non-linear heat conduction models with flux-limited behaviors. For sufficiently small scale heat conduction, the phonon hydrodynamic and Lagrange multiplier models can also predict the theoretical possibility of violating the second law and multiplicity. Comparisons are also made between these non-Fourier models and non-linear Fourier heat conduction in the type of fast diffusion, which can also predict flux-limited behaviors.
Haslinger, Jaroslav; Repin, S.; Sysala, Stanislav
2016-01-01
Vol. 61, No. 5 (2016), pp. 527-564. ISSN 0862-7940. R&D Projects: GA MŠk LQ1602. Institutional support: RVO:68145535. Keywords: functionals with linear growth; limit load; truncation method; perfect plasticity. Subject RIV: BA - General Mathematics. Impact factor: 0.618 (2016). http://link.springer.com/article/10.1007/s10492-016-0146-6
On the zero mass limit of the non linear sigma model in four dimensions
Gomes, M.; Koeberle, R.
The existence of the zero-mass limit for the non-linear sigma model in four dimensions is shown to all orders in renormalized perturbation theory. The main ingredient in the proof is the imposition of many-current axial-vector Ward identities, and the tool used is Lowenstein's momentum-space subtraction procedure. Instead of introducing anisotropic symmetry-breaking mass terms, which do not vanish in the symmetry limit, it is necessary to allow for 'soft' anisotropic derivative coupling in order to obtain the correct Ward identities.
Bird's IP view of limits of conventional e+e- linear collider technology
Irwin, J.
1994-11-01
Scaling laws appropriate to future e+e- linear colliders in the high-upsilon regime are examined assuming that the luminosity must increase as the square of the energy. Limits on achievable energy for these colliders are identified under the assumption that no exotica such as energy recovery, superdisruption, or four-beam charge compensation are employed, and that all technology is foreseeable and has an apparent cost within the bounds of a large international collaboration. Following these guidelines, an upper energy limit appears around 15 TeV in the center of mass, as the normalized emittance required to produce ever smaller vertical spot sizes becomes unattainable with conventional damping ring technology.
Juste, B.; Miro, R.; Verdu, G.; Diez, S.; Campayo, J. M.
2015-07-01
Monte Carlo estimates of the giant-dipole-resonance (GDR) photoneutrons inside the Elekta Precise linac head (emitting a 15 MV photon beam) were performed using the MCNP6 code. Each component of the linac head geometry and its materials was modelled in detail using the manufacturer's information. Primary photons generate photoneutrons, and their transport across the treatment head was simulated, including the (n, γ) reactions which yield activation products. MCNP6 was used to develop a method for quantifying the activation of accelerator components. The approach described in this paper is useful in quantifying the origin and the amount of nuclear activation.
Towards the Fundamental Quantum Limit of Linear Measurements of Classical Signals.
Miao, Haixing; Adhikari, Rana X; Ma, Yiqiu; Pang, Belinda; Chen, Yanbei
2017-08-04
The quantum Cramér-Rao bound (QCRB) sets a fundamental limit for the measurement of classical signals with detectors operating in the quantum regime. Using linear-response theory and the Heisenberg uncertainty relation, we derive a general condition for achieving such a fundamental limit. When applied to classical displacement measurements with a test mass, this condition leads to an explicit connection between the QCRB and the standard quantum limit that arises from a tradeoff between the measurement imprecision and quantum backaction; the QCRB can be viewed as an outcome of a quantum nondemolition measurement with the backaction evaded. Additionally, we show that the test mass is more a resource for improving measurement sensitivity than a victim of the quantum backaction, which suggests a new approach to enhancing the sensitivity of a broad class of sensors. We illustrate these points with laser interferometric gravitational-wave detectors.
Predictive inference for best linear combination of biomarkers subject to limits of detection.
Coolen-Maturi, Tahani
2017-08-15
Measuring the accuracy of diagnostic tests is crucial in many application areas including medicine, machine learning and credit scoring. The receiver operating characteristic (ROC) curve is a useful tool to assess the ability of a diagnostic test to discriminate between two classes or groups. In practice, multiple diagnostic tests or biomarkers are combined to improve diagnostic accuracy. Often, biomarker measurements are undetectable either below or above the so-called limits of detection (LoD). In this paper, nonparametric predictive inference (NPI) for best linear combination of two or more biomarkers subject to limits of detection is presented. NPI is a frequentist statistical method that is explicitly aimed at using few modelling assumptions, enabled through the use of lower and upper probabilities to quantify uncertainty. The NPI lower and upper bounds for the ROC curve subject to limits of detection are derived, where the objective function to maximize is the area under the ROC curve. In addition, the paper discusses the effect of restriction on the linear combination's coefficients on the analysis. Examples are provided to illustrate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
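The objective named above, the area under the ROC curve for a linear combination of biomarkers, can be sketched with a toy grid search. The Gaussian data and angle parametrization are illustrative assumptions; the paper's NPI lower/upper bounds and LoD censoring are not reproduced here.

```python
import numpy as np

# Illustrative two-biomarker data: 200 healthy and 200 diseased subjects.
rng = np.random.default_rng(0)
healthy = rng.normal([0.0, 0.0], 1.0, size=(200, 2))
diseased = rng.normal([1.0, 0.8], 1.0, size=(200, 2))

def auc(scores_h, scores_d):
    # Empirical AUC: P(diseased score > healthy score), ties counted half.
    diff = scores_d[:, None] - scores_h[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def combo_auc(theta):
    # Linear combination with unit-norm coefficients (cos theta, sin theta).
    w = np.array([np.cos(theta), np.sin(theta)])
    return auc(healthy @ w, diseased @ w)

# Grid search over the combination direction for the best empirical AUC.
best_auc, best_theta = max((combo_auc(t), t)
                           for t in np.linspace(0.0, np.pi, 181))
print(f"best empirical AUC: {best_auc:.3f}")
```

Restricting the coefficients (e.g. to non-negative weights) simply restricts the search range for theta, which is the kind of restriction effect the paper discusses.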
First-wall design limitations for linear magnetic fusion (LMF) reactors
Gryczkowski, G.E.; Krakowski, R.A.; Steinhauer, L.C.; Zumdieck, J.
1978-01-01
One approach to the endloss problem in linear magnetic fusion (LMF) uses high magnetic field to reduce the required confinement time. This approach is limited by magnet stresses and bremsstrahlung heating of the first wall; the first-wall thermal-pulsing issue is addressed. Pertinent thermophysical parameters are developed in the context of high-field LMF to identify promising first-wall materials, and thermal fatigue experiments relevant to LMF first walls are reviewed. High-flux first-wall concepts are described which include both solid and evaporating first-wall configurations
Kim, Ki-Hyun; Choi, Young-Man; Gweon, Dae-Gab; Hong, Dong-Pyo; Kim, Koung-Suk; Lee, Suk-Won; Lee, Moon-Gu
2005-12-01
A decoupled dual servo (DDS) stage for an ultra-precision scanning system is introduced in this paper. The proposed DDS consists of a 3-axis fine stage for handling and carrying workpieces and an XY coarse stage. In particular, the DDS uses three voice coil motors (VCMs) as the planar actuation system of the fine stage to reduce disturbances from mechanical connections with the coarse stage. VCMs are governed by the Lorentz law; according to the law and their structure, there are no mechanical connections between coils and magnetic circuits. Moreover, a VCM does not suffer the force ripples caused by imperfections in the commutation components of linear motor systems (currents and flux densities). However, due to the VCM's mechanical constraints, the working range of the fine stage is about 5 mm². To overcome that hurdle, a coarse stage with linear motors extends the travel of the fine stage to about 200 mm². For these reasons, the proposed DDS can achieve higher-precision scanning than stages with only one servo. Using MATLAB's Sequential Quadratic Programming (SQP), the VCMs are optimally designed for the highest force under constraints such as the thermal dissipation of the coil and its size. For the linear motors, a Halbach-magnet linear motor is proposed and optimally designed in this paper. In addition, for smooth, friction-free movement, the guide systems of the DDS are composed of air bearings. To measure position precisely, linear scales with 0.1 μm resolution are used for the coarse stage's XY motions and plane-mirror laser interferometers with 20 nm resolution for the fine stage's XYθz motions. During scanning, the two stages follow the same trajectories and are controlled in parallel. The embodied ultra-precision scanning system achieves a tracking error of about 100 nm and in-positioning stability.
Seismic monitoring of small alpine rockfalls – validity, precision and limitations
M. Dietze
2017-10-01
Rockfall in deglaciated mountain valleys is perhaps the most important post-glacial geomorphic process for determining the rates and patterns of valley wall erosion. Furthermore, rockfall poses a significant hazard to inhabitants and motivates monitoring efforts in populated areas. Traditional rockfall detection methods, such as aerial photography and terrestrial laser scanning (TLS) data evaluation, provide constraints on the location and released volume of rock but have limitations due to significant time lags or integration times between surveys, and deliver limited information on rockfall triggering mechanisms and the dynamics of individual events. Environmental seismology, the study of seismic signals emitted by processes at the Earth's surface, provides a complementary solution to these shortcomings. However, this approach is predominantly limited by the strength of the signals emitted by a source and their transformation and attenuation towards receivers. To test the ability of seismic methods to identify and locate small rockfalls, and to characterise their dynamics, we surveyed a 2.16 km² near-vertical cliff section of the Lauterbrunnen Valley in the Swiss Alps with a TLS device and six broadband seismometers. During 37 days in autumn 2014, 10 TLS-detected rockfalls with volumes ranging from 0.053 ± 0.004 to 2.338 ± 0.085 m³ were independently detected and located by the seismic approach, with a deviation of 81 (+59/−29) m (about 7 % of the average inter-station distance of the seismometer network). Further potential rockfalls were detected outside the TLS-surveyed cliff area. The onset of individual events can be determined within a few milliseconds, and their dynamics can be resolved into distinct phases, such as detachment, free fall, intermittent impact, fragmentation, arrival at the talus slope and subsequent slope activity. The small rockfall volumes in this area require significant supervision during data
Trumper, David L.; Slocum, A. H.
1991-01-01
The authors constructed a high precision linear bearing. A 10.7 kg platen measuring 125 mm by 125 mm by 350 mm is suspended and controlled in five degrees of freedom by seven electromagnets. The position of the platen is measured by five capacitive probes which have nanometer resolution. The suspension acts as a linear bearing, allowing linear travel of 50 mm in the sixth degree of freedom. In the laboratory, this bearing system has demonstrated position stability of 5 nm peak-to-peak. This is believed to be the highest position stability yet demonstrated in a magnetic suspension system. Performance at this level confirms that magnetic suspensions can address motion control requirements at the nanometer level. The experimental effort associated with this linear bearing system is described. Major topics are the development of models for the suspension, implementation of control algorithms, and measurement of the actual bearing performance. Suggestions for the future improvement of the bearing system are given.
Lee, Moon G.; Gweon, Dae-Gab
2004-01-01
A comparative analysis is performed for linear motors adopting conventional and multi-segmented trapezoidal (MST) magnet arrays, respectively, for a high-precision positioning system. The proposed MST magnet array is a modified version of a Halbach magnet array. The MST array has trapezoidal magnets with variable shape and dimensions while the Halbach magnet array generally has a rectangular magnet with identical dimensions. We propose a new model that can describe the magnetic field resulting from the complex-shaped magnets. The model can be applied to both MST and conventional magnet arrays. Using the model, a design optimization of the two types of linear motors is performed and compared. The magnet array with trapezoidal magnets can produce more force than one with rectangular magnets when they are arrayed in a linear motor where there is a yoke with high permeability. After the optimization and comparison, we conclude that the linear motor with the MST magnet array can generate more actuating force per volume than the motor with the conventional array. In order to satisfy the requirements of next generation systems such as high resolution, high speed, and long stroke, the use of a linear motor with a MST array as an actuator in a high precision positioning system is recommended from the results obtained here
Study on the limiting acceleration rate in the VLEPP linear accelerator
Balakin, V.E.; Brezhnev, O.N.; Zakhvatkin, M.N.
1987-01-01
To realize the design of colliding linear electron-positron beams it is necessary to solve the radical problem of producing an accelerating structure with an acceleration rate of approximately 100 MeV/m which can accelerate 10^12 particles in a bunch. Results of experimental studies of the limiting acceleration rate in the VLEPP accelerating structure are presented. Accelerating sections of different lengths were tested. When testing sections 29 cm long, an acceleration rate of 55 MeV/m was attained, and for the 1 m section the value reached 40 MeV/m. The maximum rate of acceleration (90 MeV/m) was attained when the electric field intensity on the structure surface exceeded 150 MV/m.
High Precision Linear And Circular Polarimetry. Sources With Stable Stokes Q,U & V In The Ghz Regime
Myserlis, Ioannis; Angelakis, E.; Zensus, J. A.
2017-10-01
We present a novel data analysis pipeline for the reconstruction of the linear and circular polarization parameters of radio sources. It includes several correction steps to minimize the effect of instrumental polarization, allowing the detection of linear and circular polarization degrees as low as 0.3 %. The instrumental linear polarization is corrected across the whole telescope beam and significant Stokes Q and U can be recovered even when the recorded signals are severely corrupted. The instrumental circular polarization is corrected with two independent techniques which yield consistent Stokes V results. The accuracy we reach is of the order of 0.1-0.2 % for the polarization degree and 1° for the angle. We used it to recover the polarization of around 150 active galactic nuclei that were monitored monthly between 2010.6 and 2016.3 with the Effelsberg 100-m telescope. We identified sources with stable polarization parameters that can be used as polarization standards. Five sources have stable linear polarization; three are linearly unpolarized; eight have stable polarization angle; and 11 sources have stable circular polarization, four of which have non-zero Stokes V.
Guinn, V.P.; Nakazawa, L.; Leslie, J.
1984-01-01
The instrumental neutron activation analysis (INAA) Advance Prediction Computer Program (APCP) is extremely useful in guiding one to optimum subsequent experimental analyses of samples of all types of matrices. By taking into account the contributions to the cumulative Compton-continuum levels from all significant induced gamma-emitting radionuclides, it provides good INAA advance estimates of detectable photopeaks, measurement precisions, concentration lower limits of detection (LODs) and optimum irradiation/decay/counting conditions - as well as of the very important maximum allowable sample size for each set of conditions calculated. The usefulness and importance of the four output parameters cited in the title are discussed using the INAA APCP outputs for NBS SRM-1632 Coal as the example.
Shu, D.; Liu, W.; Kearney, S.; Anton, J.; Tischler, J. Z.
2015-09-01
The 3-D X-ray diffraction microscope is a new nondestructive tool for the three-dimensional characterization of mesoscopic materials structure. A flexural-pivot-based precision linear stage has been designed to perform a wire scan as a differential aperture for the 3-D diffraction microscope at the Advanced Photon Source, Argonne National Laboratory. The mechanical design and finite element analyses of the flexural stage, as well as its initial mechanical test results with laser interferometer are described in this paper.
Centralized motion control of a linear tooth belt drive: Analysis of the performance and limitations
Jokinen, M.
2010-07-01
A centralized robust position control for an electrically driven tooth belt drive is designed in this doctoral thesis. Both a cascaded control structure and a PID-based position controller are discussed. The performance and the limitations of the system are analyzed and design principles for the mechanical structure and the control design are given. These design principles are also suitable for most motion control applications where mechanical resonance frequencies and control loop delays are present. One of the major challenges in the design of a controller for machinery applications is that the values of the parameters in the system model (parameter uncertainty) or the system model itself (non-parametric uncertainty) are seldom known accurately in advance. In this thesis a systematic analysis of the parameter uncertainty of the linear tooth belt drive model is presented and the effect of the variation of a single parameter on the performance of the total system is shown. The total variation of the model parameters is taken into account in the control design phase using Quantitative Feedback Theory (QFT). The thesis also introduces a new method to analyze reference feedforward controllers applying the QFT. The performance of the designed controllers is verified by experimental measurements. The measurements confirm the control design principles that are given in this thesis. (orig.)
Michalicek, Gregor
2015-01-01
Density functional theory (DFT) is the most widely-used first-principles theory for analyzing, describing and predicting the properties of solids based on the fundamental laws of quantum mechanics. The success of the theory is a consequence of powerful approximations to the unknown exchange and correlation energy of the interacting electrons and of sophisticated electronic structure methods that enable the computation of the density functional equations on a computer. A widely used electronic structure method is the full-potential linearized augmented plane-wave (FLAPW) method, that is considered to be one of the most precise methods of its kind and often referred to as a standard. Challenged by the demand of treating chemically and structurally increasingly more complex solids, in this thesis this method is revisited and extended along two different directions: (i) precision and (ii) efficiency. In the full-potential linearized augmented plane-wave method the space of a solid is partitioned into nearly touching spheres, centered at each atom, and the remaining interstitial region between the spheres. The Kohn-Sham orbitals, which are used to construct the electron density, the essential quantity in DFT, are expanded into a linearized augmented plane-wave basis, which consists of plane waves in the interstitial region and angular momentum dependent radial functions in the spheres. In this thesis it is shown that for certain types of materials, e.g., materials with very broad electron bands or large band gaps, or materials that allow the usage of large space-filling spheres, the variational freedom of the basis in the spheres has to be extended in order to represent the Kohn-Sham orbitals with high precision over a large energy spread. Two kinds of additional radial functions confined to the spheres, so-called local orbitals, are evaluated and found to successfully eliminate this error. A new efficient basis set is developed, named linearized augmented lattice
Howard, Jonathon; Garzon-Coral, Carlos
2017-11-01
Tissues are shaped and patterned by mechanical and chemical processes. A key mechanical process is the positioning of the mitotic spindle, which determines the size and location of the daughter cells within the tissue. Recent force and position-fluctuation measurements indicate that pushing forces, mediated by the polymerization of astral microtubules against the cell cortex, maintain the mitotic spindle at the cell center in Caenorhabditis elegans embryos. The magnitude of the centering forces suggests that the physical limit on the accuracy and precision of this centering mechanism is determined by the number of pushing microtubules rather than by thermally driven fluctuations. In cells that divide asymmetrically, anti-centering, pulling forces generated by cortically located dyneins, in conjunction with microtubule depolymerization, oppose the pushing forces to drive spindle displacements away from the center. Thus, a balance of centering pushing forces and anti-centering pulling forces localize the mitotic spindles within dividing C. elegans cells. © 2017 The Authors. BioEssays published by Wiley Periodicals, Inc.
Falvo, Cyril
2018-02-01
The theory of linear and non-linear infrared response of vibrational Holstein polarons in one-dimensional lattices is presented in order to identify the spectral signatures of self-trapping phenomena. Using a canonical transformation, the optical response is computed from the small polaron point of view which is valid in the anti-adiabatic limit. Two types of phonon baths are considered: optical phonons and acoustical phonons, and simple expressions are derived for the infrared response. It is shown that for the case of optical phonons, the linear response can directly probe the polaron density of states. The model is used to interpret the experimental spectrum of crystalline acetanilide in the C=O range. For the case of acoustical phonons, it is shown that two bound states can be observed in the two-dimensional infrared spectrum at low temperature. At high temperature, analysis of the time-dependence of the two-dimensional infrared spectrum indicates that bath mediated correlations slow down spectral diffusion. The model is used to interpret the experimental linear-spectroscopy of model α-helix and β-sheet polypeptides. This work shows that the Davydov Hamiltonian cannot explain the observations in the NH stretching range.
Suwada, Tsuyoshi; Satoh, Masanori; Telada, Souichi; Minoshima, Kaoru
2013-09-01
A laser-based alignment system with a He-Ne laser has been newly developed in order to precisely align accelerator units at the KEKB injector linac. The laser beam was first implemented as a 500-m-long fiducial straight line for alignment measurements. We experimentally investigated the propagation and stability characteristics of the laser beam passing through laser pipes in vacuum. The pointing stability at the last fiducial point was successfully kept within transverse displacements of ±40 μm in one standard deviation by applying feedback control. This pointing stability corresponds to an angle of ±0.08 μrad. This report contains a detailed description of the experimental investigation of the propagation and stability characteristics of the laser beam in the laser-based alignment system for long-distance linear accelerators.
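The quoted angle follows directly from the displacement tolerance over the 500 m fiducial line; a one-line check under the small-angle approximation:

```python
# Transverse displacement tolerance over the fiducial-line length gives the
# pointing-angle stability quoted in the abstract.
displacement = 40e-6  # m (±40 μm at the last fiducial point)
baseline = 500.0      # m (length of the fiducial straight line)
angle = displacement / baseline  # small-angle approximation, rad
print(f"{angle * 1e6:.2f} urad")  # → 0.08 urad
```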
Arce, Pedro; Lagares, Juan Ignacio
2018-02-01
We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm2 to 40 × 40 cm2, a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.
Flutter and limit cycle oscillation suppression using linear and nonlinear tuned vibration absorbers
Verstraelen, Edouard; Kerschen, Gaëtan; Dimitriadis, Grigorios
2017-01-01
Aircraft are more than ever pushed to their limits for performance reasons. Consequently, they become increasingly nonlinear and they are more prone to undergo aeroelastic limit cycle oscillations. Structural nonlinearities affect aircraft such as the F-16, which can undergo store-induced limit cycle oscillations (LCOs). Furthermore, transonic buzz can lead to LCOs because of moving shock waves in transonic flight conditions on many aircraft. This study presents a numerical investigation o...
Solution of the spherically symmetric linear thermoviscoelastic problem in the inertia-free limit
Christensen, Tage Emil; Dyre, J. C.
2008-01-01
paper-the thermoviscoelastic problem may be solved analytically in the inertia-free limit, i.e., the limit where the sample is much smaller than the wavelength of sound waves at the frequencies of interest. As for the one-dimensional thermoviscoelastic problem [Christensen et al., Phys. Rev. E 75...
Fiorenza, Alberto; Vincenzi, Giovanni
2011-01-01
Research highlights: → We prove a result true for all linear homogeneous recurrences with constant coefficients. → As a corollary of our results we immediately get the celebrated Poincaré theorem. → The limit of the ratio of adjacent terms is characterized as the unique leading root of the characteristic polynomial. → The Golden Ratio, the Kepler limit of the classical Fibonacci sequence, is the unique leading root. → The Kepler limit may differ from the unique root of maximum modulus and multiplicity. - Abstract: For complex linear homogeneous recursive sequences with constant coefficients we find a necessary and sufficient condition for the existence of the limit of the ratio of consecutive terms. The result can be applied even if the characteristic polynomial does not necessarily have roots with pairwise distinct moduli, as in the celebrated Poincaré theorem. In case of existence, we characterize the limit as a particular root of the characteristic polynomial, which depends on the initial conditions and is not necessarily the unique root with maximum modulus and multiplicity. The result extends to a quite general context the way used to find the Golden mean as the limit of ratios of consecutive terms of the classical Fibonacci sequence.
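The Kepler limit described above is easy to check numerically. The sketch below (Fibonacci is the illustrative example, not the paper's code) iterates a second-order recurrence x_n = a·x_{n-1} + b·x_{n-2} and compares the ratio of consecutive terms with the leading root of the characteristic polynomial:

```python
def ratio_limit(a, b, x0, x1, steps=60):
    """Ratio of consecutive terms of x_n = a*x_{n-1} + b*x_{n-2}."""
    prev, cur = x0, x1
    for _ in range(steps):
        prev, cur = cur, a * cur + b * prev
    return cur / prev

# Fibonacci: characteristic polynomial t^2 - t - 1, leading root = golden ratio.
golden = (1 + 5 ** 0.5) / 2
print(abs(ratio_limit(1, 1, 0, 1) - golden) < 1e-12)  # → True
```

The paper's point is more delicate: which root the ratio converges to (if it converges at all) can depend on the initial conditions when several roots share the maximum modulus.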
An Efficient Implementation of Non-Linear Limit State Analysis Based on Lower-Bound Solutions
Damkilde, Lars; Schmidt, Lotte Juhl
2005-01-01
Limit State analysis has been used in design for decades e.g. the yield line theory for concrete slabs or slip line solutions in geotechnics. In engineering practice manual methods have been dominating but in recent years the interest in numerical methods has been increasing. In this respect it i...
Characterization of linear forms of the circular enterocin AS-48 obtained by limited proteolysis
Montalbán-López, Manuel; Spolaore, Barbara; Pinato, Odra; Martínez-Bueno, Manuel; Valdivia, Eva; Maqueda, Mercedes; Fontana, Angelo
2008-01-01
AS-48 is a 70-residue circular peptide from Enterococcus faecalis with a broad antibacterial activity. Here, we produced by limited proteolysis a protein species carrying a single nicking and fragments of 55 and 38 residues. Nicked AS-48 showed a lower helicity by far-ultraviolet circular dichroism
Kar, Soummya; Moura, José M. F.
2011-08-01
The paper considers gossip distributed estimation of a (static) distributed random field (a.k.a., large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time-scale algorithms - one time scale associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under the appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we consider the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, so is the distributed estimator.
Improved Linear Algebra Methods for Redshift Computation from Limited Spectrum Data - II
Foster, Leslie; Waagen, Alex; Aijaz, Nabella; Hurley, Michael; Luis, Apolo; Rinsky, Joel; Satyavolu, Chandrika; Gazis, Paul; Srivastava, Ashok; Way, Michael
2008-01-01
Given photometric broadband measurements of a galaxy, Gaussian processes may be used with a training set to solve the regression problem of approximating the redshift of this galaxy. However, in practice solving the traditional Gaussian processes equation is too slow and requires too much memory. We employed several methods to avoid this difficulty using algebraic manipulation and low-rank approximation, and were able to quickly approximate the redshifts in our testing data within 17 percent of the known true values using limited computational resources. The accuracy of one method, the V Formulation, is comparable to the accuracy of the best methods currently used for this problem.
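The "V Formulation" itself is not spelled out in the abstract, but the general idea of replacing the full n×n Gaussian-process solve with a low-rank approximation can be sketched as follows. This is a generic subset-of-regressors (Nyström) scheme on toy data, not the authors' code; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem standing in for photometry -> redshift.
n = 2000
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

def rbf(A, B, length=1.0):
    """Squared-exponential kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

# Subset-of-regressors: m inducing points replace the full n x n solve,
# so the linear algebra costs O(n m^2) instead of O(n^3).
m, noise = 50, 0.1
Xm = X[rng.choice(n, m, replace=False)]
Kmn = rbf(Xm, X)
Kmm = rbf(Xm, Xm) + 1e-8 * np.eye(m)  # jitter for numerical conditioning
w = np.linalg.solve(noise ** 2 * Kmm + Kmn @ Kmn.T, Kmn @ y)

Xt = np.linspace(-3, 3, 200)[:, None]
pred = rbf(Xt, Xm) @ w
print(np.abs(pred - np.sin(Xt[:, 0])).mean())  # small mean absolute error
```

The design choice mirrors the abstract: algebraic manipulation turns an intractable dense solve into a cheap low-rank one at a modest cost in accuracy.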
Shibamoto, Yuta; Otsuka, Shinya; Iwata, Hiromitsu; Sugie, Chikao; Ogino, Hiroyuki; Tomita, Natsuo
2012-01-01
Since the dose delivery pattern in high-precision radiotherapy is different from that in conventional radiation, radiobiological assessment of the physical dose used in stereotactic irradiation and intensity-modulated radiotherapy has become necessary. In these treatments, the daily dose is usually given intermittently over a time longer than that used in conventional radiotherapy. During prolonged radiation delivery, sublethal damage repair takes place, leading to the decreased effect of radiation. This phenomenon is almost universally observed in vitro. In in vivo tumors, however, this decrease in effect can be counterbalanced by rapid reoxygenation, which has been demonstrated in a laboratory study. Studies on reoxygenation in human tumors are warranted to better evaluate the influence of prolonged radiation delivery. Another issue related to radiosurgery and hypofractionated stereotactic radiotherapy is the mathematical model for dose evaluation and conversion. Many clinicians use the linear-quadratic (LQ) model and biologically effective dose (BED) to estimate the effects of various radiation schedules, but it has been suggested that the LQ model is not applicable to high doses per fraction. Recent experimental studies verified the inadequacy of the LQ model in converting hypofractionated doses into single doses. The LQ model overestimates the effect of high fractional doses of radiation. BED is particularly incorrect when it is used for tumor responses in vivo, since it does not take reoxygenation into account. For normal tissue responses, improved models have been proposed, but, for in vivo tumor responses, the currently available models are not satisfactory, and better ones should be proposed in future studies. (author)
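The BED referred to above is, under the LQ model, BED = n·d·(1 + d/(α/β)) for n fractions of dose d; a quick sketch (the α/β = 10 Gy value is an illustrative assumption for tumors):

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose under the linear-quadratic model (Gy)."""
    return n * d * (1 + d / alpha_beta)

# Conventional 30 x 2 Gy versus a hypofractionated 3 x 18 Gy schedule.
print(round(bed(30, 2.0, 10.0), 1))   # → 72.0
print(round(bed(3, 18.0, 10.0), 1))   # → 151.2
```

The very large BED assigned to the hypofractionated schedule illustrates the abstract's caution: at high doses per fraction the LQ model overestimates the effect, and BED further ignores reoxygenation.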
Berdyugin, A.; Piirola, V.; Sakanoi, T.; Kagitani, M.; Yoneda, M.
2018-03-01
Aim. To study the binary geometry of the classic Algol-type triple system λ Tau, we have searched for polarization variations over the orbital cycle of the inner semi-detached binary, arising from light scattering in the circumstellar material formed from ongoing mass transfer. Phase-locked polarization curves provide an independent estimate for the inclination i, orientation Ω, and the direction of the rotation for the inner orbit. Methods: Linear polarization measurements of λ Tau in the B, V , and R passbands with the high-precision Dipol-2 polarimeter have been carried out. The data have been obtained on the 60 cm KVA (Observatory Roque de los Muchachos, La Palma, Spain) and Tohoku 60 cm (Haleakala, Hawaii, USA) remotely controlled telescopes over 69 observing nights. Analytic and numerical modelling codes are used to interpret the data. Results: Optical polarimetry revealed small intrinsic polarization in λ Tau with 0.05% peak-to-peak variation over the orbital period of 3.95 d. The variability pattern is typical for binary systems showing strong second harmonic of the orbital period. We apply a standard analytical method and our own light scattering models to derive parameters of the inner binary orbit from the fit to the observed variability of the normalized Stokes parameters. From the analytical method, the average for three passband values of orbit inclination i = 76° + 1°/-2° and orientation Ω = 15°(195°) ± 2° are obtained. Scattering models give similar inclination values i = 72-76° and orbit orientation ranging from Ω = 16°(196°) to Ω = 19°(199°), depending on the geometry of the scattering cloud. The rotation of the inner system, as seen on the plane of the sky, is clockwise. We have found that with the scattering model the best fit is obtained for the scattering cloud located between the primary and the secondary, near the inner Lagrangian point or along the Roche lobe surface of the secondary facing the primary. The inclination i
Xie, Xianhong; Xue, Xiaonan; Strickler, Howard D
2018-01-15
Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left-censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton-Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods including the maximum likelihood (ML) method and the ad hoc approach of replacing the left-censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation method, and the computation is constrained by the number of quadrature points; while the ML method also suffers from the constraint on the number of quadrature points, the MCNR method does not have this limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real-world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV-positive women. Copyright © 2017 John Wiley & Sons, Ltd.
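The bias of the ad hoc HDL substitution is easy to see in a small simulation (a sketch with an assumed lognormal biomarker and an arbitrary detection limit, not the authors' MCNR code):

```python
import random

random.seed(0)
LOD = 2.0  # assumed lower limit of detection

# True biomarker values: standard lognormal, so a sizeable fraction is censored.
true_vals = [random.lognormvariate(0.0, 1.0) for _ in range(100_000)]

# Ad hoc HDL handling: replace every left-censored value with LOD / 2.
hdl_vals = [v if v >= LOD else LOD / 2 for v in true_vals]

true_mean = sum(true_vals) / len(true_vals)
hdl_mean = sum(hdl_vals) / len(hdl_vals)
print(f"true mean {true_mean:.3f}  HDL-imputed mean {hdl_mean:.3f}")
```

In this configuration the HDL-imputed mean is systematically biased upward (by roughly 0.1 here); likelihood-based approaches such as ML or the proposed MCNR avoid this by modeling the censoring mechanism directly.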
Verbiest, J. P. W.; Bailes, M.; van Straten, W.; Hobbs, G. B.; Edwards, R. T.; Manchester, R. N.; Bhat, N. D. R.; Sarkissian, J. M.; Jacoby, B. A.; Kulkarni, S. R.
2008-05-01
Analysis of 10 years of high-precision timing data on the millisecond pulsar PSR J0437-4715 has resulted in a model-independent kinematic distance based on an apparent orbital period derivative, Ṗ_b, determined at the 1.5% level of precision (D_k = 157.0 ± 2.4 pc), making it one of the most accurate stellar distance estimates published to date. The discrepancy between this measurement and a previously published parallax distance estimate is attributed to errors in the DE200 solar system ephemerides. The precise measurement of Ṗ_b allows a limit on the variation of Newton's gravitational constant, |Ġ/G| ≤ 23 × 10⁻¹² yr⁻¹. We also constrain any anomalous acceleration along the line of sight to the pulsar to |a⊙/c| ≤ 1.5 × 10⁻¹⁸ s⁻¹ at 95% confidence, and derive a pulsar mass, m_psr = 1.76 ± 0.20 M⊙, one of the highest estimates so far obtained.
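The kinematic distance rests on the Shklovskii relation, Ṗ_b/P_b = μ²D/c, which converts the apparent orbital period derivative into a distance D given the proper motion μ. A back-of-the-envelope sketch with illustrative values for PSR J0437-4715 (μ ≈ 140.9 mas/yr, P_b ≈ 5.741 d, Ṗ_b ≈ 3.73 × 10⁻¹²; these numbers are assumptions for the example, not taken from the abstract):

```python
import math

C = 2.998e8     # speed of light, m/s
PC = 3.0857e16  # parsec, m
MAS = math.pi / (180 * 3600 * 1000)  # milliarcsecond, rad
YR = 3.156e7    # year, s

mu = 140.9 * MAS / YR  # proper motion, rad/s (illustrative value)
Pb = 5.741 * 86400     # orbital period, s (illustrative value)
Pb_dot = 3.73e-12      # apparent orbital period derivative (dimensionless)

# Shklovskii relation: Pb_dot / Pb = mu^2 * D / c  =>  D = c * Pb_dot / (mu^2 * Pb)
D = C * Pb_dot / (mu ** 2 * Pb)
print(f"kinematic distance ~ {D / PC:.0f} pc")  # close to the 157 pc quoted above
```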
Mieussens, Luc
2013-01-01
The unified gas kinetic scheme (UGKS) of K. Xu et al. (2010) [37], originally developed for multiscale gas dynamics problems, is applied in this paper to a linear kinetic model of radiative transfer theory. While such problems exhibit purely diffusive behavior in the optically thick (or small Knudsen) regime, we prove that UGKS is asymptotic preserving (AP) not only in this regime but in the free transport regime as well. Moreover, this scheme is modified to include a time-implicit discretization of the limit diffusion equation, and to correctly capture the solution in case of boundary layers. Contrary to many AP schemes, this method is based on a standard finite volume approach; it neither uses any decomposition of the solution nor staggered grids. Several numerical tests demonstrate the properties of the scheme.
Precision Oncology: Between Vaguely Right and Precisely Wrong.
Brock, Amy; Huang, Sui
2017-12-01
Precision Oncology seeks to identify and target the mutation that drives a tumor. Despite its straightforward rationale, concerns about its effectiveness are mounting. What is the biological explanation for the "imprecision?" First, Precision Oncology relies on indiscriminate sequencing of genomes in biopsies that barely represent the heterogeneous mix of tumor cells. Second, findings that defy the orthodoxy of oncogenic "driver mutations" are now accumulating: the ubiquitous presence of oncogenic mutations in silent premalignancies or the dynamic switching without mutations between various cell phenotypes that promote progression. Most troublesome is the observation that cancer cells that survive treatment still will have suffered cytotoxic stress and thereby enter a stem cell-like state, the seeds for recurrence. The benefit of "precision targeting" of mutations is inherently limited by this counterproductive effect. These findings confirm that there is no precise linear causal relationship between tumor genotype and phenotype, a reminder of logician Carveth Read's caution that being vaguely right may be preferable to being precisely wrong. An open-minded embrace of the latest inconvenient findings indicating nongenetic and "imprecise" phenotype dynamics of tumors as summarized in this review will be paramount if Precision Oncology is ultimately to lead to clinical benefits. Cancer Res; 77(23); 6473-9. ©2017 AACR.
Bauer, A.
2006-01-01
The standard model of elementary particle physics (SM) is perhaps the most significant theory in physics. It describes interacting matter and gauge fields with high precision. Nevertheless, there are a few requirements that are not fulfilled by the SM, for example the incorporation of gravity, neutrino oscillations and further open questions. On the way to a more comprehensive theory, one can make use of an effective power series ansatz, which describes SM physics as well as new phenomena. We exploit this ansatz to parameterize new effects with the help of a new mass scale and a set of new coupling constants. At the lowest order, one recovers the SM. Higher-order effects describe the new physics. Requiring certain properties under symmetry transformations yields a definite number of effective operators with mass dimension six. These operators are the starting point of our considerations. First, we calculate decay rates and cross sections, respectively, for selected processes under the assumption that only one new operator contributes at a time. Assuming that the observable's additional contribution is smaller than the experimental error, we give upper limits on the new coupling constant depending on the new mass scale. For this purpose we use leptonic and certain semileptonic precision data. On the one hand, the results presented in this thesis give physicists the opportunity to decide which experiments are good candidates to increase precision. On the other hand, they show which experiment has the most promising potential for discoveries. (orig.)
Dornfeld, David
2008-01-01
Today there is a high demand for high-precision products. The manufacturing processes are now highly sophisticated and derive from a specialized genre called precision engineering. Precision Manufacturing provides an introduction to precision engineering and manufacturing with an emphasis on the design and performance of precision machines and machine tools, metrology, tooling elements, machine structures, sources of error, precision machining processes and precision process planning. It also discusses the critical role that precision machine design for manufacturing has played in technological developments over the last few hundred years. In addition, the influence of sustainable manufacturing requirements in precision processes is introduced. Drawing upon years of practical experience and using numerous examples and illustrative applications, David Dornfeld and Dae-Eun Lee cover precision manufacturing as it applies to: The importance of measurement and metrology in the context of Precision Manufacturing. Th...
Xu, H.; Kevrekidis, P. G.; Kapitula, T.
2017-06-01
In the present work, we consider a variety of two-component, one-dimensional states in nonlinear Schrödinger equations in the presence of a parabolic trap, inspired by the atomic physics context of Bose-Einstein condensates. The use of Lyapunov-Schmidt reduction methods allows us to identify persistence criteria for the different families of solutions which we classify as (m, n), in accordance with the number of zeros in each component. Upon developing the existence theory, we turn to a stability analysis of the different configurations, using the Krein signature and the Hamiltonian-Krein index as topological tools identifying the number of potentially unstable eigendirections for each branch. A perturbation expansion for the eigenvalue problems associated with nonlinear states found near the linear limit permits us to obtain explicit asymptotic expressions for the eigenvalues. Finally, when the states are found to be unstable, typically by virtue of Hamiltonian Hopf bifurcations, their dynamics is studied in order to identify the nature of the respective instability. The dynamics is generally found to lead to a vibrational evolution over long time scales.
ON-SKY DEMONSTRATION OF A LINEAR BAND-LIMITED MASK WITH APPLICATION TO VISUAL BINARY STARS
Crepp, J.; Ge, J.; Kravchenko, I.; Serabyn, E.; Carson, J.
2010-01-01
We have designed and built the first band-limited coronagraphic mask used for ground-based high-contrast imaging observations. The mask resides in the focal plane of the near-infrared camera PHARO at the Palomar Hale telescope and receives a well-corrected beam from an extreme adaptive optics system. Its performance on-sky with single stars is comparable to current state-of-the-art instruments: contrast levels of ∼10⁻⁵ or better at 0.″8 in K_s after post-processing, depending on how well non-common-path errors are calibrated. However, given the mask's linear geometry, we are able to conduct additional unique science observations. Since the mask does not suffer from pointing errors down its long axis, it can suppress the light from two different stars simultaneously, such as the individual components of a spatially resolved binary star system, and search for faint tertiary companions. In this paper, we present the design of the mask, the science motivation for targeting binary stars, and our preliminary results, including the detection of a candidate M-dwarf tertiary companion orbiting the visual binary star HIP 48337, which we are continuing to monitor with astrometry to determine its association.
Masuda, Hiroshi; Kanda, Yutaro; Okamoto, Yoshifumi; Hirono, Kazuki; Hoshino, Reona; Wakao, Shinji; Tsuburaya, Tomonori
2017-12-01
It is very important to design electrical machinery with high efficiency from the viewpoint of energy saving. Therefore, topology optimization (TO) is occasionally used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO can achieve a design with a much higher degree of structural freedom, there is a possibility of deriving novel structures quite different from conventional ones. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. The magnetic shielding model, in which there are many local minima, is first employed as a benchmark for evaluating the performance of several mathematical programming methods. Second, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.
L. Malina
2017-08-01
Beam optics control is of critical importance for machine performance and protection. Nowadays, turn-by-turn (TbT) beam position monitor (BPM) data are increasingly exploited, as they allow for fast and simultaneous measurement of various optics quantities. Nevertheless, so far the best documented uncertainty of measured β-functions is about 10‰ rms. In this paper we compare the β-functions of the ESRF storage ring measured with two different TbT techniques—the N-BPM and the Amplitude methods—with those inferred from a measurement of the orbit response matrix (ORM). We show how to improve the precision of TbT techniques by refining the Fourier transform of TbT data with a properly chosen excitation amplitude. The precision of the N-BPM method is further improved by refining the phase advance measurement. This represents a step forward compared to standard TbT measurements. First experimental results showing the precision of β-functions pushed down to 4‰ in both TbT and ORM techniques are reported and commented on.
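The spectral refinement mentioned above can be illustrated with a toy tune measurement: parabolic interpolation of the windowed spectrum's peak estimates an oscillation frequency well below the raw FFT bin resolution. This is a sketch with synthetic data and illustrative values, not the ESRF analysis itself.

```python
import numpy as np

# Synthetic turn-by-turn signal: betatron-like oscillation at tune Q = 0.31
# (illustrative values, not ESRF parameters)
n_turns = 1024
q_true = 0.31
x = np.cos(2 * np.pi * q_true * np.arange(n_turns) + 0.4)

# Plain FFT peak locates the tune only to ~1/N resolution
spec = np.abs(np.fft.rfft(x * np.hanning(n_turns)))
k = np.argmax(spec[1:]) + 1   # skip the DC bin

# Parabolic interpolation of the log-spectrum around the peak refines
# the frequency estimate to a small fraction of an FFT bin
a, b, c = np.log(spec[k - 1]), np.log(spec[k]), np.log(spec[k + 1])
delta = 0.5 * (a - c) / (a - 2 * b + c)
q_est = (k + delta) / n_turns
```

The same idea underlies refined Fourier analysis of real TbT data, where the gain over the raw bin spacing directly improves the optics measurement.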
Li Yan-Chao; Wang Chun-Hui; Qu Yang; Gao Long; Cong Hai-Fang; Yang Yan-Ling; Gao Jie; Wang Ao-You
2011-01-01
This paper proposes a novel method of multi-beam laser heterodyne measurement of the linear expansion coefficient of a metal. Based on the Doppler effect and heterodyne technology, the length-variation information is loaded onto the frequency difference of the multi-beam laser heterodyne signal through frequency modulation by an oscillating mirror; after demodulation of the heterodyne signal, the method simultaneously yields many values of the length variation caused by the temperature change. Processing these values by weighted averaging gives the length variation accurately and, finally, the linear expansion coefficient of the metal. The method is used to simulate the measurement of the linear expansion coefficient of a metal rod at different temperatures in MATLAB; the results show a relative measurement error of only 0.4%.
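The final data-reduction step described above, combining several demodulated length-variation estimates by a weighted average and forming α = ΔL / (L₀ ΔT), can be sketched as follows. All numbers are invented for illustration; they are not the paper's data.

```python
import numpy as np

# Per-beam length-variation estimates from demodulation (m), with
# assumed per-estimate uncertainties used as inverse-variance weights
dL = np.array([11.8e-6, 12.1e-6, 12.0e-6, 11.9e-6])          # m
w = 1.0 / np.array([0.3e-6, 0.2e-6, 0.2e-6, 0.25e-6]) ** 2   # 1/m^2

# Weighted average of the length variation
dL_avg = np.sum(w * dL) / np.sum(w)

# Linear expansion coefficient alpha = dL / (L0 * dT)
L0, dT = 0.5, 2.0            # assumed rod length (m) and temperature step (K)
alpha = dL_avg / (L0 * dT)   # 1/K
```

Inverse-variance weighting is one natural choice here; the abstract only specifies that the values are "processed by weighted average".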
Patricia M. de Groot, MD
2018-04-01
Purpose: Precision radiation therapy, such as stereotactic body radiation therapy, and limited resection are being used more frequently to treat intrathoracic malignancies. Effective local control requires precise radiation target delineation or complete resection. Lung biopsy tracts (LBTs) on computed tomography (CT) scans after the use of tract sealants can mimic malignant tract seeding (MTS), and it is unclear whether these LBTs should be included in the calculated tumor volume or resected. This study evaluates the incidence, appearance, evolution, and malignant seeding of LBTs. Methods and materials: A total of 406 lung biopsies were performed in oncology patients using a tract sealant over 19 months. Of these patients, 326 had follow-up CT scans and were included in the study group. Four thoracic radiologists retrospectively analyzed the imaging, and a pathologist examined 10 resected LBTs. Results: A total of 234 of 326 biopsies (72%; primary lung cancer [n = 98], metastases [n = 81], benign [n = 50], and nondiagnostic [n = 5]) showed an LBT on CT. LBTs were identified on imaging 0 to 3 months after biopsy. LBTs were typically straight or serpiginous, with a thickness of 2 to 5 mm. Most LBTs were unchanged (92%) or decreased (6.3%) over time. An increase in LBT thickness/nodularity suspicious for MTS occurred in 4 of 234 biopsies (1.7%). MTS occurred only after biopsy of metastases from extrathoracic malignancies, and none occurred in patients with lung cancer. Conclusions: LBTs are common on CT after lung biopsy using a tract sealant. MTS is uncommon and occurred only in patients with extrathoracic malignancies. No MTS was found in patients with primary lung cancer. Accordingly, potential alteration of planned therapy should be considered only in patients with LBTs and extrathoracic malignancies who are being considered for stereotactic body radiation therapy or wedge resection.
The newest precision measurement
Lee, Jing Gu; Lee, Jong Dae
1974-05-01
This book introduces the basics of precision measurement, measurement of length, limit gauges, measurement of angles, measurement of surface roughness, measurement of shapes and locations, measurement of outlines, measurement of external and internal threads, gear testing, accuracy inspection of machine tools, three-dimensional coordinate measuring machines, digitalisation of precision measurement, automation of precision measurement, measurement of cutting tools, measurement using lasers, and points to consider when choosing a length-measuring instrument.
Zhang, Junzhi; Lv, Chen; Yue, Xiaowei; Li, Yutong; Yuan, Ye
2014-01-01
On/off solenoid valves with PWM control are widely used in all types of vehicle electro-hydraulic control systems owing to their reliability, low cost, and fast action. However, linear hydraulic modulation can hardly be achieved with on/off valves, mainly because of the nonlinear behavior of the valve dynamics and the fluid, which significantly affects control accuracy. In this paper, a linear relationship between the limited pressure difference and the coil current of an on/off valve in its critical closed state is proposed and illustrated, which has great potential for improving hydraulic control performance. The hydraulic braking system of the case study is modeled. The linear correspondence between the limited pressure difference and the coil current of the inlet valve is simulated and further verified experimentally. Based on the validated simulation models, the impacts of key parameters are investigated. The effect of environmental temperature on the limited pressure difference is studied experimentally, and an amended linear relation is given according to the test data.
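The reported linear relation between limited pressure difference and coil current lends itself to a simple least-squares calibration, sketched below. The data points are hypothetical, not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration data: limited pressure difference dp_lim (MPa)
# measured at several coil currents i (A) with the valve in its
# critical closed state
i = np.array([0.6, 0.8, 1.0, 1.2, 1.4])    # A
dp = np.array([1.9, 3.1, 4.0, 5.2, 6.1])   # MPa

# Least-squares fit of the linear relation dp_lim = k*i + b
k, b = np.polyfit(i, dp, 1)

def dp_limit(current, k=k, b=b):
    """Predict the limited pressure difference (MPa) for a coil current (A)."""
    return k * current + b
```

Once calibrated, such a relation lets a controller command a target pressure difference directly through the coil current, which is the control benefit the abstract points to.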
Bluemlein, Johannes
2012-05-15
Precision measurements together with exact theoretical calculations have led to steady progress in fundamental physics. A brief survey is given of recent developments and current achievements in the field of perturbative precision calculations in the Standard Model of elementary particles and their application in current high-energy collider data analyses.
Shen, C.; Wu, C.; Gallet, S.; Cheng, H.; Edwards, R.; Hsieh, Y.; Lin, K.
2008-12-01
Contemporary multicollector inductively coupled plasma mass spectrometry (MC-ICP-MS) with discrete-dynode secondary electron multipliers (SEMs) can offer U-Th isotopic determinations with subpermil to permil-level precision on femtogram quantities. However, accurate isotopic measurement requires a full understanding of SEM mass and intensity biases. In addition to the dead-time effect, Richter et al. (2001, Int. J. Mass Spectrom., 206, 105-127) reported a nonlinearity on SEMs produced by ETP and MasCom for count rates > 20 thousand counts per second (cps). We evaluated the possible biases for ion beams of 500-1,600,000 cps on a recent MasCom SEM, SEV TE-Z/17, which has a more effective ion-optical acceptance area (>50%) and better peak shape than previous models, used in a Thermo Fisher NEPTUNE MC-ICP-MS. With the retarding potential quadrupole lens (RPQ) turned off, the ion beam intensity is biased only by the dead-time effect, which can be precisely corrected online or offline. With the RPQ on, two additional biases are observed: an exponential-like increase of ion beam intensity from 100 to 100,000 cps, and an apparent dead-time effect (-2 to 2 ns) at high count rates. They are likely caused by slightly defocused ions, with a wide kinetic energy spread of ~5 eV (10 times worse than with a thermal source), passing through the RPQ lens to the SEM, which is installed behind the focal plane. Fortunately, the two biases, which are stable during daily measurements with the same settings of the inlet system, source lenses, zoom optics, and RPQ, can be corrected effectively offline to yield accurate U-Th isotopic measurements.
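The dead-time correction referred to above is, in its standard non-paralyzable form, a one-line formula: the true rate n follows from the observed rate m as n = m / (1 - m·τ). The 20 ns dead time used below is an illustrative value, not this instrument's specification.

```python
def deadtime_correct(m_cps, tau_s):
    """Non-paralyzable dead-time correction.

    m_cps : observed count rate (counts per second)
    tau_s : detector dead time (seconds)
    Returns the estimated true count rate n = m / (1 - m*tau).
    """
    return m_cps / (1.0 - m_cps * tau_s)

# At 1.6 Mcps with an assumed 20 ns dead time the correction is ~3%
n = deadtime_correct(1.6e6, 20e-9)
```

At the low end of the quoted range (hundreds of cps) the correction is negligible, which is why the RPQ-induced biases, not dead time, dominate there.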
Tiunov, V. V.
2018-02-01
The report provides results of research related to the application of tubular linear induction motors. The motors' design features, a calculation model, and a description of test specimens for the mining and electric power industries are introduced. Particular attention is given to single-phase motors for high-voltage switch drives, with inexpensive standard single-phase transformers used for the motors' power supply. The method of determining the motor's parameters when the motor is fed from a transformer working in the overload mode is described, and the results of its practical use were accurate enough for engineering practice.
Kontorovich, V.M.; Kochanov, A.E.
1980-01-01
It is demonstrated that in the case of hard injection of relativistic electrons accompanied by the joint action of synchrotron (Compton) losses and energy-dependent spatial diffusion, a spectrum with 'breaks' is formed, containing universal (with index γ = 2) and diffusion regions, both independent of the injection spectrum. The effect of the non-linearity of the electron spectrum is considered in averaged electromagnetic spectra for various source geometries (sphere, disk, arm). It is shown that a universal region (with index α = 0.5) can occur in the radiation spectrum.
Celina Franco Bragança Rosa Claudio
2008-01-01
The basic subject of the research, as the selected title indicates, concerns linear structures and the strips of land required for their implantation, such as the infrastructure lines of highways and other transport modes, as well as the energy infrastructure realized through pipelines and transmission lines. The parts of the research selected for the thesis refer to the modules to be developed, namely: First module: deals with the background of the research project and the nature of the...
M. De la Sen
2009-01-01
This paper investigates the relations between the particular eigensolutions of a limiting functional differential equation of any order, which is the nominal (unperturbed) linear autonomous differential equation, and the associated ones of the corresponding perturbed functional differential equation. Both differential equations involve point and distributed delayed dynamics, including Volterra-class dynamics. The proofs are based on a Perron-type theorem for functional equations, so that the comparison is governed by the real part of a dominant zero of the characteristic equation of the nominal differential equation. The obtained results are also applied to investigate the global stability of the perturbed equation based on that of its corresponding limiting equation.
Mathieu Omet
2014-07-01
We report the successful demonstration of an ILC-like high-gradient near-quench-limit operation at the Superconducting RF Test Facility at the High Energy Accelerator Research Organization (KEK) in Japan. Preparation procedures necessary for accelerator operation were conducted, such as rf phase calibration, beam-based gradient calibration, and automated beam compensation. Test runs were performed successfully for nominal operation, high loaded-Q (Q_L) operation, and automated P_k-Q_L operation. The results are described in terms of the achieved precision and the stabilities of gradients and phases.
Precision digital control systems
Vyskub, V. G.; Rozov, B. S.; Savelev, V. I.
This book is concerned with the characteristics of digital control systems of great accuracy. A classification of such systems is considered along with aspects of stabilization, programmable control applications, digital tracking systems and servomechanisms, and precision systems for the control of a scanning laser beam. Other topics explored are related to systems of proportional control, linear devices and methods for increasing precision, approaches for further decreasing the response time in the case of high-speed operation, possibilities for the implementation of a logical control law, and methods for the study of precision digital control systems. A description is presented of precision automatic control systems which make use of electronic computers, taking into account the existing possibilities for an employment of computers in automatic control systems, approaches and studies required for including a computer in such control systems, and an analysis of the structure of automatic control systems with computers. Attention is also given to functional blocks in the considered systems.
Pan, Guangming; Wang, Shaochen; Zhou, Wang
2017-10-01
In this paper, we consider the asymptotic behavior of X_{f_n}(n) := ∑_{i=1}^{n} f_n(x_i), where x_i, i = 1, …, n form orthogonal polynomial ensembles and f_n is a real-valued, bounded measurable function. Under the condition that Var X_{f_n}(n) → ∞, the Berry-Esseen (BE) bound and a Cramér-type moderate deviation principle (MDP) for X_{f_n}(n) are obtained using the method of cumulants. As two applications, we establish the BE bound and the Cramér-type MDP for linear spectral statistics of the Wigner matrix and the sample covariance matrix in the complex cases. These results show that in the edge case (which means f_n has the particular form f(x) I(x ≥ θ_n), where θ_n is close to the right edge of the equilibrium measure and f is a smooth function), X_{f_n}(n) behaves like the eigenvalue counting function of the corresponding Wigner matrix and sample covariance matrix, respectively.
Linearly Refined Session Types
Pedro Baltazar
2012-11-01
Session types capture precise protocol structure in concurrent programming, but do not specify properties of the exchanged values beyond their basic type. Refinement types are a form of dependent types that can address this limitation, combining types with logical formulae that may refer to program values and can constrain types using arbitrary predicates. We present a pi calculus with assume and assert operations, typed using a session discipline that incorporates refinement formulae written in a fragment of Multiplicative Linear Logic. Our original combination of session and refinement types, together with the well established benefits of linearity, allows very fine-grained specifications of communication protocols in which refinement formulae are treated as logical resources rather than persistent truths.
Maruthai Suresh
2010-10-01
A nonlinear process, the heat exchanger, whose parameters vary with respect to the process variable, is considered. The time constant and gain of the chosen process vary as functions of temperature. The limitations of the conventional feedback controller, tuned using Ziegler-Nichols settings for the chosen process, are brought out. The servo and regulatory responses, obtained through simulation and experimentation for various magnitudes of set-point changes and load changes at various operating points with the controller tuned only at a chosen nominal operating point, are analyzed. Regulatory responses for output load changes are studied. The efficiency of the feedforward controller and the effects of modeling error are brought out. An IMC-based system is presented to understand clearly how variations of system parameters affect the performance of the controller. The present work illustrates the effectiveness of the feedforward and IMC controllers.
Alcaraz, J.
2001-01-01
After several years of study, e+e- linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e+e- linear collider in the 500 GeV-1 TeV range will be able to complement it to an unprecedented level of precision in all possible areas: Higgs physics, signals beyond the SM, and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model.
Lew, Matthew D; von Diezmann, Alexander R S; Moerner, W E
2013-02-25
Automated processing of double-helix (DH) microscope images of single molecules (SMs) streamlines the protocol required to obtain super-resolved three-dimensional (3D) reconstructions of ultrastructures in biological samples by single-molecule active control microscopy. Here, we present a suite of MATLAB subroutines, bundled with an easy-to-use graphical user interface (GUI), that facilitates 3D localization of single emitters (e.g., SMs, fluorescent beads, or quantum dots) with precisions of tens of nanometers in multi-frame movies acquired using a wide-field DH epifluorescence microscope. The algorithmic approach is based upon template matching for SM recognition and least-squares fitting for 3D position measurement, both of which are computationally expedient and precise. Overlapping images of SMs are ignored, and the precision of least-squares fitting is not as high as that of maximum-likelihood-based methods. However, once calibrated, the algorithm can fit 15-30 molecules per second on a 3 GHz Intel Core 2 Duo workstation, thereby producing a 3D super-resolution reconstruction of 100,000 molecules over a 20×20×2 μm field of view (processing 128×128 pixels × 20000 frames) in 75 min.
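The least-squares position fitting described above can be illustrated in one dimension: for a noiseless Gaussian spot, the logarithm of the intensity profile is a parabola, so the center can be recovered by linear least squares. The actual DH-PSF fitting is two-dimensional and nonlinear; this is only a toy sketch.

```python
import numpy as np

# Noiseless 1D Gaussian "spot" sampled on a pixel grid
px = np.arange(0, 15.0)                    # pixel coordinates
x0_true, sigma, amp = 7.3, 1.8, 1000.0
counts = amp * np.exp(-0.5 * ((px - x0_true) / sigma) ** 2)

# log(counts) is exactly quadratic in x, so a degree-2 polynomial fit
# recovers the center from the parabola's vertex: x0 = -c1 / (2*c2)
c2, c1, c0 = np.polyfit(px, np.log(counts), 2)
x0_est = -c1 / (2 * c2)
```

With shot noise, weighting or a nonlinear fit becomes important, since the log transform amplifies noise in dim pixels; that is one reason full fitting pipelines prefer nonlinear least squares or maximum likelihood.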
Precision electron polarimetry
Chudakov, E.
2013-01-01
A new generation of precise parity-violating experiments will require sub-percent accuracy in electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV are discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.
McKew, Boyd A; Metodieva, Gergana; Raines, Christine A; Metodiev, Metodi V; Geider, Richard J
2015-10-01
Limitation of marine primary production by the availability of nitrogen or phosphorus is common. Emiliania huxleyi, a ubiquitous phytoplankter that plays key roles in primary production, calcium carbonate precipitation and production of dimethyl sulfide, often blooms in mid-latitude at the beginning of summer when inorganic nutrient concentrations are low. To understand physiological mechanisms that allow such blooms, we examined how the proteome of E. huxleyi (strain 1516) responds to N and P limitation. We observed modest changes in much of the proteome despite large physiological changes (e.g. cellular biomass, C, N and P) associated with nutrient limitation of growth rate. Acclimation to nutrient limitation did however involve significant increases in the abundance of transporters for ammonium and nitrate under N limitation and for phosphate under P limitation. More notable were large increases in proteins involved in the acquisition of organic forms of N and P, including urea and amino acid/polyamine transporters and numerous C-N hydrolases under N limitation and a large upregulation of alkaline phosphatase under P limitation. This highly targeted reorganization of the proteome towards scavenging organic forms of macronutrients gives unique insight into the molecular mechanisms that underpin how E. huxleyi has found its niche to bloom in surface waters depleted of inorganic nutrients.
Reedy, Robert P.; Crawford, Daniel W.
1984-01-01
A precision translator for focusing a beam of light on the end of a glass fiber, which includes two tuning-fork-like members rigidly connected to each other. These members each have two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. The translator is made of simple parts and retains its adjustment even under rough handling.
Jones, Bernard J. T.
2017-04-01
Preface; Notation and conventions; Part I. 100 Years of Cosmology: 1. Emerging cosmology; 2. The cosmic expansion; 3. The cosmic microwave background; 4. Recent cosmology; Part II. Newtonian Cosmology: 5. Newtonian cosmology; 6. Dark energy cosmological models; 7. The early universe; 8. The inhomogeneous universe; 9. The inflationary universe; Part III. Relativistic Cosmology: 10. Minkowski space; 11. The energy momentum tensor; 12. General relativity; 13. Space-time geometry and calculus; 14. The Einstein field equations; 15. Solutions of the Einstein equations; 16. The Robertson-Walker solution; 17. Congruences, curvature and Raychaudhuri; 18. Observing and measuring the universe; Part IV. The Physics of Matter and Radiation: 19. Physics of the CMB radiation; 20. Recombination of the primeval plasma; 21. CMB polarisation; 22. CMB anisotropy; Part V. Precision Tools for Precision Cosmology: 23. Likelihood; 24. Frequentist hypothesis testing; 25. Statistical inference: Bayesian; 26. CMB data processing; 27. Parametrising the universe; 28. Precision cosmology; 29. Epilogue; Appendix A. SI, CGS and Planck units; Appendix B. Magnitudes and distances; Appendix C. Representing vectors and tensors; Appendix D. The electromagnetic field; Appendix E. Statistical distributions; Appendix F. Functions on a sphere; Appendix G. Acknowledgements; References; Index.
Jadach, S.; Richter-Was, E.; Ward, B.F.L.; Was, Z.
1991-01-01
Starting from an earlier benchmark analytical calculation of the luminosity process e+e- → e+e-(γ) at the SLAC Linear Collider (SLC) and the CERN e+e- collider LEP, we use the methods of Yennie, Frautschi, and Suura to develop an analytical improved naive exponentiated formula for this process. The formula is compared to our multiple-photon Monte Carlo event generator BHLUMI (1.13) for the same process. We find agreement on the overall cross-section normalization between the exponentiated formula and BHLUMI below the 0.2% level. In this way, we obtain an important cross-check on the normalization of our higher-order results in BHLUMI, and we arrive at formulas which represent the LEP/SLC luminosity process in the below-1% Z0 physics tests of the SU(2)_L × U(1) theory, in complete analogy with the famous high-precision Z0 line-shape formulas for the e+e- → μ+μ- process discussed by Berends et al., for example.
Precision Airdrop (Largage de precision)
2005-12-01
Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V
2018-03-01
Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy-subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC0-t (i.e., 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models correlated 1, 2, or 3 concentration-time points with the AUC0-t of saroglitazar. Only models with regression coefficients (R²) >0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlation between predicted and observed AUC0-t of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time-point models achieved R² > 0.90. Among the various 3-concentration-time-point models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error supported the prediction of saroglitazar exposure. The same models, when applied to the AUC0-t prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error values indicating that the model also predicts the exposure of saroglitazar sulfoxide.
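The screening step above, retaining only limited sampling models with R² > 0.90, can be sketched with synthetic data: AUC is regressed on concentrations at three time points, and R² decides whether the model survives. The coefficients and concentrations below are invented and unrelated to saroglitazar.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 25                                            # training subjects

# Synthetic concentrations at three sampling times (e.g. 0.5, 2, 8 h)
# and a synthetic "true" AUC that depends linearly on them plus noise
C = rng.uniform(1.0, 10.0, size=(n, 3))
auc = C @ np.array([1.2, 3.4, 8.9]) + rng.normal(0.0, 1.0, n)

# Ordinary least squares with an intercept term
X = np.column_stack([np.ones(n), C])
beta, *_ = np.linalg.lstsq(X, auc, rcond=None)

# Coefficient of determination and the screening criterion
pred = X @ beta
r2 = 1 - np.sum((auc - pred) ** 2) / np.sum((auc - auc.mean()) ** 2)
keep_model = r2 > 0.90
```

In the study, validation then proceeds on held-out subjects with mean prediction error, mean absolute prediction error, and root mean square error; the same regression machinery applies unchanged.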
Brooks, Emily K; Tett, Susan E; Isbel, Nicole M; McWhinney, Brett; Staatz, Christine E
2018-04-01
Although multiple linear regression-based limited sampling strategies (LSSs) have been published for enteric-coated mycophenolate sodium, none have been evaluated for the prediction of subsequent mycophenolic acid (MPA) exposure. This study aimed to examine the predictive performance of the published LSSs for the estimation of future MPA area under the concentration-time curve from 0 to 12 hours (AUC0-12) in renal transplant recipients. Total MPA plasma concentrations were measured in 20 adult renal transplant patients on 2 occasions a week apart. All subjects received concomitant tacrolimus and were approximately 1 month after transplant. Samples were taken at 0, 0.33, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 6, and 8 hours and 0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 3, 4, 6, 9, and 12 hours after dose on the first and second sampling occasion, respectively. Predicted MPA AUC0-12 was calculated using 19 published LSSs and data from the first or second sampling occasion for each patient and compared with the second-occasion full MPA AUC0-12 calculated using the linear trapezoidal rule. Bias (median percentage prediction error) and imprecision (median absolute prediction error) were determined. Accurate prediction of the full MPA AUC0-12 with a multiple linear regression-based LSS was not possible without concentrations up to at least 8 hours after the dose.
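The reference exposure above is computed with the linear trapezoidal rule, which sums the areas of the trapezoids between successive concentration-time points. The concentration-time values below are made up for illustration; they are not MPA data from the study.

```python
import numpy as np

# Hypothetical concentration-time profile over a 12-hour dosing interval
t = np.array([0, 0.25, 0.5, 1, 2, 3, 4, 6, 9, 12], dtype=float)    # h
c = np.array([0.2, 3.5, 8.0, 6.1, 3.9, 2.7, 2.0, 1.3, 0.8, 0.5])   # mg/L

# Linear trapezoidal rule: AUC = sum of (c_i + c_{i+1})/2 * (t_{i+1} - t_i)
auc_0_12 = np.sum((c[1:] + c[:-1]) / 2 * np.diff(t))                # mg*h/L
```

A limited sampling strategy replaces this full profile with a regression on a few of these time points, which is exactly what the study evaluates against the trapezoidal reference.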
Tetsu, Hiroyuki; Nakamoto, Taishi, E-mail: h.tetsu@geo.titech.ac.jp [Earth and Planetary Sciences, Tokyo Institute of Technology, Tokyo 152-8551 (Japan)
2016-03-15
Radiation is an important process of energy transport, a force, and a basis for synthetic observations, so radiation hydrodynamics (RHD) calculations have occupied an important place in astrophysics. However, although the progress in computational technology is remarkable, their high numerical cost is still a persistent problem. In this work, we compare the following schemes used to solve the nonlinear simultaneous equations of an RHD algorithm with the flux-limited diffusion approximation: the Newton–Raphson (NR) method, operator splitting, and linearization (LIN), from the perspective of the computational cost involved. For operator splitting, in addition to the traditional simple operator splitting (SOS) scheme, we examined the scheme developed by Douglas and Rachford (DROS). We solve three test problems (the thermal relaxation mode, the relaxation and the propagation of linear waves, and radiating shock) using these schemes and then compare their dependence on the time step size. As a result, we find the conditions of the time step size necessary for adopting each scheme. The LIN scheme is superior to other schemes if the ratio of radiation pressure to gas pressure is sufficiently low. On the other hand, DROS can be the most efficient scheme if the ratio is high. Although the NR scheme can be adopted independently of the regime, especially in a problem that involves optically thin regions, the convergence tends to be worse. In all cases, SOS is not practical.
Zhou, Jian; Li, Xi; Yang, Linlin; Yan, Songlin; Wang, Mengmeng; Cheng, Dan; Chen, Qi; Dong, Yulin; Liu, Peng; Cai, Weiquan; Zhang, Chaocan
2015-01-01
A novel electrochemical sensor based on Cu-MOF-199 [Cu3(BTC)2, BTC = 1,3,5-benzenetricarboxylic acid] and single-walled carbon nanotubes (SWCNTs) was fabricated for the simultaneous determination of hydroquinone (HQ) and catechol (CT). The modification procedure was carried out by casting SWCNTs on a bare glassy carbon electrode (GCE), followed by the electrodeposition of Cu-MOF-199 on the SWCNT-modified electrode. Cyclic voltammetry (CV), electrochemical impedance spectroscopy (EIS), and scanning electron microscopy (SEM) were performed to characterize the electrochemical performance and surface characteristics of the as-prepared sensor. The composite electrode exhibited excellent electrocatalytic activity, with increased electrochemical signals towards the oxidation of HQ and CT, owing to the synergistic effect of SWCNTs and Cu-MOF-199. Under optimized conditions, the linear response ranges were 0.1-1453 μmol L⁻¹ (R_HQ = 0.9999) for HQ and 0.1-1150 μmol L⁻¹ (R_CT = 0.9990) for CT. The detection limits for HQ and CT were as low as 0.08 and 0.1 μmol L⁻¹, respectively. Moreover, the modified electrode presented good reproducibility and excellent anti-interference performance. The analytical performance of the developed sensor for the simultaneous detection of HQ and CT was evaluated in practical samples with satisfying results.
Wang, Cheng; Guan, Wei; Wang, J. Y.; Zhong, Bineng; Lai, Xiongming; Chen, Yewang; Xiang, Liang
2018-02-01
To adaptively identify transient modal parameters of linear, weakly damped structures with slowly time-varying characteristics under unmeasured stationary random ambient loads, this paper proposes a novel operational modal analysis (OMA) method based on the frozen-in coefficient method and limited-memory recursive principal component analysis (LMRPCA). In modal coordinates, the random vibration response signals of weakly damped mechanical structures can be decomposed into the inner product of modal shapes and modal responses, from which the natural frequencies and damping ratios can be acquired by a single-degree-of-freedom (SDOF) identification approach such as the FFT. Hence, for an OMA method based on principal component analysis (PCA), it becomes crucial to examine the relation between the transformation matrix and the modal shape matrix, to find the association between the principal components (PCs) matrix and the modal response matrix, and thereby to turn the operational modal parameter identification problem into PCA of the stationary random vibration response signals of weakly damped mechanical structures. Based on the theory of "time freezing", the frozen-in coefficient method, and the assumptions of "short-time invariance" and "quasi-stationarity", the non-stationary random response signals of weakly damped, slowly linear time-varying (LTV) structures can be approximated, over a short interval, as the stationary random response time series of weakly damped linear time-invariant (LTI) structures. Thus, the adaptive identification of time-varying operational modal parameters reduces to decomposing the PCs of successive segments of the stationary random vibration response signals of weakly damped mechanical structures, after choosing an appropriate limited memory window. Finally, a three-degree-of-freedom (DOF) structure with weakly damped, slowly time-varying mass is presented to illustrate this identification method. Results show that the LMRPCA
Barlas, İbrahim Ömer; Sezgin, Orhan; Dandara, Collet; Türköz, Gözde; Yengel, Emre; Cindi, Zinhle; Ankaralı, Handan; Şardaş, Semra
2016-10-01
communities in Mersin and the Eastern Mediterranean region. This study can serve as a catalyst to invest in research in Syrian populations currently living in the Eastern Mediterranean. The findings have salience for rapid and rational regulatory decision-making for worldwide precision medicine and, specifically, "pharmacogenovigilance-guided bridging of pharmacokinetics" across world populations in the current era of planetary scale migration.
Forbrich, Jan [University of Vienna, Department of Astrophysics, Türkenschanzstr. 17, A-1180 Vienna (Austria); Dupuy, Trent J.; Rizzuto, Aaron; Mann, Andrew W.; Kraus, Adam L. [The University of Texas at Austin, Department of Astronomy, 2515 Speedway C1400, Austin, TX 78712 (United States); Reid, Mark J.; Berger, Edo [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Liu, Michael C.; Aller, Kimberly [Institute for Astronomy, University of Hawai’i, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States)
2016-08-10
We present multi-epoch astrometric radio observations with the Very Long Baseline Array (VLBA) of the young ultracool-dwarf binary LSPM J1314+1320AB. The radio emission comes from the secondary star. Combining the VLBA data with Keck near-infrared adaptive-optics observations of both components, a full astrometric fit of parallax (π_abs = 57.975 ± 0.045 mas, corresponding to a distance of d = 17.249 ± 0.013 pc), proper motion (μ_α cos δ = −247.99 ± 0.10 mas yr⁻¹, μ_δ = −183.58 ± 0.22 mas yr⁻¹), and orbital motion is obtained. Despite the fact that the two components have nearly identical masses (equal to within ±2%), the secondary's radio emission exceeds that of the primary by a factor of ≳30, suggesting a difference in stellar rotation history, which could result in different magnetic field configurations. Alternatively, the emission could be anisotropic and beamed toward us for the secondary but not for the primary. Using only reflex motion, we exclude planets of mass 0.7–10 M_Jup with orbital periods of 600–10 days, respectively. Additionally, we use the full orbital solution of the binary to derive an upper limit of 0.23 au on the semimajor axis of stable planetary orbits within this system. These limits cover a parameter space that is inaccessible with, and complementary to, near-infrared radial velocity surveys of ultracool dwarfs. Our absolute astrometry will constitute an important test for the astrometric calibration of Gaia.
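The quoted distance follows directly from the parallax via d[pc] = 1/π[arcsec]. A quick consistency check of the abstract's own numbers (57.975 mas → 17.249 pc):

```python
# Parallax-to-distance sanity check using the values quoted in the abstract.
parallax_mas = 57.975                  # absolute parallax, milliarcseconds
d_pc = 1.0 / (parallax_mas * 1e-3)     # mas -> arcsec, then d[pc] = 1/pi
# d_pc comes out to about 17.249 pc, matching the quoted distance
```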
Precision synchrotron radiation detectors
Levi, M.; Rouse, F.; Butler, J.
1989-03-01
Precision detectors to measure synchrotron radiation beam positions have been designed and installed as part of beam energy spectrometers at the Stanford Linear Collider (SLC). The distance between pairs of synchrotron radiation beams is measured absolutely to better than 28 μm on a pulse-to-pulse basis. This contributes less than 5 MeV to the error in the measurement of SLC beam energies (approximately 50 GeV). A system of high-resolution video cameras viewing precisely aligned fiducial wire arrays overlaying phosphorescent screens has achieved this accuracy. In addition, detectors of synchrotron radiation using the charge developed by the ejection of Compton-recoil electrons from an array of fine wires are being developed.
Experimental Approaches at Linear Colliders
Jaros, John A
2002-01-01
Precision measurements have played a vital role in our understanding of elementary particle physics, and experiments performed using e⁺e⁻ collisions have contributed an essential part. Recently, the precision measurements at LEP and SLC have probed the standard model at the quantum level and severely constrained the mass of the Higgs boson [1]. Coupled with the limits on the Higgs mass from direct searches [2], this constrains the mass to the range 115-205 GeV. Developments in accelerator R&D have matured to the point where one could contemplate construction of a linear collider with initial energy in the 500 GeV range and a credible upgrade path to ∼1 TeV. Now is therefore the right time to critically evaluate the case for such a facility.
STANFORD (SLAC): Precision electroweak result
Anon.
1994-01-01
Precision testing of the electroweak sector of the Standard Model has intensified with the recent publication of results from the SLD collaboration's 1993 run on the Stanford Linear Collider (SLC). Using a highly polarized electron beam colliding with an unpolarized positron beam, SLD physicists measured the left-right asymmetry at the Z boson resonance with dramatically improved accuracy over 1992.
Bauer, A.
2006-09-25
The standard model of elementary particle physics (SM) is perhaps the most significant theory in physics. It describes the interacting matter and gauge fields to high precision. Nevertheless, there are a few requirements the SM does not fulfil, for example the incorporation of gravity and of neutrino oscillations, among other open questions. On the way to a more comprehensive theory, one can make use of an effective power-series ansatz that describes SM physics as well as new phenomena. We exploit this ansatz to parameterize new effects with the help of a new mass scale and a set of new coupling constants. At lowest order, one retrieves the SM; higher-order effects describe the new physics. Requiring certain properties under symmetry transformations yields a proper number of effective operators of mass dimension six. These operators are the starting point of our considerations. First, we calculate decay rates and cross sections, respectively, for selected processes under the assumption that only one new operator contributes at a time. Assuming that the observable's additional contribution is smaller than the experimental error, we give upper limits on the new coupling constant as a function of the new mass scale. For this purpose we use leptonic and certain semileptonic precision data. On the one hand, the results presented in this thesis give physicists the opportunity to decide which experiments are good candidates for increasing precision; on the other hand, they show which experiment has the most promising potential for discoveries. (orig.)
EDITORIAL: Precision proteins
Demming, Anna
2010-06-01
large molecular weight, net negative charge and hydrophilicity of synthetic small interfering RNAs make it hard for the molecules to cross the plasma membrane and enter the cell cytoplasm. Immune responses can also diminish the effectiveness of this approach. In this issue, Shiri Weinstein and Dan Peer from Tel Aviv University provide an overview of the challenges and recent progress in the use of nanocarriers for delivering RNAi effector molecules into target tissues and cells more effectively [5]. Also in this issue, researchers in Korea report new results that demonstrate the potential of nanostructures in neural network engineering [6]. Min Jee Jang et al report directional growth of neurites along linear carbon nanotube patterns, demonstrating great progress in neural engineering and the scope for using nanotechnology to treat neural diseases. Modern medicine cannot claim to have abolished the pain and suffering that accompany disease. But a comparison between the ghastly and often ineffective iron implements of early medicine and the smart gadgets and treatments used in hospitals today speaks volumes for the extraordinary progress that has been made, and the motivation behind this research. References [1] Wallis F 2000 Signs and senses: diagnosis and prognosis in early medieval pulse and urine texts Soc. Hist. Med. 13 265-78 [2] Arntz Y, Seelig J D, Lang H P, Zhang J, Hunziker P, Ramseyer J P, Meyer E, Hegner M and Gerber Ch 2003 Label-free protein assay based on a nanomechanical cantilever array Nanotechnology 14 86-90 [3] Gowtham S, Scheicher R H, Pandey R, Karna S P and Ahuja R 2008 First-principles study of physisorption of nucleic acid bases on small-diameter carbon nanotubes Nanotechnology 19 125701 [4] Wang H-N and Vo-Dinh T 2009 Multiplex detection of breast cancer biomarkers using plasmonic molecular sentinel nanoprobes Nanotechnology 20 065101 [5] Weinstein S and Peer D 2010 RNAi nanomedicines: challenges and opportunities within the immune system
Surface characterization protocol for precision aspheric optics
Sarepaka, RamaGopal V.; Sakthibalan, Siva; Doodala, Somaiah; Panwar, Rakesh S.; Kotaria, Rajendra
2017-10-01
In advanced optical instrumentation, aspherics provide an effective performance alternative. Aspheric fabrication and surface metrology, followed by aspheric design, are complementary iterative processes for precision aspheric development. As in fabrication, a holistic approach to aspheric surface characterization is adopted to evaluate the actual surface error and to deliver aspheric optics with the desired surface quality. Precision optical surfaces are characterized by profilometry or by interferometry. Aspheric profiles are characterized by contact profilometers, through linear surface scans, to analyze their form, figure and finish errors. One must ensure that the surface characterization procedure does not add to the resident profile errors (generated during aspheric surface fabrication). This presentation examines the errors introduced after surface generation and during profilometry of aspheric profiles, with the aim of identifying sources of error and optimizing the metrology process. The sources of error during profilometry may include: profilometer settings, work-piece placement on the profilometer stage, selection of zenith/nadir points of aspheric profiles, metrology protocols, clear-aperture diameter analysis, computational limitations of the profiler, software issues, etc. At OPTICA, a PGI 1200 FTS contact profilometer (Taylor Hobson) is used for this study. Precision optics of various profiles are studied, with due attention to possible sources of error during characterization, using a multi-directional scan approach for uniformity and repeatability of error estimation. This study provides insight into aspheric surface characterization and helps establish an optimal aspheric surface production methodology.
Precision measurements in supersymmetry
Feng, Johnathan Lee [Stanford Univ., CA (United States)
1995-05-01
Supersymmetry is a promising framework in which to explore extensions of the standard model. If candidates for supersymmetric particles are found, precision measurements of their properties will then be of paramount importance. The prospects for such measurements and their implications are the subject of this thesis. If charginos are produced at the LEP II collider, they are likely to be one of the few available supersymmetric signals for many years. The author considers the possibility of determining fundamental supersymmetry parameters in such a scenario. The study is complicated by the dependence of observables on a large number of these parameters. He proposes a straightforward procedure for disentangling these dependences and demonstrates its effectiveness by presenting a number of case studies at representative points in parameter space. In addition to determining the properties of supersymmetric particles, precision measurements may also be used to establish that newly discovered particles are, in fact, supersymmetric. Supersymmetry predicts quantitative relations among the couplings and masses of superparticles. The author discusses tests of such relations at a future e⁺e⁻ linear collider, using measurements that exploit the availability of polarizable beams. Stringent tests of supersymmetry from chargino production are demonstrated in two representative cases, and fermion and neutralino processes are also discussed.
Feedback Systems for Linear Colliders
1999-01-01
Feedback systems are essential for stable operation of a linear collider, providing a cost-effective method for relaxing tight tolerances. In the Stanford Linear Collider (SLC), feedback controls beam parameters such as trajectory, energy, and intensity throughout the accelerator. A novel dithering optimization system which adjusts final focus parameters to maximize luminosity contributed to achieving record performance in the 1997-98 run. Performance limitations of the steering feedback have been investigated, and improvements have been made. For the Next Linear Collider (NLC), extensive feedback systems are planned as an integral part of the design. Feedback requirements for JLC (the Japanese Linear Collider) are essentially identical to NLC; some of the TESLA requirements are similar but there are significant differences. For NLC, algorithms which incorporate improvements upon the SLC implementation are being prototyped. Specialized systems for the damping rings, rf and interaction point will operate at high bandwidth and fast response. To correct for the motion of individual bunches within a train, both feedforward and feedback systems are planned. SLC experience has shown that feedback systems are an invaluable operational tool for decoupling systems, allowing precision tuning, and providing pulse-to-pulse diagnostics. Feedback systems for the NLC will incorporate the key SLC features and the benefits of advancing technologies
I. K. Badalakha
2009-02-01
The article presents the solution of the stress-strain-state problem for an elastic half-space under a load uniformly distributed along a line, using a non-traditional linear dependence of strains on the stress state that differs from the generalized Hooke's law.
Liu, Qingshan; Wang, Jun
2011-04-01
This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
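The least-absolute-deviation objective targeted above is a piecewise-linear function of the decision variables. A tiny self-contained illustration (not the paper's recurrent network; just the objective it minimizes): for a one-parameter fit, the sum of absolute deviations is minimized by the median of the data.

```python
# Piecewise-linear L1 objective: sum(|x_i - c|) over the data, as a
# function of a single fit parameter c.  Its minimizer is the median.
import statistics

def l1_objective(c, data):
    return sum(abs(x - c) for x in data)

data = [1.0, 2.0, 2.5, 7.0, 9.0]
median = statistics.median(data)                    # 2.5 for this data
# Compare the median against every candidate on a coarse grid [0, 10]:
best_grid = min(l1_objective(c / 10, data) for c in range(0, 101))
# l1_objective(median, data) is never worse than best_grid
```

The same objective extends to the constrained least-absolute-deviation regression problem the paper applies its network to; the piecewise linearity is what makes finite-time convergence results possible.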
Shilov, Georgi E
1977-01-01
Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.
Gorringe, T. P.; Hertzog, D. W.
2015-09-01
The muon plays a unique role in sub-atomic physics. Studies of muon decay both determine the overall strength and establish the chiral structure of weak interactions, as well as setting extraordinary limits on charged-lepton-flavor-violating processes. Measurements of the muon's anomalous magnetic moment offer singular sensitivity to the completeness of the standard model and the predictions of many speculative theories. Spectroscopy of muonium and muonic atoms gives unmatched determinations of fundamental quantities including the magnetic moment ratio μ_μ/μ_p, the lepton mass ratio m_μ/m_e, and the proton charge radius r_p. Also, muon capture experiments are exploring elusive features of weak interactions involving nucleons and nuclei. We review the experimental landscape of contemporary high-precision and high-sensitivity experiments with muons. One focus is the novel methods and ingenious techniques that achieve such precision and sensitivity in recent, present, and planned experiments. Another is the uncommonly broad and topical range of questions in atomic, nuclear and particle physics that such experiments explore.
Ye. V. Dmitriev
2007-01-01
The paper analyzes the influence of the over-voltage limiter (OVL) on electromagnetic high-frequency over-voltages during switching of unloaded line sections with isolators, and the possibility of applying a frequency-dependent resistor where it is necessary to ease OVL operating conditions. It is shown that the OVL characteristics according to the IEEE circuit and its modifications must be taken into account in computer modeling of high-frequency over-voltages.
New methods for precision Moeller polarimetry*
Gaskell, D.; Meekins, D.G.; Yan, C.
2007-01-01
Precision electron beam polarimetry is becoming increasingly important as parity violation experiments attempt to probe the frontiers of the standard model. In the few-GeV regime, Moeller polarimetry is well suited to high-precision measurements; however, it is generally limited to use at relatively low beam currents (<10 μA). We present a novel technique that will enable precision Moeller polarimetry at very large currents, up to 100 μA. (orig.)
Suwono.
1978-01-01
A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar; if the input signal is bipolar, the negative portion is filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)
Sasikala, V.; Sajan, D.; Joseph, Lynnette; Narayana, Badiadka; Sarojini, Balladka K.
2017-11-01
Two organic crystals of the isomeric dichloroanilines 3,4-dichloroaniline (3,4-DCA) and 3,5-dichloroaniline (3,5-DCA) were grown by the slow evaporation method and characterized by various analytical techniques. The vibrational normal modes of the samples were theoretically predicted using scaled quantum mechanical force field procedures at the DFT level, and the potential energy distributions of the individual modes were estimated using normal coordinate analysis. Fermi doublets and Evans holes were identified in the vibrational spectra of the samples. The nuclear relaxation contributions to the vibrational polarizabilities and hyperpolarizabilities for the normal modes of the molecules were quantitatively estimated using the DFT method. The calculated NLO responses showed that the vibrational mean contributions to the static polarizabilities and hyperpolarizabilities were smaller than the corresponding electronic contributions for these molecules. The Kurtz and Perry powder SHG efficiencies were measured, and both samples generated second harmonics of the fundamentals. The open-aperture Z-scan results indicate the superior optical limiting property of 3,5-DCA with respect to 3,4-DCA.
Vretenar, M
2014-01-01
The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristic aspects of linac beam dynamics.
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparison with the logic circuit method, and we compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from the pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the generator's algorithm. When a PRNG has n-bit stages (registers or internal states), the necessary computational cost is smaller than O(2^n); the Berlekamp-Massey algorithm, by contrast, needs O(N²), where N (≈ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity, so the linear complexity is generally given as an estimate. The linearization method, on the other hand, calculates from the algorithm of the PRNG, so it can determine the lower bound of the linear complexity.
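For contrast with the linearization method, the Berlekamp-Massey algorithm referenced above computes the linear complexity of a binary sequence directly from its output bits. A compact GF(2) version (a standard textbook formulation, not the authors' code):

```python
# Berlekamp-Massey over GF(2): returns the length L of the shortest LFSR
# that generates the binary sequence s (a list of 0/1 values).
def berlekamp_massey(s):
    n = len(s)
    c = [0] * n          # current connection polynomial (c[0] = 1)
    b = [0] * n          # previous connection polynomial
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between s[i] and the current LFSR's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# Example: one period of the m-sequence from x^3 + x + 1 has complexity 3.
lc = berlekamp_massey([1, 0, 0, 1, 0, 1, 1])
```

Note the quadratic cost in the sequence length, which motivates the paper's point: for a period near 2^n, working from the output sequence is far more expensive than working from the generator's algebraic description.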
Practical precision measurement
Kwak, Ho Chan; Lee, Hui Jun
1999-01-01
This book introduces basic knowledge of precision measurement: measurement of length, precision measurement of minor diameters, measurement of angles, measurement of surface roughness, three-dimensional measurement, measurement of locations and shapes, measurement of screws, gear testing, cutting tool testing, rolling bearing testing, and measurement digitalisation. It covers the height gauge, how to test surface roughness, measurement of plane and straightness, external and internal thread testing, gear tooth measurement, milling cutters, taps, rotation precision measurement, and optical transducers.
[Precision and personalized medicine].
Sipka, Sándor
2016-10-01
The author describes the concept of "personalized medicine" and the newly introduced "precision medicine". "Precision medicine" applies the terms "phenotype", "endotype" and "biomarker" in order to characterize the various diseases more precisely. Using "biomarkers", the homogeneous type of a disease (a "phenotype") can be divided into subgroups, called "endotypes", requiring different forms of treatment and financing. The good results of "precision medicine" have become especially apparent in relation to allergic and autoimmune diseases. The application of this new way of thinking is going to be necessary in Hungary, too, in the near future for participants, controllers and financing boards of healthcare. Orv. Hetil., 2016, 157(44), 1739-1741.
Precision Clock Evaluation Facility
Federal Laboratory Consortium — FUNCTION: Tests and evaluates high-precision atomic clocks for spacecraft, ground, and mobile applications. Supports performance evaluation, environmental testing,...
The minimal linear σ model for the Goldstone Higgs
Feruglio, F.; Gavela, M.B.; Kanshin, K.; Machado, P.A.N.; Rigolin, S.; Saa, S.
2016-01-01
In the context of the minimal SO(5) linear σ-model, a complete renormalizable Lagrangian - including gauge bosons and fermions - is considered, with the symmetry softly broken to SO(4). The scalar sector describes both the electroweak Higgs doublet and the singlet σ. Varying the σ mass allows one to sweep from the regime of perturbative ultraviolet completion to the non-linear regime assumed in models in which the Higgs particle is a low-energy remnant of some strong dynamics. We analyze the phenomenological implications and constraints from precision observables and LHC data. Furthermore, we derive the d ≤ 6 effective Lagrangian in the limit of heavy exotic fermions.
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...
Precision mechatronics based on high-precision measuring and positioning systems and machines
Jäger, Gerd; Manske, Eberhard; Hausotte, Tino; Mastylo, Rostyslav; Dorozhovets, Natalja; Hofmann, Norbert
2007-06-01
Precision mechatronics is defined in the paper as the science and engineering of a new generation of high-precision systems and machines. Nanomeasuring and nanopositioning engineering represent important fields of precision mechatronics, and nanometrology is described as today's limit of precision engineering. The problem of how to design nanopositioning machines with uncertainties as small as possible is discussed. The integration of several optical and tactile nanoprobes makes the 3D nanopositioning machine suitable for various tasks, such as long-range scanning probe microscopy, mask and wafer inspection, nanotribology, nanoindentation, free-form surface measurement, as well as measurement of micro-optics, precision molds, microgears, ring gauges and small holes.
Stoll, R R
1968-01-01
Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understand
Precision machining commercialization
1978-01-01
To accelerate precision machining development so as to realize more of the potential savings within the next few years of known Department of Defense (DOD) part procurement, the Air Force Materials Laboratory (AFML) is sponsoring the Precision Machining Commercialization Project (PMC). PMC is part of the Tri-Service Precision Machine Tool Program of the DOD Manufacturing Technology Five-Year Plan. The technical resources supporting PMC are provided under sponsorship of the Department of Energy (DOE). The goal of PMC is to minimize precision machining development time and cost risk for interested vendors. PMC will do this by making available the high precision machining technology as developed in two DOE contractor facilities, the Lawrence Livermore Laboratory of the University of California and the Union Carbide Corporation, Nuclear Division, Y-12 Plant, at Oak Ridge, Tennessee
Lucatero, M.A.; Hernandez L, H.
2003-01-01
The linear heat generation rates (LHGR) for a generic BWR fuel rod that violate the thermomechanical limit on circumferential plastic deformation of the cladding (can), under nominal steady-state operation, are calculated as a function of burnup. The LHGR is evaluated as a function of fuel burnup under the condition that the circumferential plastic deformation of the cladding exceeds the 1% thermomechanical operating limit by 0.1%. The results are compared with the linear operating heat generation rates as a function of burnup for this type of fuel rod. The calculations are carried out with the FEMAXI-V and RODBURN codes. The results show that for burnups between 0 and 16,000 MWd/tU there is a minimum margin of 160.8 W/cm between the peak operating LHGR (439.6 W/cm) for the given fuel and the maximum (calculated) LHGR at which the cladding reaches 1.1% circumferential plastic deformation, for a power peaking factor of 1.40. For burnups of 20,000 MWd/tU and 60,000 MWd/tU the margins are 150.3 and 298.6 W/cm, respectively. (Author)
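The quoted margins are simple differences between the deformation-limit LHGR and the operating peak. A quick check of the low-burnup case, using only the figures given in the abstract:

```python
# Margin arithmetic from the abstract's low-burnup numbers: an operating
# peak LHGR of 439.6 W/cm plus a 160.8 W/cm margin implies a
# deformation-limit LHGR of about 600.4 W/cm.
lhgr_operating_peak = 439.6    # W/cm, peak operating LHGR for the given fuel
margin = 160.8                 # W/cm, minimum margin for 0-16,000 MWd/tU
lhgr_limit = lhgr_operating_peak + margin   # implied limit, ~600.4 W/cm
```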
[Progress in precision medicine: a scientific perspective].
Wang, B; Li, L M
2017-01-10
Precision medicine is a new strategy for disease prevention and treatment that takes into account differences in genetics, environment and lifestyle among individuals and makes precise disease classification and diagnosis, so as to provide patients with personalized, targeted prevention and treatment. Large-scale population cohort studies are fundamental for precision medicine research and can produce the best evidence for precision medicine practice. Current criticisms of precision medicine mainly focus on the very small proportion of patients who benefit, the neglect of social determinants of health, and the possible waste of limited medical resources. In spite of this, precision medicine remains a most promising research area and may become a health care practice model in the future.
Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.
Liesen, Jörg
2015-01-01
This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...
Berberian, Sterling K
2014-01-01
Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Linear Controller Design: Limits of Performance
1991-01-01
[Excerpt garbled by extraction; from Chapter 5, "Norms of Systems". The recoverable content: the truncated gain converges to ‖h‖ for large T; for a stable system H the output peak satisfies ‖z‖∞ ≤ ‖h‖‖w‖∞, where ‖h‖ is the peak gain, and there is a worst-case signal w for which ‖z‖∞/‖w‖∞ approaches ‖h‖. The peak gain of a system can also be expressed in spectral terms: ‖z‖²rms = ∫|H(ω)|² S_w(ω) dω ≤ sup_ω |H(ω)|² ∫ S_w(ω) dω, so ‖z‖rms ≤ ‖H‖∞ ‖w‖rms.]
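The peak-gain bound discussed above (output peak bounded by the L1 norm of the impulse response times the input peak, with a worst-case input attaining it) can be sketched numerically. The impulse response below is a hypothetical FIR example, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# Impulse response of a hypothetical stable FIR system H.
h = np.array([0.5, -0.3, 0.2, 0.05])

# Peak gain of the system: the L1 norm of its impulse response.
peak_gain = np.sum(np.abs(h))

# Any input with peak ||w||_inf yields output peak <= peak_gain * ||w||_inf.
w = rng.uniform(-1.0, 1.0, size=200)   # ||w||_inf <= 1
z = np.convolve(h, w)
assert np.max(np.abs(z)) <= peak_gain * np.max(np.abs(w)) + 1e-12

# The bound is tight: the input w[k] = sign(h[n-k]) attains it exactly.
w_worst = np.sign(h[::-1])
z_worst = np.convolve(h, w_worst)
assert np.isclose(np.max(np.abs(z_worst)), peak_gain)
```

The worst-case signal simply aligns its signs with the time-reversed impulse response, so the convolution sum becomes the sum of |h[k]|.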
Microhartree precision in density functional theory calculations
Gulans, Andris; Kozhevnikov, Anton; Draxl, Claudia
2018-04-01
To address ultimate precision in density functional theory calculations we employ the full-potential linearized augmented plane-wave + local-orbital (LAPW + lo) method and justify its usage as a benchmark method. LAPW + lo and two completely unrelated numerical approaches, the multiresolution analysis (MRA) and the linear combination of atomic orbitals, yield total energies of atoms with mean deviations of 0.9 and 0.2 μ Ha , respectively. Spectacular agreement with the MRA is reached also for total and atomization energies of the G2-1 set consisting of 55 molecules. With the example of α iron we demonstrate the capability of LAPW + lo to reach μ Ha /atom precision also for periodic systems, which allows also for the distinction between the numerical precision and the accuracy of a given functional.
LEP precision results
Kawamoto, T
2001-01-01
Precision measurements at LEP are reviewed, with main focus on the electroweak measurements and tests of the Standard Model. Constraints placed by the LEP measurements on possible new physics are also discussed.
Description of precision colorimeter
Campos Acosta, Joaquín; Pons Aglio, Alicia; Corróns, Antonio
1987-01-01
Describes the use of a fully automatic, computer-controlled absolute spectroradiometer as a precision colorimeter. The chromaticity coordinates of several types of light sources have been obtained with this measurement system.
Environment-assisted precision measurement
Goldstein, G.; Cappellaro, P.; Maze, J. R.
2011-01-01
We describe a method to enhance the sensitivity of precision measurements that takes advantage of the environment of a quantum sensor to amplify the response of the sensor to weak external perturbations. An individual qubit is used to sense the dynamics of surrounding ancillary qubits, which are in turn affected by the external field to be measured. The resulting sensitivity enhancement is determined by the number of ancillas that are coupled strongly to the sensor qubit; it does not depend on the exact values of the coupling strengths and is resilient to many forms of decoherence. The method achieves nearly Heisenberg-limited precision measurement, using a novel class of entangled states. We discuss specific applications to improve clock sensitivity using trapped ions and magnetic sensing based on electronic spins in diamond.
Laser precision microfabrication
Sugioka, Koji; Pique, Alberto
2010-01-01
Miniaturization and high precision are rapidly becoming a requirement for many industrial processes and products. As a result, there is greater interest in the use of laser microfabrication technology to achieve these goals. This book, composed of 16 chapters, covers all the topics of laser precision processing, from fundamental aspects to industrial applications, for both inorganic and biological materials. It reviews the state of the art of research and technological development in the area of laser processing.
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
Edwards, Harold M
1995-01-01
In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.
High precision spectrophotometric analysis of thorium
Palmieri, H.E.L.
1984-01-01
An accurate and precise determination of thorium is proposed. A precision of about 0.1% is required for the determination of macroquantities of thorium when processed. After an extensive literature search on this subject, spectrophotometric titration was chosen, using disodium ethylenediaminetetraacetate (EDTA) solution and alizarin S as indicator. In order to obtain such precision, a precisely measured amount of 0.025 M EDTA solution was added and the titration was completed with less than 5 ml of 0.0025 M EDTA solution. It is usual to locate the end point graphically, by plotting added titrant versus absorbance. Here the end point was instead located by a non-linear least-squares fit, using the Fletcher and Powell minimization method and a computer program. Besides the equivalence point, other parameters of the titration were determined: the indicator concentration, the absorbance of the metal-indicator complex, and the stability constants of the metal-indicator and metal-EDTA complexes. (Author)
Thorium spectrophotometric analysis with high precision
Palmieri, H.E.L.
1983-06-01
An accurate and precise determination of thorium is proposed. A precision of about 0.1% is required for the determination of macroquantities of processed thorium. After an extensive literature search on this subject, spectrophotometric titration was chosen, using disodium ethylenediaminetetraacetate (EDTA) solution and alizarin S as indicator. In order to obtain such precision, a precisely measured amount of 0.025 M EDTA solution was added and the titration was completed with less than 5 ml of 0.0025 M EDTA solution. It is usual to locate the end point graphically, by plotting added titrant versus absorbance. Here the end point was instead located by a non-linear least-squares fit, using the Fletcher and Powell minimization method and a computer program. (author)
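Both thorium abstracts mention the usual graphical end-point location by plotting added titrant versus absorbance. That graphical method can be sketched by fitting straight lines to the two branches of the titration curve and intersecting them. The data values below are invented for illustration, and this is the simple graphical construction, not the Fletcher–Powell non-linear fit the author uses:

```python
import numpy as np

# Hypothetical titration data: absorbance vs. added EDTA volume (mL).
# Before the end point the metal-indicator complex absorbs strongly and
# absorbance falls as titrant is added; after it, absorbance levels off.
v = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 3.5, 4.0, 4.5, 5.0])
a = np.array([0.90, 0.80, 0.70, 0.60, 0.50, 0.32, 0.30, 0.30, 0.30, 0.30])

# Fit straight lines to the two branches and intersect them to locate
# the end point, as in the graphical procedure described in the abstract.
m1, c1 = np.polyfit(v[:5], a[:5], 1)   # descending branch
m2, c2 = np.polyfit(v[6:], a[6:], 1)   # plateau after the end point
v_end = (c2 - c1) / (m1 - m2)          # volume at the intersection
```

A non-linear fit of the full titration model, as the author does, refines this estimate and additionally yields the indicator concentration and stability constants.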
Mechanics and Physics of Precise Vacuum Mechanisms
Deulin, E. A; Panfilov, Yu V; Nevshupa, R. A
2010-01-01
In this book the Russian expertise in the field of the design of precise vacuum mechanics is summarized. A wide range of physical applications of mechanism design in the electronic, optical-electronic, chemical, and aerospace industries is presented in a comprehensible way. Topics treated include the method of regulating microparticle flows and its determination in vacuum equipment and mechanisms for electronics; precise mechanisms of nanoscale precision based on magnetic and electric rheology; precise harmonic rotary and non-coaxial nut-screw linear-motion vacuum feedthroughs with technical parameters considered the best in the world; elastically deformed vacuum motion feedthroughs without the use of friction couples; and a computer system for predicting vacuum mechanism failures. This English edition incorporates a number of features which should improve its usefulness as a textbook without changing the basic organization or the general philosophy of presentation of the subject matter of the original Russian work. Exper...
Precision pharmacology for Alzheimer's disease.
Hampel, Harald; Vergallo, Andrea; Aguilar, Lisi Flores; Benda, Norbert; Broich, Karl; Cuello, A Claudio; Cummings, Jeffrey; Dubois, Bruno; Federoff, Howard J; Fiandaca, Massimo; Genthon, Remy; Haberkamp, Marion; Karran, Eric; Mapstone, Mark; Perry, George; Schneider, Lon S; Welikovitch, Lindsay A; Woodcock, Janet; Baldacci, Filippo; Lista, Simone
2018-04-01
The complex multifactorial nature of polygenic Alzheimer's disease (AD) presents significant challenges for drug development. AD pathophysiology is progressing in a non-linear dynamic fashion across multiple systems levels - from molecules to organ systems - and through adaptation, to compensation, and decompensation to systems failure. Adaptation and compensation maintain homeostasis: a dynamic equilibrium resulting from the dynamic non-linear interaction between genome, epigenome, and environment. An individual vulnerability to stressors exists on the basis of individual triggers, drivers, and thresholds accounting for the initiation and failure of adaptive and compensatory responses. Consequently, the distinct pattern of AD pathophysiology in space and time must be investigated on the basis of the individual biological makeup. This requires the implementation of systems biology and neurophysiology to facilitate Precision Medicine (PM) and Precision Pharmacology (PP). The regulation of several processes at multiple levels of complexity from gene expression to cellular cycle to tissue repair and system-wide network activation has different time delays (temporal scale) according to the affected systems (spatial scale). The initial failure might originate and occur at every level potentially affecting the whole dynamic interrelated systems within an organism. Unraveling the spatial and temporal dynamics of non-linear pathophysiological mechanisms across the continuum of hierarchical self-organized systems levels and from systems homeostasis to systems failure is key to understand AD. Measuring and, possibly, controlling space- and time-scaled adaptive and compensatory responses occurring during AD will represent a crucial step to achieve the capacity to substantially modify the disease course and progression at the best suitable timepoints, thus counteracting disrupting critical pathophysiological inputs. This approach will provide the conceptual basis for effective
Design of precision position adjustable scoop
Li Zhili; Zhang Kai; Dong Jinping
2014-01-01
In isotope separation technology, the centrifuge method is now the most widely used. The separation performance of centrifuges is greatly influenced by their internal flow field, and the position of the scoops in a centrifuge has a significant influence on that flow field. To obtain better flow field characteristics and find the best scoop position, a position-adjustable scoop system was studied. A micro stage and a linear encoder were used in the system to improve the positioning accuracy of the scoop. Eddy current sensors were used in a position calibration measurement. The measurement results showed that the sensitivity and stability of the positioning system could meet the performance expectations, but the steel wire and pulley used as the driving mechanism limited the control precision. On the basis of this scheme, an ultrasonic motor was then used as the driving mechanism, and experimental results showed that the control accuracy was improved. This scheme lays a foundation for obtaining the internal flow field parameters of the centrifuge and finding the optimal feeding tube position. (authors)
The International Linear Collider
List, Benno
2014-04-01
The International Linear Collider (ILC) is a proposed e+e- linear collider with a centre-of-mass energy of 200-500 GeV, based on superconducting RF cavities. The ILC would be an ideal machine for precision studies of a light Higgs boson and the top quark, and would have a discovery potential for new particles that is complementary to that of LHC. The clean experimental conditions would allow the operation of detectors with extremely good performance; two such detectors, ILD and SiD, are currently being designed. Both make use of novel concepts for tracking and calorimetry. The Japanese High Energy Physics community has recently recommended to build the ILC in Japan.
de Boer, Wim
2015-01-01
The Large Electron Positron Collider (LEP) established the Standard Model (SM) of particle physics with unprecedented precision, including all its radiative corrections. These led to predictions for the masses of the top quark and Higgs boson, which were beautifully confirmed later on. After these precision measurements the Nobel Prize in Physics was awarded in 1999 jointly to 't Hooft and Veltman "for elucidating the quantum structure of electroweak interactions in physics". Another hallmark of the LEP results was the precise measurement of the gauge coupling constants, which excluded unification of the forces within the SM but allowed unification within the supersymmetric extension of the SM. This increased the interest in Supersymmetry (SUSY) and Grand Unified Theories, especially since the SM has no candidate for the elusive dark matter, while Supersymmetry provides an excellent candidate for dark matter. In addition, Supersymmetry removes the quadratic divergencies of the SM and predicts the Hig...
Precision muonium spectroscopy
Jungmann, Klaus P.
2016-01-01
The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In particular ground state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s–2s transition in the hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements fundamental physical interactions, in particular quantum electrodynamics, can also be tested at highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium–antimuonium conversion for testing charged lepton number conservation and searches for possible antigravity of muons and dark matter. (author)
Steentoft, Catharina; Bennett, Eric P; Schjoldager, Katrine Ter-Borch Gram
2014-01-01
Precise and stable gene editing in mammalian cell lines has until recently been hampered by the lack of efficient targeting methods. While different gene silencing strategies have had tremendous impact on many biological fields, they have generally not been applied with wide success in the field of glycobiology, primarily due to their low efficiencies, with resultant failure to impose substantial phenotypic consequences upon the final glycosylation products. Here, we review novel nuclease-based precision genome editing techniques enabling efficient and stable gene editing, including gene disruption by introducing single or double-stranded breaks at a defined genomic sequence. We here compare and contrast the different techniques and summarize their current applications, highlighting cases from the field of glycobiology as well as pointing to future opportunities. The emerging potential of precision gene...
Discriminative Elastic-Net Regularized Linear Regression.
Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen
2017-03-01
In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations to make final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods are available at http://www.yongxu.org/lunwen.html.
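The elastic-net penalty at the core of the ENLR framework combines an L1 and an L2 term. A generic solver for it can be sketched with proximal gradient descent (ISTA) on synthetic data; this is a textbook elastic-net regression, not the authors' ENLR algorithm with relaxed targets and singular-value regularization:

```python
import numpy as np

def elastic_net(X, y, alpha=0.1, rho=0.5, n_iter=5000):
    """Minimize 0.5*||Xw - y||^2 + alpha*(rho*||w||_1 + 0.5*(1-rho)*||w||^2)
    by proximal gradient descent (ISTA): gradient step on the smooth part,
    soft-thresholding for the L1 part."""
    d = X.shape[1]
    w = np.zeros(d)
    # Lipschitz constant of the smooth part fixes a safe step size.
    L = np.linalg.eigvalsh(X.T @ X).max() + alpha * (1 - rho)
    t = 1.0 / L
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) + alpha * (1 - rho) * w
        z = w - t * grad
        w = np.sign(z) * np.maximum(np.abs(z) - t * alpha * rho, 0.0)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
w_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])   # sparse ground truth
y = X @ w_true + 0.01 * rng.normal(size=50)
w_hat = elastic_net(X, y)
```

The L1 part drives small coefficients to zero while the L2 part keeps the solution stable, which is the compactness-plus-robustness trade-off the abstract refers to.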
Karloff, Howard
1991-01-01
To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...
CERN. Geneva. Audiovisual Unit
2006-01-01
For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.
Winther, Johnni
Types in programming languages provide a powerful tool for the programmer to document the code so that a large part of the intent can not only be presented to fellow programmers but also be checked automatically by compilers. The precision with which types model the behavior of programs is crucial to the quality of these automated checks, and in this thesis we present three different improvements to the precision of types in three different aspects of the Java programming language. First we show how to extend the type system in Java with a new type which enables the detection of unintended...
Mixed-Precision Spectral Deferred Correction: Preprint
Grout, Ray W. S.
2015-09-02
Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
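The principle behind this mixed-precision acceleration, doing the expensive repeated work in reduced precision while accumulating residuals and corrections in full precision, can be sketched with classic mixed-precision iterative refinement for a linear system. This illustrates the general idea only, not the SDC algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test system
x_true = rng.normal(size=n)
b = A @ x_true

# Cheap solve in single precision (stands in for the reduced-precision sweeps).
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

# Refinement: residual in double precision, correction solved in single.
for _ in range(5):
    r = b - A @ x
    dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x = x + dx

err = np.max(np.abs(x - x_true))   # reaches double-precision accuracy
```

Even though every solve runs in float32, the double-precision residual steers the iteration to a double-precision-accurate answer, which is the same reason reduced-precision SDC sweeps need not degrade the final solution.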
Dobbs, David E.
2013-01-01
A direct method is given for solving first-order linear recurrences with constant coefficients. The limiting value of that solution is studied as n tends to infinity. This classroom note could serve as enrichment material for the typical introductory course on discrete mathematics that follows a calculus course.
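The standard closed form such a note arrives at can be sketched directly: for x_{n+1} = a·x_n + b with a ≠ 1, x_n = a^n·x_0 + b·(a^n − 1)/(a − 1), which tends to the fixed point b/(1 − a) when |a| < 1. A short check of the formula against direct iteration:

```python
def solve_recurrence(a, b, x0, n):
    """Return x_n computed two ways: by direct iteration of
    x_{k+1} = a*x_k + b, and by the closed form (valid for a != 1)."""
    x = x0
    for _ in range(n):
        x = a * x + b
    closed = a**n * x0 + b * (a**n - 1) / (a - 1)
    return x, closed

# For |a| < 1 the solution converges to the fixed point b/(1-a) = 6 here.
iterated, closed = solve_recurrence(0.5, 3.0, 10.0, 40)
```

The limit follows because a^n → 0 for |a| < 1, leaving only the −b/(a − 1) = b/(1 − a) term.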
Hinchliffe, I.
1997-05-01
In this talk the author gives a brief survey of some physics topics that will be addressed by the Large Hadron Collider currently under construction at CERN. Instead of discussing the reach of this machine for new physics, the author gives examples of the types of precision measurements that might be made if new physics is discovered
Precision Muonium Spectroscopy
Jungmann, Klaus P.
2016-01-01
The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 mu s. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In
König, Inke R; Fuchs, Oliver; Hansen, Gesine; von Mutius, Erika; Kopp, Matthias V
2017-10-01
The term "precision medicine" has become very popular over recent years, fuelled by scientific as well as political perspectives. Despite its popularity, its exact meaning, and how it is different from other popular terms such as "stratified medicine", "targeted therapy" or "deep phenotyping", remains unclear. Commonly applied definitions focus on the stratification of patients, sometimes referred to as a novel taxonomy, derived using large-scale data including clinical, lifestyle, genetic and further biomarker information, thus going beyond the classical "signs-and-symptoms" approach. While these aspects are relevant, this description leaves open a number of questions. For example, when does precision medicine begin? In which way does the stratification of patients translate into better healthcare? And can precision medicine be viewed as the end-point of a novel stratification of patients, as implied, or is it rather a greater whole? To clarify this, the aim of this paper is to provide a more comprehensive definition that focuses on precision medicine as a process. It will be shown that this proposed framework incorporates the derivation of novel taxonomies and their role in healthcare as part of the cycle, but also covers related terms.
Liszt, Harvey; Gerin, Maryvonne; Beasley, Anthony; Pety, Jerome
2018-04-01
We present Jansky Very Large Array observations of 20–37 GHz absorption lines from nearby Galactic diffuse molecular gas seen against four cosmologically distant compact radio continuum sources. The main new observational results are that l-C3H and CH3CN are ubiquitous in the local diffuse molecular interstellar medium at A_V ≲ 1, while HC3N was seen only toward B0415 at A_V > 4 mag. The linear/cyclic ratio is much larger in C3H than in C3H2, and the ratio CH3CN/HCN is enhanced compared to TMC-1, although not as much as toward the Horsehead Nebula. More consequentially, this work completes a long-term program assessing the abundances of small hydrocarbons (CH, C2H, linear and cyclic C3H and C3H2, and C4H and C4H−) and the CN-bearing species (CN, HCN, HNC, HC3N, HC5N, and CH3CN): their systematics in diffuse molecular gas are presented in detail here. We also observed but did not strongly constrain the abundances of a few oxygen-bearing species, most prominently HNCO. We set limits on the column density of CH2CN, such that the anion CH2CN− is only viable as a carrier of diffuse interstellar bands if the N(CH2CN)/N(CH2CN−) abundance ratio is much smaller in this species than in any others for which the anion has been observed. We argue that complex organic molecules (COMs) are not present in clouds meeting a reasonable definition of diffuse molecular gas, i.e., A_V ≲ 1 mag. Based on observations obtained with the NRAO Jansky Very Large Array (VLA).
Linear and non-linear simulation of joints contact surface using ...
Modelling joints, including their non-linear effects, requires accurate and precise study of joint behaviour. When joints are under dynamic loading, micro- and macro-slip occur at the contact surface, which is the source of the non-linearity of the joint contact surface. The non-linear effects of the joint contact surface on the total behaviour of the structure are ...
Interacting dark sector and precision cosmology
Buen-Abad, Manuel A.; Schmaltz, Martin; Lesgourgues, Julien; Brinckmann, Thejs
2018-01-01
We consider a recently proposed model in which dark matter interacts with a thermal background of dark radiation. Dark radiation consists of relativistic degrees of freedom which allow larger values of the expansion rate of the universe today to be consistent with CMB data (H0-problem). Scattering between dark matter and radiation suppresses the matter power spectrum at small scales and can explain the apparent discrepancies between ΛCDM predictions of the matter power spectrum and direct measurements of Large Scale Structure LSS (σ8-problem). We go beyond previous work in two ways: 1. we enlarge the parameter space of our previous model and allow for an arbitrary fraction of the dark matter to be interacting and 2. we update the data sets used in our fits, most importantly we include LSS data with full k-dependence to explore the sensitivity of current data to the shape of the matter power spectrum. We find that LSS data prefer models with overall suppressed matter clustering due to dark matter - dark radiation interactions over ΛCDM at 3–4 σ. However recent weak lensing measurements of the power spectrum are not yet precise enough to clearly distinguish two limits of the model with different predicted shapes for the linear matter power spectrum. In two appendices we give a derivation of the coupled dark matter and dark radiation perturbation equations from the Boltzmann equation in order to clarify a confusion in the recent literature, and we derive analytic approximations to the solutions of the perturbation equations in the two physically interesting limits of all dark matter weakly interacting or a small fraction of dark matter strongly interacting.
Ondo Meye, P; Schandorf, C; Amoako, J K; Manteaw, P O; Amoatey, E A; Adjei, D N
2017-12-01
An inter-comparison study was conducted to assess the capability of dosimetry systems of individual monitoring services (IMSs) in Gabon and Ghana to measure personal dose equivalent Hp(10) in photon fields. The performance indicators assessed were the lower limit of detection, linearity and uncertainty in measurement. Monthly and quarterly recording levels were proposed with corresponding values of 0.08 and 0.025 mSv, and 0.05 and 0.15 mSv for the TLD and OSL systems, respectively. The linearity of the dosimetry systems was assessed following the requirement given in the Standard IEC 62387 of the International Electrotechnical Commission (IEC). The results obtained for the two systems were satisfactory. The procedure followed for the uncertainty assessment is the one given in the IEC technical report TR62461. The maximum relative overall uncertainties, in absolute value, expressed in terms of Hp(10), for the TL dosimetry system Harshaw 6600, are 44.35% for true doses below 0.40 mSv and 36.33% for true doses ≥0.40 mSv. For the OSL dosimetry system microStar, the maximum relative overall uncertainties, in absolute value, are 52.17% for true doses below 0.40 mSv and 37.43% for true doses ≥0.40 mSv. These results are in good agreement with the requirements for accuracy of the International Commission on Radiological Protection. When expressing the uncertainties in terms of response, comparison with the IAEA requirements for overall accuracy showed that the uncertainty results were also acceptable. The values of Hp(10) directly measured by the two dosimetry systems showed a significant underestimation for the Harshaw 6600 system, and a slight overestimation for the microStar system. After correction for linearity of the measured doses, the two dosimetry systems gave better and comparable results.
Lucatero, M.A.; Hernandez L, H. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)]. e-mail: mal@nuclear.inin.mx
2003-07-01
The linear heat generation rates (LHGR) for a generic BWR-type fuel rod that violate the thermomechanical limit on circumferential plastic deformation of the cladding are calculated as a function of burnup, for nominal steady-state operation of the fuel rod. The evaluation of the LHGR as a function of fuel burnup is carried out under the condition that the circumferential plastic deformation of the cladding exceeds the thermomechanical operating limit of 1% by 0.1%. The results of the calculations are compared with the linear operating heat generation rates as a function of burnup for this fuel rod type. The calculations are carried out with the FEMAXI-V and RODBURN codes. The results show that for burnups between 0 and 16,000 MWd/tU a minimum margin of 160.8 W/cm exists between the peak operating LHGR for the given fuel (439.6 W/cm) and the calculated maximum LHGR at which the cladding reaches 1.1% circumferential plastic deformation, for a power peaking factor of 1.40. For burnups of 20,000 MWd/tU and 60,000 MWd/tU the margins are 150.3 and 298.6 W/cm, respectively. (Author)
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
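The well-known forward direction of this equivalence (Chebyshev approximation reduced to a linear program) can be sketched concretely; the data points and helper names below are illustrative, not taken from the paper:

```python
# Sketch of the Chebyshev -> LP reduction.  Problem: choose slope m and
# intercept c minimizing max_i |m*x_i + c - y_i|.  LP form: minimize t over
# u = (m, c, t) subject to |m*x_i + c - y_i| <= t for every data point.

def chebyshev_lp_constraints(xs, ys):
    """Build inequality constraints A u <= b over u = (m, c, t)."""
    A, b = [], []
    for x, y in zip(xs, ys):
        A.append((x, 1.0, -1.0)); b.append(y)     #  m*x + c - y <= t
        A.append((-x, -1.0, -1.0)); b.append(-y)  # -(m*x + c - y) <= t
    return A, b

def feasible(u, A, b, eps=1e-9):
    """Check whether u satisfies every constraint row of A u <= b."""
    return all(sum(ai * ui for ai, ui in zip(row, u)) <= bi + eps
               for row, bi in zip(A, b))

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
A, b = chebyshev_lp_constraints(xs, ys)

# For three points and a two-parameter fit, the minimax solution
# equioscillates with residuals -e, +e, -e; solving gives m = 2, c = -0.5,
# e = 0.5.  That point is LP-feasible at t = e but not at any smaller t:
assert feasible((2.0, -0.5, 0.5), A, b)
assert not feasible((2.0, -0.5, 0.4), A, b)
```

Any LP solver applied to these constraints recovers the minimax fit; the paper's contribution is the converse reduction.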
CERN. Geneva
2006-01-01
For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.
High-precision ground-based photometry of exoplanets
de Mooij, Ernst J.W.
2013-04-01
High-precision photometry of transiting exoplanet systems has contributed significantly to our understanding of the properties of their atmospheres. The best targets are the bright exoplanet systems, for which the high number of photons allows very high signal-to-noise ratios. Most current instruments are not optimised for these high-precision measurements: either they have a large read-out overhead to reduce the read noise, and/or their field of view is limited, preventing simultaneous observations of both the target and a reference star. Recently we have proposed a new wide-field imager for the Observatoire du Mont-Mégantic optimised for these bright systems (PI: Jayawardhana). The instrument has a dual-beam design and a field of view of 17' by 17'. The cameras have a read-out time of 2 seconds, significantly reducing read-out overheads. Over the past years we have gained significant experience with how to reach the high precision required for the characterisation of exoplanet atmospheres. Based on our experience we offer the following advice. Get the best calibrations possible; in the case of bad weather, characterise the instrument (e.g. non-linearity, dome flats, bias level), as this is vital for better understanding of the science data. Observe the target for as long as possible: the out-of-transit baseline is as important as the transit/eclipse itself, and a short baseline can lead to improperly corrected systematics and mis-estimation of the red noise. Keep everything (e.g. position on detector, exposure time) as stable as possible. Take care that the defocus is not too strong: for a large defocus, the contribution of the sky background to the total flux in the aperture could well exceed that of the target, resulting in very strict requirements on the precision with which the background is measured.
Quad precision delay generator
Krishnan, Shanti; Gopalakrishnan, K.R.; Marballi, K.R.
1997-01-01
A Quad Precision Delay Generator delays a digital edge by a programmed amount of time, varying from nanoseconds to microseconds. The output of this generator has an amplitude of the order of tens of volts and rise time of the order of nanoseconds. This was specifically designed and developed to meet the stringent requirements of the plasma focus experiments. Plasma focus is a laboratory device for producing and studying nuclear fusion reactions in hot deuterium plasma. 3 figs
Tanwiwat Jaikuna
2017-02-01
Purpose: To develop an in-house software program able to calculate and generate the biological dose distribution and biological dose-volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate biological dose distributions and biological dose-volume histograms. The physical dose from each voxel in the treatment plan was extracted through the Computational Environment for Radiotherapy Research (CERR), and its accuracy was verified by comparing the dose-volume histogram from CERR with that of the treatment planning system (TPS). The equivalent dose in 2 Gy fractions (EQD2) was calculated from the biologically effective dose (BED) based on the LQL model. The software calculation and a manual calculation were compared to verify the EQD2, using paired t-test statistical analysis in IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose-volume histograms were displayed correctly by the Isobio software. Differences in physical dose between CERR and the TPS were found in Oncentra: 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum determined by D2cc, and less than 1% in Pinnacle. The difference in EQD2 between the software calculation and the manual calculation (0.00%) was not statistically significant, with p-values of 0.820, 0.095 and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320 and 0.849 for brachytherapy (BT) in the HR-CTV, bladder and rectum, respectively. Conclusions: The Isobio software is a feasible tool for generating biological dose distributions and biological dose-volume histograms for treatment plan evaluation in both EBRT and BT.
Precision experiments in electroweak interactions
Swartz, M.L.
1990-03-01
The electroweak theory of Glashow, Weinberg, and Salam (GWS) has become one of the twin pillars upon which our understanding of all particle physics phenomena rests. It is a brilliant achievement that qualitatively and quantitatively describes all of the vast quantity of experimental data that have been accumulated over some forty years. Note that the word quantitatively must be qualified. The low energy limiting cases of the GWS theory, Quantum Electrodynamics and the V-A Theory of Weak Interactions, have withstood rigorous testing. The high energy synthesis of these ideas, the GWS theory, has not yet been subjected to comparably precise scrutiny. The recent operation of a new generation of proton-antiproton (p̄p) and electron-positron (e+e−) colliders has made it possible to produce and study large samples of the electroweak gauge bosons W± and Z0. We expect that these facilities will enable very precise tests of the GWS theory to be performed in the near future. In keeping with the theme of this Institute, Physics at the 100 GeV Mass Scale, these lectures will explore the current status and the near-future prospects of these experiments.
M. ZANGIABADI; H. R. MALEKI
2007-01-01
In real-world optimization problems, the coefficients of the objective function are not known precisely and can be interpreted as fuzzy numbers. In this paper we define concepts of optimality for linear programming problems with fuzzy parameters based on those for multiobjective linear programming problems. Then, by using the concept of comparison of fuzzy numbers, we transform a linear programming problem with fuzzy parameters into a multiobjective linear programming problem. To this end, w...
Precision electroweak measurements
Demarteau, M.
1996-11-01
Recent electroweak precision measurements from e+e− and p̄p colliders are presented. Some emphasis is placed on the recent developments in the heavy flavor sector. The measurements are compared to predictions from the Standard Model of electroweak interactions. All results are found to be consistent with the Standard Model. The indirect constraint on the top quark mass from all measurements is in excellent agreement with the direct m_t measurements. Using the world's electroweak data in conjunction with the current measurement of the top quark mass, the constraints on the Higgs mass are discussed.
Monteil, St.
2009-12-01
This document summarizes a dozen years of the author's research in High Energy Physics, in particular precision tests of the electroweak theory. Parity-violating asymmetry measurements at LEP with the ALEPH detector, together with global consistency checks of the Kobayashi-Maskawa paradigm within the CKMfitter group, are gathered in the first part of the document. The second part deals with the unpublished instrumental work on the design, tests, production and commissioning of the elements of the Pre-Shower detector of the LHCb spectrometer at the LHC. Physics perspectives with LHCb are discussed in conclusion. (author)
Wardle, F
2015-01-01
Ultra-precision bearings can achieve extreme accuracy of rotation, making them ideal for use in numerous applications across a variety of fields, including hard disk drives, roundness measuring machines and optical scanners. Ultraprecision Bearings provides a detailed review of the different types of bearing and their properties, as well as an analysis of the factors that influence motion error, stiffness and damping. Following an introduction to basic principles of motion error, each chapter of the book is then devoted to the basic principles and properties of a specific type of bearing.
Precision and Accuracy in PDV and VISAR
Ambrose, W. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-08-22
This is a technical report discussing our current level of understanding of a wide and varying distribution of uncertainties in velocity results from Photonic Doppler Velocimetry (PDV) in its application to gas gun experiments. Using propagation-of-errors methods with statistical averaging of photon number fluctuations in the detected photocurrent and subsequent addition of electronic recording noise, we learn that the velocity uncertainty in VISAR can be written in closed form. For PDV, the non-linear frequency transform and peak-fitting methods employed make propagation-of-errors estimates notoriously more difficult to write down in closed form except in the limit of constant velocity and low time resolution (large analysis-window width). An alternative method of error propagation in PDV is to use Monte Carlo methods with a simulation of the time-domain signal based on results from the spectral domain. A key problem for Monte Carlo estimation for an experiment is a correct estimate of the portion of the time-domain noise associated with the peak-fitting region of interest in the spectral domain. Using short-time Fourier transform spectral analysis and working with the phase-dependent real and imaginary parts allows removal of amplitude-noise cross terms that invariably show up when working with correlation-based methods or FFT power spectra. Estimation of the noise associated with a given spectral region of interest is then possible. At this level of progress, we learn that Monte Carlo trials with random recording noise and initial (uncontrolled) phase yield velocity uncertainties that are not as large as those observed. In a search for additional noise sources, a speckle-interference modulation contribution with off-axis rays was investigated, and was found to add a velocity variation beyond that from the recording noise (due to random interference between off-axis rays), but in our experiments the speckle modulation precision was not as important as the
Tang, T. F.; Chong, S. H.
2017-06-01
This paper presents a practical controller design method for ultra-precision positioning of pneumatic artificial muscle actuator stages. Pneumatic artificial muscle (PAM) actuators are safe to use and have numerous advantages which have brought them into wide application. However, PAM exhibits strong non-linear characteristics, and these limitations lead to low controllability and limit its application. In practice, the non-linear characteristics of a PAM mechanism are difficult and time-consuming to model precisely. The purpose of the present study is to clarify a practical controller design method that emphasizes a simple design procedure, one that does not require plant parameter modeling, yet is able to demonstrate ultra-precision positioning performance for a PAM-driven stage. The practical control approach adopts continuous motion nominal characteristic trajectory following (CM NCTF) control as the feedback controller. The constructed PAM-driven stage has a low damping characteristic, which causes severe residual vibration that deteriorates the motion accuracy of the system. Therefore, increasing the damping characteristic by adding acceleration feedback compensation to the plant has been proposed. The effectiveness of the proposed controller was verified experimentally and compared with a classical PI controller in point-to-point motion. The experimental results proved that the CM NCTF controller demonstrates better positioning performance, with smaller motion error, than the PI controller. Overall, the CM NCTF controller successfully reduced the motion error to 3 µm, which is 88.7% smaller than that of the PI controller.
Classical and sequential limit analysis revisited
Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi
2018-04-01
Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework -, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.
Stepping motor adaptor actuator for a commercial uhv linear motion feedthrough
Iarocci, M.; Oversluizen, T.
1989-01-01
An adaptor coupling has been developed that allows the attachment of a standard stepping motor to a precision commercial (Varian) uhv linear motion feedthrough. The assembly, consisting of the motor, motor adaptor, limit switches, etc., is clamped to the feedthrough body, which can be done under vacuum conditions if necessary. With a 500 steps/rev stepping motor the resolution is 1.27 μm per step. We presently use this assembly in a remote location for the precise positioning of a beam sensing monitor. 2 refs., 3 figs
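The quoted resolution can be checked arithmetically (the lead-screw pitch is inferred here, not stated in the abstract): 500 steps/rev at 1.27 μm/step implies 0.635 mm of travel per revolution, consistent with a 40 threads-per-inch screw:

```python
# Consistency check of the quoted numbers; the 40 tpi lead-screw pitch is an
# inference from the abstract's figures, not a stated specification.
steps_per_rev = 500
resolution_um = 1.27
pitch_mm = steps_per_rev * resolution_um / 1000.0  # travel per revolution
assert abs(pitch_mm - 0.635) < 1e-9                # 0.635 mm/rev
assert abs(25.4 / pitch_mm - 40.0) < 1e-9          # i.e. 40 threads per inch
```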
Relativistic Linear Restoring Force
Clark, D.; Franklin, J.; Mann, N.
2012-01-01
We consider two different forms for a relativistic version of a linear restoring force. The pair comes from taking Hooke's law to be the force appearing on the right-hand side of the relativistic expressions dp/dt or dp/dτ. Either formulation recovers Hooke's law in the non-relativistic limit. In addition to these two forces, we…
Miniature linear cooler development
Pruitt, G.R.
1993-01-01
An overview is presented of the status of a family of miniature linear coolers currently under development by Hughes Aircraft Co. for use in hand-held, volume-limited or power-limited infrared applications. These coolers, the latest additions to the Hughes family of TOP trademark [twin-opposed piston] linear coolers, have been fabricated and tested in three different configurations. Each configuration is designed to utilize a common compressor assembly, resulting in reduced manufacturing costs. The baseline compressor has been integrated with two different expander configurations and has been operated with two different levels of input power. These configuration combinations offer a wide range of performance and interface characteristics which may be tailored to applications requiring limited power and size without significantly compromising cooler capacity or cooldown characteristics. Key cooler characteristics and test data are summarized for three combinations of cooler configurations which are representative of the versatility of this linear cooler design. Configurations reviewed include the shortened coldfinger [1.50 to 1.75 inches long] with limited input power [less than 17 Watts] for low-power-availability applications; the shortened coldfinger with higher input power for lightweight, higher-performance applications; and coldfingers compatible with DoD 0.4 Watt Common Module coolers for wider-range retrofit capability. Typical weight of these miniature linear coolers is less than 500 grams for the compressor, expander and interconnecting transfer line. Cooling capacity at 80 K at room ambient conditions ranges from 400 mW to greater than 550 mW. Steady-state power requirements for maintaining a heat load of 150 mW at 80 K have been shown to be less than 8 Watts. Ongoing reliability growth testing is summarized, including a review of the latest test article results.
Precision lifetime measurements
Tanner, C.E.
1994-01-01
Precision measurements of atomic lifetimes provide important information necessary for testing atomic theory. The authors employ resonant laser excitation of a fast atomic beam to measure excited state lifetimes by observing the decay-in-flight of the emitted fluorescence. A similar technique was used by Gaupp et al., who reported measurements with precisions of less than 0.2%. Their program includes lifetime measurements of the low-lying p states in alkali and alkali-like systems. Motivation for this work comes from a need to test the atomic many-body perturbation theory (MBPT) that is necessary for the interpretation of parity nonconservation experiments in atomic cesium. The authors have measured the cesium 6p ²P₁/₂ and 6p ²P₃/₂ state lifetimes to be 34.934±0.094 ns and 30.499±0.070 ns, respectively. With minor changes to the apparatus, they have extended their measurements to include the lithium 2p ²P₁/₂ and 2p ²P₃/₂ states.
Fundamentals of precision medicine
Divaris, Kimon
2018-01-01
Imagine a world where clinicians make accurate diagnoses and provide targeted therapies to their patients according to well-defined, biologically informed disease subtypes, accounting for individual differences in genetic make-up, behaviors, cultures, lifestyles and the environment. This is not as utopic as it may seem. Relatively recent advances in science and technology have led to an explosion of new information on what underlies health and what constitutes disease. These novel insights emanate from studies of the human genome and microbiome, their associated transcriptomes, proteomes and metabolomes, as well as epigenomics and exposomics; such 'omics data can now be generated at unprecedented depth and scale, and at rapidly decreasing cost. Making sense of and integrating these fundamental information domains to transform health care and improve health remains a challenge: an ambitious, laudable and high-yield goal. Precision dentistry is no longer a distant vision; it is becoming part of the rapidly evolving present. Much more needs to be done, however, for the realization of precision medicine in the oral health domain. PMID:29227115
Powell, J. W.; Westphal, D. A.
1991-08-01
A workshop to obtain input from industry on the establishment of the Precision Joining Center (PJC) was held on July 10-12, 1991. The PJC is a center for training Joining Technologists in advanced joining techniques and concepts in order to promote the competitiveness of U.S. industry. The center will be established as part of the DOE Defense Programs Technology Commercialization Initiative, and operated by EG&G Rocky Flats in cooperation with the American Welding Society and the Colorado School of Mines Center for Welding and Joining Research. The overall objectives of the workshop were to validate the need for a Joining Technologist to fill the gap between the welding operator and the welding engineer, and to assure that the PJC will train individuals to satisfy that need. The consensus of the workshop participants was that the Joining Technologist is a necessary position in industry, and is currently used, with some variation, by many companies. It was agreed that the PJC core curriculum, as presented, would produce a Joining Technologist of value to industries that use precision joining techniques. The advantage of the PJC would be to train the Joining Technologist much more quickly and more completely. The proposed emphasis of the PJC curriculum on equipment-intensive and hands-on training was judged to be essential.
Video-rate or high-precision: a flexible range imaging camera
Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.
2008-02-01
A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high-resolution (512-by-512 pixels) and high-precision (0.4 mm best case) configuration, but with a slow measurement rate (one measurement every 10 s). Although this high-precision ranging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high-precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach and the use of more than four samples per beat cycle provide better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
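The per-pixel phase measurement underlying this kind of ranging can be sketched as follows; the sample count, modulation frequency and function names are my assumptions for illustration, not the paper's implementation:

```python
import math

# Sketch: recover the beat signal's phase from N uniform samples per beat
# cycle, then convert phase delay to range.  Using more than four samples
# per cycle rejects harmonics better than 4-sample homodyne quadrature,
# which is the linearity advantage the abstract mentions.

def phase_from_samples(samples):
    """Phase of the fundamental from one beat cycle of uniform samples."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(samples))
    return math.atan2(im, re)

def phase_to_range(phi, mod_freq):
    """Range from round-trip phase delay: d = c * phi / (4 * pi * f_mod)."""
    c = 299792458.0
    return c * (phi % (2 * math.pi)) / (4 * math.pi * mod_freq)

# A simulated beat cycle sampled 8 times with a known 1-radian phase:
true_phi = 1.0
samples = [math.cos(2 * math.pi * k / 8 - true_phi) for k in range(8)]
assert abs(phase_from_samples(samples) - true_phi) < 1e-9
```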
Principles of precision medicine in stroke.
Hinman, Jason D; Rost, Natalia S; Leung, Thomas W; Montaner, Joan; Muir, Keith W; Brown, Scott; Arenillas, Juan F; Feldmann, Edward; Liebeskind, David S
2017-01-01
The era of precision medicine has arrived and conveys tremendous potential, particularly for stroke neurology. The diagnosis of stroke, its underlying aetiology, theranostic strategies, recurrence risk and path to recovery are populated by a series of highly individualised questions. Moreover, the phenotypic complexity of a clinical diagnosis of stroke makes a simple genetic risk assessment only partially informative on an individual basis. The guiding principles of precision medicine in stroke underscore the need to identify, value, organise and analyse the multitude of variables obtained from each individual to generate a precise approach to optimise cerebrovascular health. Existing data may be leveraged with novel technologies, informatics and practical clinical paradigms to apply these principles in stroke and realise the promise of precision medicine. Importantly, precision medicine in stroke will only be realised once efforts to collect, value and synthesise the wealth of data collected in clinical trials and routine care starts. Stroke theranostics, the ultimate vision of synchronising tailored therapeutic strategies based on specific diagnostic data, demand cerebrovascular expertise on big data approaches to clinically relevant paradigms. This review considers such challenges and delineates the principles on a roadmap for rational application of precision medicine to stroke and cerebrovascular health.
The economic case for precision medicine.
Gavan, Sean P; Thompson, Alexander J; Payne, Katherine
2018-01-01
Introduction: The advancement of precision medicine into routine clinical practice has been highlighted as an agenda for national and international health care policy. A principal barrier to this advancement is in meeting the requirements of the payer or reimbursement agency for health care. This special report aims to explain the economic case for precision medicine, by accounting for the explicit objectives defined by decision-makers responsible for the allocation of limited health care resources. Areas covered: The framework of cost-effectiveness analysis, a method of economic evaluation, is used to describe how precision medicine can, in theory, exploit identifiable patient-level heterogeneity to improve population health outcomes and the relative cost-effectiveness of health care. Four case studies are used to illustrate potential challenges when demonstrating the economic case for a precision medicine in practice. Expert commentary: The economic case for a precision medicine should be considered at an early stage during its research and development phase. Clinical and economic evidence can be generated iteratively and should be in alignment with the objectives and requirements of decision-makers. Programmes of further research, to demonstrate the economic case for a precision medicine, can be prioritized by the extent to which they reduce the uncertainty expressed by decision-makers.
Ge, Li; Zhao, Nan
2018-04-01
We study the coherence dynamics of a qubit coupled to a harmonic oscillator with both linear and quadratic interactions. As long as the linear coupling strength is much smaller than the oscillator frequency, the long-time behavior of the coherence is dominated by the quadratic coupling strength g2. The coherence decays and revives at a period determined by g2, with the width of the coherence peak decreasing as the temperature increases, hence providing a way to measure g2 precisely without cooling. Unlike the case of linear coupling, here the coherence dynamics never reduces to the classical limit in which the oscillator is classical. Finally, the validity of the linear coupling approximation is discussed and the coherence under Hahn echo is evaluated.
Precision Medicine in Cancer Treatment
Precision medicine helps doctors select cancer treatments that are most likely to help patients based on a genetic understanding of their disease. Learn about the promise of precision medicine and the role it plays in cancer treatment.
HIGH PRECISION ROVIBRATIONAL SPECTROSCOPY OF OH+
Markus, Charles R.; Hodges, James N.; Perry, Adam J.; Kocheril, G. Stephen; McCall, Benjamin J. [Department of Chemistry, University of Illinois, Urbana, IL 61801 (United States); Müller, Holger S. P., E-mail: bjmccall@illinois.edu [I. Physikalisches Institut, Universität zu Köln, Zülpicher Str. 77, D-50937 Köln (Germany)
2016-02-01
The molecular ion OH+ has long been known to be an important component of the interstellar medium. Its relative abundance can be used to indirectly measure cosmic ray ionization rates of hydrogen, and it is the first intermediate in the interstellar formation of water. To date, only a limited number of pure rotational transitions have been observed in the laboratory, making it necessary to indirectly calculate rotational levels from high-precision rovibrational spectroscopy. We have remeasured 30 transitions in the fundamental band with MHz-level precision, in order to enable the prediction of a THz spectrum of OH+. The ions were produced in a water-cooled discharge of O2, H2, and He, and the rovibrational transitions were measured with the technique of Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy. These values have been included in a global fit of field-free data to a ³Σ⁻ linear molecule effective Hamiltonian to determine improved spectroscopic parameters, which were used to predict the pure rotational transition frequencies.
1990-05-20
and U can be found by Gauss elimination. Using estimated reference states, the control law is u_k = -K z_k + (U + KX) x_{r,k} (3.51). Linear Quadratic... gain F. An estimator may be used for the reference input states (Eq. 3.48), giving a controller u_k = -K z_k + F r_k (3.57). Results and conclusions: Figure... the following state-space equations: z_{k+1} = Φ_k z_k + Γ_k u_k + w_k (4.12), z̄_k = H_k z_k + v_k (4.13), where z_k is the system state vector and Φ_k is the propagation matrix.
Quantum Kalman filtering and the Heisenberg limit in atomic magnetometry
Geremia, J M; Stockton, John K; Doherty, Andrew C; Mabuchi, Hideo [Norman Bridge Laboratory of Physics, California Institute of Technology, Pasadena, California, 91125 (United States)
2003-12-19
The shot-noise detection limit in current high-precision magnetometry [I. Kominis, T. Kornack, J. Allred, and M. Romalis, Nature (London) 422, 596 (2003), doi:10.1038/nature01484] is a manifestation of quantum fluctuations that scale as 1/√N in an ensemble of N atoms. Here, we develop a procedure that combines continuous measurement and quantum Kalman filtering [V. Belavkin, Rep. Math. Phys. 43, 405 (1999)] to surpass this conventional limit by exploiting conditional spin squeezing to achieve 1/N field sensitivity. Our analysis demonstrates the importance of optimal estimation for high-bandwidth precision magnetometry at the Heisenberg limit and also identifies an approximate estimator based on linear regression.
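As a classical point of reference for the filtering step (this is not the paper's quantum Kalman filter, just the ordinary scalar analogue), a Kalman estimate of a constant field from independent noisy measurements shows the posterior variance falling as 1/N, i.e. the 1/√N field uncertainty that defines the conventional shot-noise limit the authors surpass:

```python
# Scalar Kalman filter for a static state (a constant field B) observed
# through measurements with variance meas_var.  After N updates the
# posterior variance approaches meas_var / N, so the field uncertainty
# (its square root) scales as 1/sqrt(N): the conventional limit.
def kalman_constant(measurements, meas_var, prior_mean=0.0, prior_var=1e6):
    mean, var = prior_mean, prior_var
    for z in measurements:
        gain = var / (var + meas_var)       # Kalman gain for a static state
        mean = mean + gain * (z - mean)     # conditional mean update
        var = (1.0 - gain) * var            # conditional variance update
    return mean, var

# With a nearly flat prior, 1000 unit-variance measurements leave a
# posterior variance of about 1/1000:
_, v = kalman_constant([0.0] * 1000, meas_var=1.0, prior_var=1e9)
assert abs(v - 1.0 / 1000) < 1e-5
```

The quantum version conditions the collective spin state on a continuous measurement record, which is where the extra 1/√N factor from conditional squeezing enters.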
Precision Medicine in Cardiovascular Diseases
Yan Liu
2017-02-01
Since President Obama announced the Precision Medicine Initiative in the United States, more and more attention has been paid to precision medicine. However, clinicians have already used it to treat conditions such as cancer. Many cardiovascular diseases have a familial presentation, and genetic variants are associated with the prevention, diagnosis, and treatment of cardiovascular diseases, which is the basis for providing precise care to patients with cardiovascular diseases. Large-scale cohorts and multi-omics are critical components of precision medicine. Here we summarize the application of precision medicine to cardiovascular diseases based on cohort and omic studies, and hope to elicit discussion about future health care.
Limits of Precision for Human Eye Motor Control
1989-11-01
APE (Watt & Andrews, 1981) or a staircase method similar to PEST (Taylor & Creelman, 1967) were used. The results from these different methods of... Freeman. St-Cyr, G.J. & Fender, D.H. (1969) The interplay of drifts and flicks in binocular fixation. Vision Res. 9, 245-265. Taylor, M.M. & Creelman, C.D
Precisely predictable Dirac observables
Cordes, Heinz Otto
2006-01-01
This work presents a "Clean Quantum Theory of the Electron", based on Dirac's equation. "Clean" in the sense of a complete mathematical explanation of the well-known paradoxes of Dirac's theory, and a connection to classical theory, including the motion of a magnetic moment (spin) in the given field, all for a charged particle (of spin ½) moving in a given electromagnetic field. This theory is relativistically covariant, and it may be regarded as a mathematically consistent quantum-mechanical generalization of the classical motion of such a particle, à la Newton and Einstein. Normally, our fields are time-independent, but the time-dependent case, where slightly different features prevail, is also discussed. A "Schroedinger particle", such as a light quantum, experiences a very different (time-dependent) "Precise Predictability of Observables". An attempt is made to compare both cases. There is no Heisenberg uncertainty of location and momentum; rather, location alone possesses a built-in uncertainty ...
Prompt and Precise Prototyping
2003-01-01
For Sanders Design International, Inc., of Wilton, New Hampshire, every passing second between the concept and realization of a product is essential to succeed in the rapid prototyping industry where amongst heavy competition, faster time-to-market means more business. To separate itself from its rivals, Sanders Design aligned with NASA's Marshall Space Flight Center to develop what it considers to be the most accurate rapid prototyping machine for fabrication of extremely precise tooling prototypes. The company's Rapid ToolMaker System has revolutionized production of high quality, small-to-medium sized prototype patterns and tooling molds with an exactness that surpasses that of computer numerically-controlled (CNC) machining devices. Created with funding and support from Marshall under a Small Business Innovation Research (SBIR) contract, the Rapid ToolMaker is a dual-use technology with applications in both commercial and military aerospace fields. The advanced technology provides cost savings in the design and manufacturing of automotive, electronic, and medical parts, as well as in other areas of consumer interest, such as jewelry and toys. For aerospace applications, the Rapid ToolMaker enables fabrication of high-quality turbine and compressor blades for jet engines on unmanned air vehicles, aircraft, and missiles.
Precisely Tracking Childhood Death.
Farag, Tamer H; Koplan, Jeffrey P; Breiman, Robert F; Madhi, Shabir A; Heaton, Penny M; Mundel, Trevor; Ordi, Jaume; Bassat, Quique; Menendez, Clara; Dowell, Scott F
2017-07-01
Little is known about the specific causes of neonatal and under-five childhood death in high-mortality geographic regions due to a lack of primary data and dependence on inaccurate tools, such as verbal autopsy. To meet the ambitious new Sustainable Development Goal 3.2 to eliminate preventable child mortality in every country, better approaches are needed to precisely determine specific causes of death so that prevention and treatment interventions can be strengthened and focused. Minimally invasive tissue sampling (MITS) is a technique that uses needle-based postmortem sampling, followed by advanced histopathology and microbiology, to definitively determine cause of death. The Bill & Melinda Gates Foundation is supporting a new surveillance system called the Child Health and Mortality Prevention Surveillance network, which will determine cause of death using MITS in combination with other information, and yield cause-specific population-based mortality rates, eventually in up to 12-15 sites in sub-Saharan Africa and south Asia. However, the Gates Foundation funding alone is not enough. We call on governments, other funders, and international stakeholders to expand the use of pathology-based cause of death determination to provide the information needed to end preventable childhood mortality.
Hedstrom, Marvin
2001-01-01
... German historian Hans Delbruck's two strategies of warfare, annihilation and exhaustion, and American military theorist Robert Leonhard's concepts of attrition and maneuver are examined to establish the relationship...
Goldowsky, Michael P. (Inventor)
1987-01-01
A reciprocating linear motor is formed with a pair of ring-shaped permanent magnets having opposite radial polarizations, held axially apart by a nonmagnetic yoke, which serves as an axially displaceable armature assembly. A pair of annularly wound coils having axial lengths which differ from the axial lengths of the permanent magnets are serially coupled together in mutual opposition and positioned with an outer cylindrical core in axial symmetry about the armature assembly. One embodiment includes a second pair of annularly wound coils serially coupled together in mutual opposition and an inner cylindrical core positioned in axial symmetry inside the armature radially opposite to the first pair of coils. Application of a potential difference across a serial connection of the two pairs of coils creates a current flow perpendicular to the magnetic field created by the armature magnets, thereby causing limited linear displacement of the magnets relative to the coils.
Linear Algebra and Smarandache Linear Algebra
Vasantha, Kandasamy
2003-01-01
The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...
Full-wave current conveyor precision rectifier
Đukić Slobodan R.
2008-01-01
A circuit that provides precision rectification of small signals with low temperature sensitivity for frequencies up to 100 kHz without waveform distortion is presented. It utilizes an improved second-type current conveyor based on a current-steering output stage and biased silicon diodes. The use of a DC current source to bias the rectifying diodes provides higher temperature stability and a lower DC offset level at the output. The proposed design of the precision rectifier ensures good current-transfer linearity in the range that satisfies class A operation of the amplifier, and a good voltage-transfer characteristic for low-level signals. Distortion during the zero crossing of the input signal is practically eliminated. The design of the proposed rectifier is realized with standard components.
Precision deburring using NC and robot equipment. Final report
Gillespie, L.K.
1980-05-01
Deburring precision miniature components is often time consuming and inconsistent. Although robots are available for deburring parts, they are not precise enough for precision miniature parts. Numerical control (NC) machining can provide edge-break consistencies that meet requirements such as a 76.2-μm maximum edge break (chamfer). Although NC machining has a number of technical limitations which prohibit its use on many geometries, it can be an effective approach for features that are particularly difficult to deburr.
Apparatus for precision micromachining with lasers
Chang, J.J.; Dragon, E.P.; Warner, B.E.
1998-04-28
A new material-processing apparatus for precision micromachining with a short-pulsed, high-repetition-rate visible laser comprises a near-diffraction-limited laser, a high-speed precision two-axis tilt mirror for steering the laser beam, an optical system for either focusing or imaging the laser beam on the part, and a part holder that may consist of a cover plate and a back plate. The system is generally useful for precision drilling, cutting, milling and polishing of metals and ceramics, and has broad application in manufacturing precision components. Precision machining has been demonstrated through percussion drilling and trepanning using this system. With a 30 W copper vapor laser running at multi-kHz pulse repetition frequency, straight parallel holes with sizes varying from 500 microns to less than 25 microns and with aspect ratios up to 1:40 have been consistently drilled with good surface finish on a variety of metals. Micromilling and microdrilling on ceramics using a 250 W copper vapor laser have also been demonstrated with good results. Materialographic sections of machined parts show little (submicron-scale) recast layer and heat-affected zone. 1 fig.
Precise Truss Assembly Using Commodity Parts and Low Precision Welding
Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, W. R.; Correll, Nikolaus
2014-01-01
Hardware and software design and system integration for an intelligent precision jigging robot (IPJR), which allows high-precision assembly using commodity parts and low-precision bonding, is described. Preliminary 2D experiments that are motivated by the problem of assembling space telescope optical benches and very large manipulators on orbit using inexpensive, stock hardware and low-precision welding are also described. An IPJR is a robot that acts as the precise "jigging", holding parts of a local structure assembly site in place, while an external low-precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (for this prototype, a human using only low-precision tools) to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. The analysis of the assembly error and the results of building a square structure and a ring structure are discussed. Options for future work, to extend the IPJR paradigm to building 3D structures at micron precision, are also summarized.
[Precision nutrition in the era of precision medicine].
Chen, P Z; Wang, H
2016-12-06
Precision medicine has been increasingly incorporated into clinical practice and is enabling a new era for disease prevention and treatment. As an important constituent of precision medicine, precision nutrition has also been drawing more attention during physical examinations. The main aim of precision nutrition is to provide safe and efficient intervention methods for disease treatment and management, through fully considering the genetics, lifestyle (dietary, exercise and lifestyle choices), metabolic status, gut microbiota and physiological status (nutrient level and disease status) of individuals. Three major components should be considered in precision nutrition, including individual criteria for sufficient nutritional status, biomarker monitoring or techniques for nutrient detection and the applicable therapeutic or intervention methods. It was suggested that, in clinical practice, many inherited and chronic metabolic diseases might be prevented or managed through precision nutritional intervention. For generally healthy populations, because lifestyles, dietary factors, genetic factors and environmental exposures vary among individuals, precision nutrition is warranted to improve their physical activity and reduce disease risks. In summary, research and practice is leading toward precision nutrition becoming an integral constituent of clinical nutrition and disease prevention in the era of precision medicine.
Precision medicine in myasthenia gravis: begin from the data precision
Hong, Yu; Xie, Yanchen; Hao, Hong-Jun; Sun, Ren-Cheng
2016-01-01
Myasthenia gravis (MG) is a prototypic autoimmune disease with overt clinical and immunological heterogeneity. MG data are currently far from individually precise, partially due to the rarity and heterogeneity of this disease. In this review, we provide basic insights into MG data precision, including onset age, presenting symptoms, generalization, thymus status, pathogenic autoantibodies, muscle involvement, severity and response to treatment, based on the literature and our previous studies. Subgroups and quantitative traits of MG are discussed in the sense of data precision. The role of disease registries and the scientific basis of precise analysis are also discussed to ensure better collection and analysis of MG data. PMID:27127759
Beam dynamics in the final focus section of the future linear collider
AUTHOR|(SzGeCERN)739431; TOMAS, Rogelio
The exploration of new physics at the "tera-electron-volt" (TeV) scale with precision measurements requires lepton colliders providing high luminosities to obtain enough statistics for the particle interaction analysis. In order to achieve design luminosity values, linear colliders feature nanometer beam spot sizes at the Interaction Point (IP). In addition to several effects affecting the luminosity, three main issues in achieving the beam size demagnification in the Final Focus Section (FFS) of the accelerator are the chromaticity correction, the synchrotron radiation effects and the correction of the lattice errors. This thesis considers two important aspects for linear colliders: pushing the limits of linear collider design, in particular the chromaticity correction and the radiation effects at 3 TeV, and the instrumentation and experimental work on beam stabilization in a test facility. The current linear collider projects, CLIC [CLICdes] and ILC [ILCdes], have lattices designed using...
Linear Covariance Analysis for a Lunar Lander
Jang, Jiann-Woei; Bhatt, Sagar; Fritz, Matthew; Woffinden, David; May, Darryl; Braden, Ellen; Hannan, Michael
2017-01-01
A next-generation lunar lander Guidance, Navigation, and Control (GNC) system, which includes a state-of-the-art optical sensor suite, is proposed in a concept design cycle. The design goal is to allow the lander to softly land within the prescribed landing precision. The achievement of this precision landing requirement depends on proper selection of the sensor suite. In this paper, a robust sensor selection procedure is demonstrated using a Linear Covariance (LinCov) analysis tool developed by Draper.
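As a hypothetical illustration of what a linear covariance tool computes (our sketch, not Draper's LinCov implementation), the code below propagates only the state covariance of a simplified 1D altitude/velocity lander model through dynamics and altimeter measurement updates; all matrices and noise levels are assumed for the example.

```python
import math

def mat_mul(a, b):
    # Plain nested-list matrix multiply.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(r) for r in zip(*m)]

dt = 1.0
F = [[1.0, dt], [0.0, 1.0]]     # state transition: altitude, vertical rate
Q = [[0.0, 0.0], [0.0, 0.01]]   # process noise (assumed disturbance level)
H = [[1.0, 0.0]]                # altimeter measures altitude only
R = 4.0                         # altimeter noise variance (assumed)
P = [[100.0, 0.0], [0.0, 1.0]]  # initial state covariance

for _ in range(10):
    # Propagate covariance: P <- F P F^T + Q
    FPFt = mat_mul(mat_mul(F, P), transpose(F))
    P = [[FPFt[i][j] + Q[i][j] for j in range(2)] for i in range(2)]
    # Measurement update: standard Kalman covariance update
    S = mat_mul(mat_mul(H, P), transpose(H))[0][0] + R       # innovation variance
    K = [[row[0] / S] for row in mat_mul(P, transpose(H))]   # Kalman gain
    IKH = [[(1.0 if i == j else 0.0) - K[i][0] * H[0][j] for j in range(2)]
           for i in range(2)]
    P = mat_mul(IKH, P)                                      # P <- (I - K H) P

sigma_alt = math.sqrt(P[0][0])  # predicted 1-sigma altitude knowledge
```

The point of the LinCov approach is that such covariance recursions predict achievable navigation precision for a candidate sensor suite without Monte Carlo trajectory simulation.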
Linear and non-linear optics of condensed matter
McLean, T.P.
1977-01-01
Part I - Linear optics: 1. General introduction. 2. Frequency dependence of ε(ω, k). 3. Wave-vector dependence of ε(ω, k). 4. Tensor character of ε(ω, k). Part II - Non-linear optics: 5. Introduction. 6. A classical theory of non-linear response in one dimension. 7. The generalization to three dimensions. 8. General properties of the polarizability tensors. 9. The phase-matching condition. 10. Propagation in a non-linear dielectric. 11. Second harmonic generation. 12. Coupling of three waves. 13. Materials and their non-linearities. 14. Processes involving energy exchange with the medium. 15. Two-photon absorption. 16. Stimulated Raman effect. 17. Electro-optic effects. 18. Limitations of the approach presented here. (author)
El Kabiri, M.; Paranthoen, P.; Rosset, L.; Lecordier, J.C. [Rouen Univ., 76 - Mont-Saint-Aignan (France)
1997-12-31
An experimental study of heat transport downstream of a linear source installed in a turbulent boundary layer is performed. Second- and third-order moments of the velocity and temperature fields are presented and compared to gradient-type modeling. (J.S.) 7 refs.
Antonella Del Rosso
2014-01-01
There are more than 100 of them in the LHC ring and they have a total of about 400 degrees of freedom. Each one has 4 motors and the newest ones have their own beam-monitoring pickups. Their jaws constrain the relativistic, high-energy particles to a very small transverse area and protect the machine aperture. We are speaking about the LHC collimators, those ultra-precise instruments that leave escaping unstable particles no chance. The internal structure of a new LHC collimator featuring (see red arrow) one of the beam position monitor's pickups. Designed at CERN but mostly produced by very specialised manufacturers in Europe, the LHC collimators are among the most complex elements of the accelerator. Their job is to control and safely dispose of the halo particles that are produced by unavoidable beam losses from the circulating beam core. “The LHC collimation system has been designed to ensure that beam losses in superconducting magnets remain below quench limits in al...
Quantum linear Boltzmann equation
Vacchini, Bassano; Hornberger, Klaus
2009-01-01
We review the quantum version of the linear Boltzmann equation, which describes in a non-perturbative fashion, by means of scattering theory, how the quantum motion of a single test particle is affected by collisions with an ideal background gas. A heuristic derivation of this Lindblad master equation is presented, based on the requirement of translation-covariance and on the relation to the classical linear Boltzmann equation. After analyzing its general symmetry properties and the associated relaxation dynamics, we discuss a quantum Monte Carlo method for its numerical solution. We then review important limiting forms of the quantum linear Boltzmann equation, such as the case of quantum Brownian motion and pure collisional decoherence, as well as the application to matter wave optics. Finally, we point to the incorporation of quantum degeneracies and self-interactions in the gas by relating the equation to the dynamic structure factor of the ambient medium, and we provide an extension of the equation to include internal degrees of freedom.
Emma, P.
1995-01-01
The Stanford Linear Collider (SLC) is the first and only high-energy e+e− linear collider in the world. Its most remarkable features are high-intensity, submicron-sized, polarized (e−) beams at a single interaction point. The main challenges posed by these unique characteristics include machine-wide emittance preservation, consistent high-intensity operation, polarized electron production and transport, and the achievement of a high degree of beam stability on all time scales. In addition to serving as an important machine for the study of Z0 boson production and decay using polarized beams, the SLC is also an indispensable source of hands-on experience for future linear colliders. Each new year of operation has been highlighted with a marked improvement in performance. The most significant improvements for the 1994-95 run include new low-impedance vacuum chambers for the damping rings, an upgrade to the optics and diagnostics of the final focus systems, and a higher degree of polarization from the electron source. As a result, the average luminosity has nearly doubled over the previous year with peaks approaching 10^30 cm^-2 s^-1 and an 80% electron polarization at the interaction point. These developments as well as the remaining identifiable performance limitations will be discussed
Elekta Precise Table characteristics of IGRT remote table positioning
Riis, Hans L.; Zimmermann, Sune J.
2009-01-01
Cone beam CT is a powerful tool to ensure optimum patient positioning in radiotherapy. When a cone beam CT scan of a patient is acquired, the scan data are compared and evaluated against a reference image set and the patient position offset is calculated. Via the linac control system, the patient is moved to correct for the position offset and treatment starts. This procedure requires a reliable system for moving the patient. In this work we present a new method to characterize the reproducibility, linearity and accuracy of table positioning. The method applies to all treatment tables used in radiotherapy. Material and methods. The table characteristics are investigated on our two recent Elekta Synergy Platforms equipped with the Precise Table, installed in a shallow pit concrete cavity. Remote positioning of the table uses the auto set-up (ASU) feature in the linac control system software Desktop Pro R6.1. The ASU is used clinically to correct for the patient positioning offset calculated via the cone beam CT (XVI) software. High-precision steel rulers and a USB microscope were used to detect the relative table position in the vertical, lateral and longitudinal directions. The effect of the patient is simulated by applying an external load on the iBEAM table top. For each table position, an image of the ruler is exposed and the displayed value of the actual table position in the linac control system is read out. The table is moved over its full range in the lateral direction (50 cm) and the longitudinal direction (100 cm), while in the vertical direction a limited range is used (40 cm). Results and discussion. Our results show a linear relation between the linac control system read-out and the measured position. Effects of imperfect calibration are seen. A reproducibility within a standard deviation of 0.22 mm in the lateral and longitudinal directions and within 0.43 mm in the vertical direction has been observed. The usage of XVI requires knowledge of the characteristics of remote table positioning. It is our opinion
Hidden SUSY from precision gauge unification
Krippendorf, Sven; Nilles, Hans Peter [Bonn Univ. (Germany). Bethe Center for Theoretical Physics; Bonn Univ. (Germany). Physikalisches Inst.; Ratz, Michael [Technische Univ. Muenchen, Garching (Germany). Physik-Department; Winkler, Martin Wolfgang [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2013-06-15
We revisit the implications of naturalness and gauge unification in the MSSM. We find that precision unification of the couplings in connection with a small μ parameter requires a highly compressed gaugino pattern as it is realized in mirage mediation. Due to the small mass difference between gluino and LSP, collider limits on the gluino mass are drastically relaxed. Without further assumptions, the relic density of the LSP is very close to the observed dark matter density due to coannihilation effects.
BLAS- BASIC LINEAR ALGEBRA SUBPROGRAMS
Krogh, F. T.
1994-01-01
The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN callable routines for employing standard techniques in performing the basic operations of numerical linear algebra. The BLAS library was developed to provide a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The subprograms available in the library cover the operations of dot product, multiplication of a scalar and a vector, vector plus a scalar times a vector, Givens transformation, modified Givens transformation, copy, swap, Euclidean norm, sum of magnitudes, and location of the largest magnitude element. Since these subprograms are to be used in an ANSI FORTRAN context, the cases of single precision, double precision, and complex data are provided for. All of the subprograms have been thoroughly tested and produce consistent results even when transported from machine to machine. BLAS contains Assembler versions and FORTRAN test code for any of the following compilers: Lahey F77L, Microsoft FORTRAN, or IBM Professional FORTRAN. It requires the Microsoft Macro Assembler and a math co-processor. The PC implementation allows individual arrays of over 64K. The BLAS library was developed in 1979. The PC version was made available in 1986 and updated in 1988.
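For illustration, here are pure-Python sketches of three of the listed Level 1 operations (dot product, axpy, and Givens rotation setup); they mirror the semantics of the classic BLAS routines rather than reproduce the FORTRAN interfaces.

```python
import math

def dot(x, y):
    """DDOT-style dot product of two vectors."""
    return sum(xi * yi for xi, yi in zip(x, y))

def axpy(a, x, y):
    """DAXPY-style 'a times x plus y': returns a*x + y elementwise."""
    return [a * xi + yi for xi, yi in zip(x, y)]

def rotg(a, b):
    """DROTG-style Givens rotation setup: c, s with -s*a + c*b == 0."""
    r = math.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0  # identity rotation for the zero vector
    return a / r, b / r

c, s = rotg(3.0, 4.0)  # rotation that zeroes the second component of (3, 4)
```

A Givens rotation built this way maps (a, b) to (r, 0), which is the building block BLAS provides for annihilating single matrix entries.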
Beckmann, Moritz
2013-12-15
At the planned International Linear Collider (ILC), the longitudinal beam polarization needs to be determined with an unprecedented precision. For that purpose, the beam delivery systems (BDS) are each equipped with two laser Compton polarimeters, which are foreseen to achieve a systematic uncertainty of ≤ 0.25%. The polarimeters are located 1.6 km upstream and 150 m downstream of the e+e− interaction point (IP). The average luminosity-weighted longitudinal polarization P_z^lumi, which is the decisive quantity for the experiments, has to be determined from these measurements with the best possible precision. Therefore, a detailed understanding of the spin transport in the BDS is mandatory to estimate how precisely the longitudinal polarization at the IP is known from the polarimeter measurements. The envisaged precision for the propagation of the measurement value is ≤ 0.1%. This thesis scrutinizes the spin transport in view of the achievable precision. A detailed beamline simulation for the BDS has been developed, including the simulation of the beam-beam collisions at the IP. The following factors which might limit the achievable precision are investigated: a variation of the beam parameters, the beam alignment precision at the polarimeters and the IP, the bunch rotation at the IP, the detector magnets, the beam-beam collisions, the emission of synchrotron radiation and misalignments of the beamline elements. In the absence of collisions, a precision of 0.085% on the propagation of the measured longitudinal polarization has been found achievable. This result, however, depends mainly on the presumed precisions for the parallel alignment of the beam at the polarimeters and for the alignment of the polarization vector. In the presence of collisions, the measurement at the downstream polarimeter depends strongly on the intensity of the collision and the size of the polarimeter laser spot. Therefore, a more detailed study of the laser-bunch interaction is
MEASUREMENT AND PRECISION, EXPERIMENTAL VERSION.
Harvard Univ., Cambridge, MA. Harvard Project Physics.
THIS DOCUMENT IS AN EXPERIMENTAL VERSION OF A PROGRAMED TEXT ON MEASUREMENT AND PRECISION. PART I CONTAINS 24 FRAMES DEALING WITH PRECISION AND SIGNIFICANT FIGURES ENCOUNTERED IN VARIOUS MATHEMATICAL COMPUTATIONS AND MEASUREMENTS. PART II BEGINS WITH A BRIEF SECTION ON EXPERIMENTAL DATA, COVERING SUCH POINTS AS (1) ESTABLISHING THE ZERO POINT, (2)…
Study and program implementation of transient curves' piecewise linearization
Shi Yang; Zu Hongbiao
2014-01-01
Background: Transient curves are essential for the stress analysis of related equipment in a nuclear power plant (NPP). The actual operating data or the design transient data of a NPP usually consist of a large number of data points with very short time intervals. To simplify the analysis, transient curves are generally piecewise linearized in advance. Up to now, the piecewise linearization of transient curves has been accomplished manually. Purpose: The aim is to develop a method for the piecewise linearization of transient curves, and to implement it by programming. Methods: First, the fitting line through a number of data points is obtained by the least-squares method. A segment of the fitting line is closed when the accumulated linearization error exceeds the preset limit as the number of points increases. The linearization of subsequent data points then begins from the last point of the preceding segment to obtain the next segment in the same way, continuing until the final data point is included. Finally, junction points are averaged to connect the segments. Results: A computer program named PLTC (Piecewise Linearization for Transient Curves) was implemented and verified on the standard sine curve and typical transient curves of a NPP. Conclusion: The method and the PLTC program are well suited to the piecewise linearization of transient curves, improving both efficiency and precision. (authors)
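A minimal sketch of a greedy scheme of this kind (our reconstruction, not the PLTC code, and simplified: it uses the maximum deviation as the error measure and joins segments at shared data points rather than averaging junction points):

```python
def fit_line(pts):
    """Least-squares line y = a*t + b through (t, y) points."""
    n = len(pts)
    st = sum(t for t, _ in pts); sy = sum(y for _, y in pts)
    stt = sum(t * t for t, _ in pts); sty = sum(t * y for t, y in pts)
    denom = n * stt - st * st
    a = (n * sty - st * sy) / denom if denom else 0.0
    b = (sy - a * st) / n
    return a, b

def piecewise_linearize(pts, tol):
    """Grow each segment until the worst fit error exceeds tol; the next
    segment starts from the last point of the preceding one."""
    segments, start = [], 0
    while start < len(pts) - 1:
        end = start + 1
        while end + 1 < len(pts):
            a, b = fit_line(pts[start:end + 2])
            if max(abs(y - (a * t + b)) for t, y in pts[start:end + 2]) > tol:
                break  # adding one more point would violate the error limit
            end += 1
        segments.append((pts[start], pts[end]))
        start = end
    return segments

# Example: a V-shaped curve collapses to exactly two linear segments.
pts = [(float(t), abs(t - 5.0)) for t in range(11)]
segments = piecewise_linearize(pts, tol=1e-9)
```

On real transient data, tol trades off the number of segments against the linearization error, which is exactly the efficiency/precision balance the abstract describes.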
Precision medicine for nurses: 101.
Lemoine, Colleen
2014-05-01
To introduce the key concepts and terms associated with precision medicine and support understanding of future developments in the field by providing an overview and history of precision medicine, related ethical considerations, and nursing implications. Current nursing, medical and basic science literature. Rapid progress in understanding the oncogenic drivers associated with cancer is leading to a shift toward precision medicine, where treatment is based on targeting specific genetic and epigenetic alterations associated with a particular cancer. Nurses will need to embrace the paradigm shift to precision medicine, expend the effort necessary to learn the essential terminology, concepts and principles, and work collaboratively with physician colleagues to best position our patients to maximize the potential that precision medicine can offer. Copyright © 2014 Elsevier Inc. All rights reserved.
Linearly constrained minimax optimization
Madsen, Kaj; Schjær-Jacobsen, Hans
1978-01-01
We present an algorithm for nonlinear minimax optimization subject to linear equality and inequality constraints which requires first order partial derivatives. The algorithm is based on successive linear approximations to the functions defining the problem. The resulting linear subproblems...
Advanced bioanalytics for precision medicine.
Roda, Aldo; Michelini, Elisa; Caliceti, Cristiana; Guardigli, Massimo; Mirasoli, Mara; Simoni, Patrizia
2018-01-01
Precision medicine is a new paradigm that combines diagnostic, imaging, and analytical tools to produce accurate diagnoses and therapeutic interventions tailored to the individual patient. This approach stands in contrast to the traditional "one size fits all" concept, according to which researchers develop disease treatments and preventions for an "average" patient without considering individual differences. The "one size fits all" concept has led to many ineffective or inappropriate treatments, especially for pathologies such as Alzheimer's disease and cancer. Now, precision medicine is receiving massive funding in many countries, thanks to its social and economic potential in terms of improved disease prevention, diagnosis, and therapy. Bioanalytical chemistry is critical to precision medicine. This is because identifying an appropriate tailored therapy requires researchers to collect and analyze information on each patient's specific molecular biomarkers (e.g., proteins, nucleic acids, and metabolites). In other words, precision diagnostics is not possible without precise bioanalytical chemistry. This Trend article highlights some of the most recent advances, including massive analysis of multilayer omics, and new imaging technique applications suitable for implementing precision medicine. Graphical abstract: Precision medicine combines bioanalytical chemistry, molecular diagnostics, and imaging tools for performing accurate diagnoses and selecting optimal therapies for each patient.
Squares of Random Linear Codes
Cascudo Pueyo, Ignacio; Cramer, Ronald; Mirandola, Diego
2015-01-01
Given a linear code $C$, one can define the $d$-th power of $C$ as the span of all componentwise products of $d$ elements of $C$. A power of $C$ may quickly fill the whole space. Our purpose is to answer the following question: does the square of a code ``typically'' fill the whole space? We give a positive answer for codes of dimension $k$ and length roughly $\frac{1}{2}k^2$ or smaller. Moreover, the convergence speed is exponential if the difference $k(k+1)/2-n$ is at least linear in $k$. The proof uses random coding and combinatorial arguments, together with algebraic tools involving the precise...
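The componentwise-product construction can be made concrete over GF(2), where the product is bitwise AND; a small sketch (the example generator vectors are ours, chosen so that the square strictly contains the code):

```python
from itertools import combinations_with_replacement

def span_gf2(vectors, n):
    """All GF(2) linear combinations (XOR-span) of the given length-n vectors."""
    space = {tuple([0] * n)}
    for v in vectors:
        space |= {tuple(a ^ b for a, b in zip(s, v)) for s in space}
    return space

def code_square(generators, n):
    """Square of the code: span of componentwise products of codewords.

    Over GF(2) the componentwise product is AND, which distributes over XOR,
    so products of generator pairs suffice to generate the square."""
    prods = [tuple(a & b for a, b in zip(u, v))
             for u, v in combinations_with_replacement(generators, 2)]
    return span_gf2(prods, n)

g1, g2 = (1, 1, 1, 0), (0, 1, 1, 1)
C = span_gf2([g1, g2], 4)        # dimension 2: 4 codewords
C2 = code_square([g1, g2], 4)    # the square gains the product g1*g2
```

Here the square has dimension 3 while the code has dimension 2, a toy instance of a power of $C$ growing toward the whole space.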
Linear inflation from quartic potential
Kannike, Kristjan; Racioppi, Antonio [National Institute of Chemical Physics and Biophysics, Rävala 10, 10143 Tallinn (Estonia)]; Raidal, Martti [National Institute of Chemical Physics and Biophysics, Rävala 10, 10143 Tallinn (Estonia); Institute of Physics, University of Tartu, Tartu (Estonia)]
2016-01-07
We show that if the inflaton has a non-minimal coupling to gravity and the Planck scale is dynamically generated, the results of Coleman-Weinberg inflation are confined between two attractor solutions: quadratic inflation, which is ruled out by recent measurements, and linear inflation, which instead lies in the experimentally allowed region. The minimal scenario has only one free parameter, the inflaton's non-minimal coupling to gravity, which determines all physical parameters such as the tensor-to-scalar ratio and the reheating temperature of the Universe. Should more precise future measurements of inflationary parameters point towards linear inflation, further interest in scale-invariant scenarios would be motivated.
Linear polarized fluctuations in the cosmic microwave background
Partridge, R.B.; Nowakowski, J.; Martin, H.M.
1988-01-01
We report here limits on the linear (and circular) polarization of the cosmic microwave background on small angular scales, 18'' ≤ θ ≤ 160''. The limits are based on radio maps of the Stokes parameters and of the polarization (linear and circular). (author)
The precision of higgs boson measurements and their implications
J. Conway et al.
2002-01-01
The prospects for a precise exploration of the properties of a single or of many observed Higgs bosons at future accelerators are summarized, with particular emphasis on the capabilities of a Linear Collider (LC). Some implications of these measurements for discerning new physics beyond the Standard Model (SM) are also discussed.
Dynamics of number systems computation with arbitrary precision
Kurka, Petr
2016-01-01
This book is a source of valuable and useful information on the dynamics of number systems and scientific computation with arbitrary precision. It is addressed to scholars, scientists, and engineers, as well as graduate students. The treatment is elementary and self-contained, with relevance for both theory and applications. The basic prerequisites are linear algebra and matrix calculus.
Precise stacking and bonding technology for RDDS structure
Higo, T; Toge, N.; Suzuki, T.
2000-01-01
The X-band accelerating structure called RDDS1 (Rounded Damped Detuned Structure) for the linear collider has been developed. The main body of RDDS1 was successfully fabricated in Japan (KEK, IHI). We established basic fabrication techniques through the development of prototype structures, including RDDS1. The precise stacking and bonding technologies for the RDDS structure are presented in this paper. (author)
Numerical precision control and GRACE
Fujimoto, J.; Hamaguchi, N.; Ishikawa, T.; Kaneko, T.; Morita, H.; Perret-Gallix, D.; Tokura, A.; Shimizu, Y.
2006-01-01
The control of the numerical precision of large-scale computations like those generated by the GRACE system for automatic Feynman diagram calculations has become an intrinsic part of those packages. Recently, Hitachi Ltd. has developed in FORTRAN a new library HMLIB for quadruple and octuple precision arithmetic where the number of lost bits is made available. This library has been tested with success on the 1-loop radiative correction to e+e− → e+e−τ+τ−. It is shown that the approach followed by HMLIB provides an efficient way to track down the source of numerical significance losses and to deliver high-precision results while minimizing computing time.
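The lost-bits bookkeeping described above can be mimicked in a few lines: compare a double-precision result against a reference computed at much higher precision, and convert the relative error into an estimated count of lost significand bits. This is a toy sketch with Python's standard decimal module standing in for HMLIB's quadruple/octuple arithmetic, applied to a deliberately cancellation-prone expression rather than a Feynman-diagram amplitude:

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 50  # ~166-bit reference arithmetic, standing in for quad/octuple precision

def lost_bits(x_double, x_reference):
    """Estimate how many of a double's 53 significand bits were lost,
    by comparison against a high-precision reference value."""
    if x_reference == 0:
        return 53 if x_double != 0 else 0
    rel_err = abs((Decimal(x_double) - x_reference) / x_reference)
    if rel_err == 0:
        return 0
    return min(53, max(0, 53 + int(rel_err.ln() / Decimal(2).ln())))

# A cancellation-prone expression: (1 - cos x) / x^2 for small x
x = 1e-7
double_val = (1.0 - math.cos(x)) / x ** 2

xd = Decimal(x)  # exact value of the double x
# high-precision cosine via its Taylor series (plenty of terms for this x)
cos_hi = sum((-1) ** k * xd ** (2 * k) / math.factorial(2 * k) for k in range(10))
ref_val = (1 - cos_hi) / xd ** 2

print(lost_bits(double_val, ref_val))  # a large share of the 53 bits are lost to cancellation
```

The same comparison idea, run continuously inside the arithmetic library rather than after the fact, is what lets HMLIB point at the operation where significance is being destroyed.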
Precision medicine for cancer with next-generation functional diagnostics.
Friedman, Adam A; Letai, Anthony; Fisher, David E; Flaherty, Keith T
2015-12-01
Precision medicine is about matching the right drugs to the right patients. Although this approach is technology agnostic, in cancer there is a tendency to make precision medicine synonymous with genomics. However, genome-based cancer therapeutic matching is limited by incomplete biological understanding of the relationship between phenotype and cancer genotype. This limitation can be addressed by functional testing of live patient tumour cells exposed to potential therapies. Recently, several 'next-generation' functional diagnostic technologies have been reported, including novel methods for tumour manipulation, molecularly precise assays of tumour responses and device-based in situ approaches; these address the limitations of the older generation of chemosensitivity tests. The promise of these new technologies suggests a future diagnostic strategy that integrates functional testing with next-generation sequencing and immunoprofiling to precisely match combination therapies to individual cancer patients.
Ultra-Low-Dropout Linear Regulator
Thornton, Trevor; Lepkowski, William; Wilk, Seth
2011-01-01
A radiation-tolerant, ultra-low-dropout linear regulator can operate between -150 and 150 C. Prototype components were demonstrated to be performing well after a total ionizing dose of 1 Mrad (Si). Unlike existing components, the linear regulator developed during this activity is unconditionally stable over all operating regimes without the need for an external compensation capacitor. The absence of an external capacitor reduces overall system mass/volume, increases reliability, and lowers cost. Linear regulators generate a precisely controlled voltage for electronic circuits regardless of fluctuations in the load current that the circuit draws from the regulator.
Foundations of linear and generalized linear models
Agresti, Alan
2015-01-01
A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,
Feedback systems for linear colliders
Hendrickson, L; Himel, Thomas M; Minty, Michiko G; Phinney, N; Raimondi, Pantaleo; Raubenheimer, T O; Shoaee, H; Tenenbaum, P G
1999-01-01
Feedback systems are essential for stable operation of a linear collider, providing a cost-effective method for relaxing tight tolerances. In the Stanford Linear Collider (SLC), feedback controls beam parameters such as trajectory, energy, and intensity throughout the accelerator. A novel dithering optimization system which adjusts final focus parameters to maximize luminosity contributed to achieving record performance in the 1997-98 run. Performance limitations of the steering feedback have been investigated, and improvements have been made. For the Next Linear Collider (NLC), extensive feedback systems are planned as an integral part of the design. Feedback requirements for JLC (the Japanese Linear Collider) are essentially identical to NLC; some of the TESLA requirements are similar but there are significant differences. For NLC, algorithms which incorporate improvements upon the SLC implementation are being prototyped. Specialized systems for the damping rings, rf and interaction point will operate at hi...
Lung Cancer Precision Medicine Trials
Patients with lung cancer are benefiting from the boom in targeted and immune-based therapies. With a series of precision medicine trials, NCI is keeping pace with the rapidly changing treatment landscape for lung cancer.
Precision engineering: an evolutionary perspective.
Evans, Chris J
2012-08-28
Precision engineering is a relatively new name for a technology with roots going back over a thousand years; those roots span astronomy, metrology, fundamental standards, manufacturing and money-making (literally). Throughout that history, precision engineers have created links across disparate disciplines to generate innovative responses to society's needs and wants. This review combines historical and technological perspectives to illuminate precision engineering's current character and directions. It first provides a working definition of precision engineering and then reviews the subject's roots. Examples are given showing the contributions of the technology to society, while simultaneously showing the creative tension between the technological convergence that spurs new directions and the vertical disintegration that optimizes manufacturing economics.
How GNSS Enables Precision Farming
2014-12-01
Precision farming: Feeding a Growing Population Enables Those Who Feed the World. Immediate and Ongoing Needs - population growth (more to feed) - urbanization (decrease in arable land) Double food production by 2050 to meet world demand. To meet thi...
Scalar-tensor linear inflation
Artymowski, Michał [Institute of Physics, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków (Poland); Racioppi, Antonio, E-mail: Michal.Artymowski@uj.edu.pl, E-mail: Antonio.Racioppi@kbfi.ee [National Institute of Chemical Physics and Biophysics, Rävala 10, 10143 Tallinn (Estonia)
2017-04-01
We investigate two approaches to non-minimally coupled gravity theories which present linear inflation as attractor solution: a) the scalar-tensor theory approach, where we look for a scalar-tensor theory that would restore results of linear inflation in the strong coupling limit for a non-minimal coupling to gravity of the form of f (φ) R /2; b) the particle physics approach, where we motivate the form of the Jordan frame potential by loop corrections to the inflaton field. In both cases the Jordan frame potentials are modifications of the induced gravity inflationary scenario, but instead of the Starobinsky attractor they lead to linear inflation in the strong coupling limit.
Fiber Scrambling for High Precision Spectrographs
Kaplan, Zachary; Spronck, J. F. P.; Fischer, D.
2011-05-01
The detection of Earth-like exoplanets with the radial velocity method requires extreme Doppler precision and long-term stability in order to measure tiny reflex velocities in the host star. Recent planet searches have led to the detection of so-called "super-Earths" (up to a few Earth masses) that induce radial velocity changes of about 1 m/s. However, the detection of true Earth analogs requires a precision of 10 cm/s. One of the largest factors limiting Doppler precision is variation in the Point Spread Function (PSF) from observation to observation due to changes in the illumination of the slit and spectrograph optics. Thus, this stability has become a focus of current instrumentation work. Fiber optics have been used since the 1980s to couple telescopes to high-precision spectrographs, initially for simpler mechanical design and control. However, fiber optics are also naturally efficient scramblers. Scrambling refers to a fiber's ability to produce an output beam independent of input. Our research is focused on characterizing the scrambling properties of several types of fibers, including circular, square and octagonal fibers. By measuring the intensity distribution after the fiber as a function of input beam position, we can simulate guiding errors that occur at an observatory. Through this, we can determine which fibers produce the most uniform outputs for the most severe guiding errors, improving the PSF and allowing sub-m/s precision. However, extensive testing of fibers of supposedly identical core diameter, length and shape from the same manufacturer has revealed the "personality" of individual fibers. Personality describes differing intensity patterns for supposedly duplicate fibers illuminated identically. Here, we present our results on scrambling characterization as a function of fiber type, while studying individual fiber personality.
Proposal for a CLEO precision vertex detector
1991-01-01
Fermilab experiment E691 and CERN experiment NA32 have demonstrated the enormous power of precision vertexing for studying heavy quark physics. Nearly all collider experiments now have or are installing precision vertex detectors. This is a proposal for a precision vertex detector for CLEO, which will be the pre-eminent heavy quark experiment for at least the next 5 years. The purpose of a precision vertex detector for CLEO is to enhance the capabilities for isolating B, charm, and tau decays and to make it possible to measure the decay time. The precision vertex detector will also significantly improve strange particle identification and help with the tracking. The installation and use of this detector at CLEO is an important step in developing a vertex detector for an asymmetric B factory and therefore in observing CP violation in B decays. The CLEO environment imposes a number of unique conditions and challenges. The machine will be operating near the Υ(4S) in energy. This means that B's are produced with a very small velocity and travel a distance about 1/2 that of the expected vertex position resolution. As a consequence, B decay time information will not be useful for most physics. On the other hand, the charm products of B decays have a higher velocity. For the long-lived D+ in particular, vertex information can be used to isolate the charm particle on an event-by-event basis. This helps significantly in reconstructing B's. The vertex resolution for D's from B's is limited by multiple Coulomb scattering of the necessarily rather low momentum tracks. As a consequence, it is essential to minimize the material, as measured in radiation lengths, in the beam pipe and the vertex detector itself. It is also essential to build the beam pipe and detector with the smallest possible radius.
Precision experiments with antihydrogen: an outlook
Doser, Michael
2011-01-01
After a first generation of experiments has demonstrated the feasibility of forming - in a controlled manner - low-energy antihydrogen atoms via several different techniques, a second generation of experiments is now attempting to trap sufficiently cold atoms, or to form an atomic beam of antihydrogen atoms. The goal of these experiments is to carry out comparative precision spectroscopy between hydrogen and antihydrogen, in view of testing the CPT theorem, either through 1S-2S spectroscopy or via a measurement of the hyperfine splitting of the ground state of antihydrogen. A related class of experiments combines techniques from these experiments with recent developments in the formation of positronium to test the gravitational interaction between matter and antimatter. A significant number of challenges and limitations will still need to be overcome before precision measurements with antihydrogen become feasible, with the next significant milestones being either trapping of antihydrogen or the formation of a beam of antihydrogen.
Munehiro, H
1980-05-29
When the carriage of a printer is driven through a rotating motor, the accuracy of the carriage position is limited by rotation or contraction and ageing of the cable. To solve this problem, a direct drive system was proposed in which the printer carriage is driven by a linear motor. If the motor circuit of such a motor is to be kept compact, then the magnetic flux density in the air gap or the motor travel must be reduced. It is the purpose of this invention to create an electrodynamic linear motor that is compact and light and yet has a relatively high constant force over a large travel. The invention is characterised by the fact that magnetic fields of alternating polarity are generated at equal intervals in the magnetic field, and that the coil arrangement has two adjacent coils whose size corresponds to half the length of each magnetic pole. A logic circuit is provided to select one of the two coils and to determine the direction of the current depending on the signals of a magnetic field sensor on the coil arrangement.
FROM PERSONALIZED TO PRECISION MEDICINE
K. V. Raskina
2017-01-01
The need to maintain a high quality of life against a backdrop of its inevitably increasing duration is one of the main problems of modern health care. The concept of the "right drug to the right patient at the right time", initially known as "personalized" medicine, is now unanimously endorsed by the international scientific community as "precision medicine". Precision medicine takes all individual characteristics into account: genetic diversity, environment, lifestyle, and even the bacterial microflora, and it draws on the latest technological developments to ensure that each patient receives the care best suited to his or her condition. In the United States, Canada and France, national precision medicine programs have already been submitted and implemented. The aim of this review is to describe the dynamic integration of precision medicine methods into routine medical practice and the life of modern society. The description of the new paradigm's prospects is complemented by figures proving the success already achieved in the application of precision methods, for example in the targeted therapy of cancer. All in all, the presence of real-life examples proving the regularity of the transition to a new paradigm, together with the wide range of available and constantly evolving technical and diagnostic capabilities, makes an all-round transition to precision medicine almost inevitable.
Wang, Guochao; Xie, Xuedong; Yan, Shuhua
2010-10-01
The principle of a dual-wavelength single-grating nanometer displacement measuring system with long range, high precision, and good stability is presented. Because the displacement measurement must reach nanometer-level precision, errors caused by a variety of adverse factors must be taken into account. In this paper, errors due to the non-ideal performance of the dual-frequency laser, including the linear error caused by wavelength instability and the non-linear error caused by elliptical polarization of the laser, are discussed and analyzed. On the basis of theoretical modeling, the corresponding error formulas are derived. Through simulation, the limit value of the linear error caused by wavelength instability is 2 nm, and on the assumption that Tx = 0.85 and Ty = 1 for the polarizing beam splitter (PBS), the limit values of the non-linear error caused by elliptical polarization are 1.49 nm, 2.99 nm, and 4.49 nm for non-orthogonality angles of 1°, 2°, and 3°, respectively. The law of the error variation is analyzed for different values of Tx and Ty.
Zouain, N.
1983-01-01
The static method for the evaluation of the limit loads of a perfectly elasto-plastic structure is presented. Using the static theorem of Limit Analysis and the Finite Element Method, a lower bound for the collapse load can be obtained through a linear programming problem. This formulation is then applied to symmetrically loaded shells of revolution, and some numerical results of limit loads in nozzles are also presented. (Author)
Memristance controlling approach based on modification of linear M—q curve
Liu Hai-Jun; Li Zhi-Wei; Yu Hong-Qi; Sun Zhao-Lin; Nie Hong-Shan
2014-01-01
The memristor has broad application prospects in many fields, while in many cases those fields require accurate impedance control. The nonlinear model is of great importance for realizing memristance control accurately, but the implementation complexity caused by iteration has limited the practical application of this model. Considering the approximately linear characteristics of the middle region of the memristance-charge (M—q) curve of the nonlinear model, this paper proposes a memristance controlling approach, which is achieved by linearizing the middle region of the M—q curve of the nonlinear memristor, and establishes a linear relationship between the memristance M and the input excitation so that impedance can be controlled precisely by simply adjusting the input signals. First, the feasibility of linearizing the middle part of the M—q curve of a memristor with a nonlinear model is analyzed from a qualitative perspective. Then, the linearization equations of the middle region of the M—q curve are constructed by using the shift method, and for a sinusoidal excitation, the analytical relation between the memristance M and the charge time t is derived through Taylor series expansions. At last, the performance of the proposed approach is demonstrated, including the linearizing capability for the middle part of the M—q curve of the nonlinear model memristor, the controlling ability for the memristance M, and the influence of the input excitation on linearization errors. (interdisciplinary physics and related areas of science and technology)
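As a rough illustration of the idea, the sketch below linearizes the middle region of a saturating M-q curve and inverts the linear relation to pick the input charge for a target memristance. The curve shape, parameter values, and the target are illustrative assumptions, not the device model or shift-method equations used in the paper:

```python
import numpy as np

# Stand-in nonlinear memristance-charge curve M(q): saturates at both ends and is
# approximately linear in the middle (illustrative, not the paper's device model)
M_on, M_off, q_scale = 100.0, 16e3, 1e-3  # ohms, ohms, coulombs
def M_of_q(q):
    return M_off - (M_off - M_on) * 0.5 * (1 + np.tanh(q / q_scale))

# Linearize the middle region: first-order expansion around q = 0
dq = 1e-9
slope = (M_of_q(dq) - M_of_q(-dq)) / (2 * dq)  # numerical dM/dq at q = 0
M0 = float(M_of_q(0.0))

def M_linear(q):
    return M0 + slope * q

# Memristance control: invert the linear relation to get the charge (the
# integrated input current) that should drive the device to a target memristance
def charge_for_target(M_target):
    return (M_target - M0) / slope

q_needed = charge_for_target(6e3)
rel_error = abs(M_of_q(q_needed) - 6e3) / 6e3
print(rel_error)  # small in the middle region, where the linearization is valid
```

The appeal of the approach is visible here: once the linear M-q relation is fixed, setting a memristance reduces to scaling the input signal, with no per-step iteration of the nonlinear model.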
Precision Magnetic Elements for the SNS Storage Ring
Danby, G.; Jackson, J.; Spataro, C.
1999-01-01
Magnetic elements for an accumulator storage ring for a 1 GeV Spallation Neutron Source (SNS) have been under design. The accumulation of very high intensity protons in a storage ring requires beam optical elements of very high purity to minimize higher order resonances in the presence of space charge. The parameters of the elements required by the accumulator lattice design have been reported. The dipoles have a 17 cm gap and are 124 cm long. The quadrupoles have a physical length to aperture diameter ratio of 40 cm/21 cm and of 45 cm/31 cm. Since the elements have a large aperture and short length, optimizing the optical effects of magnet ends is the major design challenge. Two-dimensional (2D) computer computations can, at least on paper, produce the desired accuracy internal to magnets, i.e. constant dipole fields and linear quadrupole gradients over the desired aperture to 1 × 10⁻⁴. To minimize undesirable end effects, three-dimensional (3D) computations can be used to design magnet ends. However, limitations on computations can occur, such as necessary finite boundary conditions, actual properties of the iron employed, hysteresis effects, etc., which are slightly at variance with the assumed properties. Experimental refinement is employed to obtain the desired precision.
Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling.
Kawashima, Issaku; Kumano, Hiroaki
2017-01-01
Mind-wandering (MW), task-unrelated thought, has been examined in an increasing number of articles using models that predict whether subjects are in MW from numerous physiological variables. However, these models are not applicable in general situations, and they output only a binary classification. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during the performance of a Sustained Attention to Response Task that included a thought-sampling probe inquiring about the focus of attention. We calculated power and coherence values, prepared 35 patterns of variable combinations, and applied Support Vector Regression (SVR) to them. Finally, we chose four SVR models: two non-linear and two linear; two of the four models are composed of a limited number of electrodes to keep the models practical. Examination using held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single-electrode EEG variables. Furthermore, in the limited-electrode condition, the non-linear SVR model showed significantly better precision than the linear SVR model. The method proposed in this study helps investigations into MW in various little-examined situations. Further, by measuring MW with high-temporal-resolution EEG, unclear aspects of MW, such as time-series variation, are expected to be revealed. Furthermore, our finding that a few electrodes can also predict MW contributes to the development of neuro-feedback studies.
Nanomaterials for Cancer Precision Medicine.
Wang, Yilong; Sun, Shuyang; Zhang, Zhiyuan; Shi, Donglu
2018-04-01
Medical science has recently advanced to the point where diagnosis and therapeutics can be carried out with high precision, even at the molecular level. A new field of "precision medicine" has consequently emerged with specific clinical implications and challenges that can be well-addressed by newly developed nanomaterials. Here, a nanoscience approach to precision medicine is provided, with a focus on cancer therapy, based on a new concept of "molecularly-defined cancers." "Next-generation sequencing" is introduced to identify the oncogene that is responsible for a class of cancers. This new approach is fundamentally different from all conventional cancer therapies that rely on diagnosis of the anatomic origins where the tumors are found. To treat cancers at molecular level, a recently developed "microRNA replacement therapy" is applied, utilizing nanocarriers, in order to regulate the driver oncogene, which is the core of cancer precision therapeutics. Furthermore, the outcome of the nanomediated oncogenic regulation has to be accurately assessed by the genetically characterized, patient-derived xenograft models. Cancer therapy in this fashion is a quintessential example of precision medicine, presenting many challenges to the materials communities with new issues in structural design, surface functionalization, gene/drug storage and delivery, cell targeting, and medical imaging. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Precision Medicine and Men's Health.
Mata, Douglas A; Katchi, Farhan M; Ramasamy, Ranjith
2017-07-01
Precision medicine can greatly benefit men's health by helping to prevent, diagnose, and treat prostate cancer, benign prostatic hyperplasia, infertility, hypogonadism, and erectile dysfunction. For example, precision medicine can facilitate the selection of men at high risk for prostate cancer for targeted prostate-specific antigen screening and chemoprevention administration, as well as assist in identifying men who are resistant to medical therapy for prostatic hyperplasia, who may instead require surgery. Precision medicine-trained clinicians can also let couples know whether their specific cause of infertility should be bypassed by sperm extraction and in vitro fertilization to prevent abnormalities in their offspring. Though precision medicine's role in the management of hypogonadism has yet to be defined, it could be used to identify biomarkers associated with individual patients' responses to treatment so that appropriate therapy can be prescribed. Last, precision medicine can improve erectile dysfunction treatment by identifying genetic polymorphisms that regulate response to medical therapies and by aiding in the selection of patients for further cardiovascular disease screening.
Precision Medicine in Gastrointestinal Pathology.
Wang, David H; Park, Jason Y
2016-05-01
Context: Precision medicine is the promise of individualized therapy and management of patients based on their personal biology. There are now multiple global initiatives to perform whole-genome sequencing on millions of individuals. In the United States, an early program was the Million Veteran Program, and a more recent proposal in 2015 by the president of the United States is the Precision Medicine Initiative. To implement precision medicine in routine oncology care, genetic variants present in tumors need to be matched with effective clinical therapeutics. When we focus on the current state of precision medicine for gastrointestinal malignancies, it becomes apparent that there is a mixed history of success and failure. Objective: To present the current state of precision medicine using gastrointestinal oncology as a model. We will present currently available targeted therapeutics, promising new findings in clinical genomic oncology, remaining quality issues in genomic testing, and emerging oncology clinical trial designs. Data Sources: Review of the literature, including clinical genomic studies on gastrointestinal malignancies, clinical oncology trials on therapeutics targeted to molecular alterations, and emerging clinical oncology study designs. Conclusions: Translating our ability to sequence thousands of genes into meaningful improvements in patient survival will be the challenge for the next decade.
A Miniaturized Colorimeter with a Novel Design and High Precision for Photometric Detection.
Yan, Jun-Chao; Chen, Yan; Pang, Yu; Slavik, Jan; Zhao, Yun-Fei; Wu, Xiao-Ming; Yang, Yi; Yang, Si-Fan; Ren, Tian-Ling
2018-03-08
Water quality detection plays an increasingly important role in environmental protection. In this work, a novel colorimeter based on the Beer-Lambert law was designed for chemical element detection in water with high precision and a miniaturized structure. As an example, the colorimeter can detect phosphorus, which was accomplished in this article to evaluate the performance. Simultaneously, a modified algorithm was applied to extend the linear measurable range. The colorimeter encompassed a near infrared laser source, a microflow cell based on microfluidic technology and a light-sensitive detector; Micro-Electro-Mechanical System (MEMS) processing technology was then used to form a stable integrated structure. Experiments were performed based on the ammonium molybdate spectrophotometric method, including the preparation of phosphorus standard solution, reducing agent, chromogenic agent and color reaction. The device can obtain a wide linear response range (0.05 mg/L up to 7.60 mg/L), a wide reliable measuring range up to 10.16 mg/L after using a novel algorithm, and a low limit of detection (0.02 mg/L). The size of the flow cell in this design is 18 mm × 2.0 mm × 800 μm, giving a low reagent consumption of 0.004 mg ascorbic acid and 0.011 mg ammonium molybdate per determination. With these advantages of miniaturized volume, high precision and low cost, the design can also be used for automated in situ detection.
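The photometric readout rests on the Beer-Lambert law, A = ε·l·c, which is linear in concentration, so the instrument can be calibrated against standards and then inverted. A minimal sketch with illustrative calibration values (not the paper's data):

```python
import numpy as np

# Beer-Lambert law: absorbance A = epsilon * l * c is linear in concentration c,
# so the colorimeter is calibrated with standards and then inverted. The slope,
# intercept, and standard concentrations below are illustrative assumptions.
conc_std = np.array([0.05, 0.5, 1.0, 2.0, 4.0, 7.6])  # phosphorus standards, mg/L
absorb_std = 0.12 * conc_std + 0.003                   # ideal linear response
absorb_std = absorb_std + np.random.default_rng(1).normal(0, 5e-4, conc_std.size)  # noise

# Least-squares fit of the calibration line A = k*c + b
k, b = np.polyfit(conc_std, absorb_std, 1)

def concentration(absorbance):
    """Invert the calibration line to read out concentration in mg/L."""
    return (absorbance - b) / k

# A sample whose true concentration is 3.0 mg/L gives A close to 0.363 here
print(concentration(0.363))
```

Outside the linear response range the calibration line no longer holds, which is where the paper's modified algorithm takes over to extend the reliable measuring range.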
Research on a high-precision calibration method for tunable lasers
Xiang, Na; Li, Zhengying; Gui, Xin; Wang, Fan; Hou, Yarong; Wang, Honghai
2018-03-01
Tunable lasers are widely used in the field of optical fiber sensing, but nonlinear tuning exists even under zero external disturbance and limits the accuracy of demodulation. In this paper, a high-precision calibration method for tunable lasers is proposed. A comb filter is introduced, and the real-time output wavelength and scanning rate of the laser are calibrated by linearly fitting several time-frequency reference points obtained from it. The beat signal generated by the auxiliary interferometer is interpolated and frequency-multiplied to find more accurate zero-crossing points; these points are used as wavelength counters to resample the comb signal and correct the nonlinear effect, which ensures that the time-frequency reference points of the comb filter are linear. A stability experiment and a strain sensing experiment verify the calibration precision of this method. The experimental results show that the stability and wavelength resolution of the FBG demodulation can reach 0.088 pm and 0.030 pm, respectively, using a tunable laser calibrated by the proposed method. We have also compared the demodulation accuracy in the presence and absence of the comb filter; introducing the comb filter yields a 15-fold wavelength resolution enhancement.
Developing and implementing a high precision setup system
Peng, Lee-Cheng
High-precision radiotherapy (HPRT) was first implemented in stereotactic radiosurgery using a rigid, invasive stereotactic head frame. Fractionated stereotactic radiotherapy (SRT) with a frameless device was developed alongside a growing interest in sophisticated treatment with tight margins and high dose gradients. This dissertation establishes complete management for HPRT in the process of frameless SRT, including image-guided localization, immobilization, and dose evaluation. An ideal, precise positioning system allows ease of relocation, real-time assessment of patient movement, high accuracy, and no additional dose in daily use. A new image-guided stereotactic positioning system (IGSPS), the Align RT3C 3D surface camera system (ART, VisionRT), which combines 3D surface imaging with a real-time tracking technique, was developed to ensure accurate positioning in the first place. The current optical tracking system has known shortcomings: it causes patient discomfort because it requires bite plates made with the dental impression technique and external markers. The accuracy and feasibility of ART were validated by comparison with the optical tracking and cone-beam computed tomography (CBCT) systems. Additionally, an effective daily quality assurance (QA) program for the linear accelerator and multiple IGSPSs is the most important factor in ensuring system performance in daily use. Systematic errors arising from the variety of phantoms, and long measurement times caused by switching phantoms, were identified. We investigated the use of a commercially available daily QA device to improve efficiency and thoroughness, and established reasonable action levels by considering dosimetric relevance and clinic flow. For intricate treatments, the effect of dose deviations caused by setup errors on tumor coverage and toxicity to organs at risk (OARs) remains uncertain. The lack of adequate dosimetric simulations based on the true treatment coordinates from
Linear signal noise summer accurately determines and controls S/N ratio
Sundry, J. L.
1966-01-01
Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.
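The summer described above mixes signal and noise linearly in known power ratios. A small sketch of the underlying arithmetic (function and variable names are illustrative, not from the report): to hit a target S/N of X dB with the signal gain fixed at unity, the noise channel gain follows from the power ratio.

```python
import math

def mixing_gains(snr_db, signal_power, noise_power):
    """Gains that linearly mix a signal and a noise source so the
    summed output has the requested S/N power ratio.
    Signal gain is fixed at 1; noise gain is solved from
    S/N = signal_power / (g_n**2 * noise_power)."""
    snr_linear = 10 ** (snr_db / 10)
    noise_gain = math.sqrt(signal_power / (snr_linear * noise_power))
    return 1.0, noise_gain

# Hypothetical: unit-power signal and noise, target S/N of 20 dB.
g_signal, g_noise = mixing_gains(20.0, 1.0, 1.0)
```

Because gains enter the output power quadratically, a small gain error produces only twice that relative error in the S/N ratio, which is why controlling relative power levels is the key to accuracy here.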
Run scenarios for the linear collider
M. Battaglia et al.
2002-01-01
We have examined how a Linear Collider program of 1000 fb⁻¹ could be constructed in the case that a very rich program of new physics is accessible at √s ≤ 500 GeV. We have examined possible run plans that would allow the measurement of the parameters of a 120 GeV Higgs boson and the top quark, and could give information on the sparticle masses in SUSY scenarios in which many states are accessible. We find that the construction of the run plan (the specific energies for collider operation, the mix of initial-state electron polarization states, and the use of special e⁻e⁻ runs) will depend quite sensitively on the specifics of the supersymmetry model, as the decay channels open to particular sparticles vary drastically and discontinuously as the underlying SUSY model parameters are varied. We have explored this dependence somewhat by considering two rather closely related SUSY model points. We have called for operation at a high energy to study kinematic end points, followed by runs in the vicinity of several two-body production thresholds once their locations are determined by the end point studies. For our benchmarks, the end point runs are capable of disentangling most sparticle states through the use of specific final states and beam polarizations. The estimated sparticle mass precisions, combined from end point and scan data, are given in Table VIII, and the corresponding estimates for the mSUGRA parameters are in Table IX. The precisions for the Higgs boson mass, width, cross sections, branching ratios and couplings are given in Table X. The errors on the top quark mass and width are expected to be dominated by the systematic limits imposed by QCD non-perturbative effects. The run plan devotes at least two thirds of the accumulated luminosity to running near the maximum LC energy, so that the program would be sensitive to unexpected new phenomena at high mass scales. We conclude that with a 1 ab⁻¹ program, expected to take the first 6-7 years of LC operation, one can do
Linear feedback controls the essentials
Haidekker, Mark A
2013-01-01
The design of control systems is at the very core of engineering. Feedback controls are ubiquitous, ranging from simple room thermostats to airplane engine control. Helping to make sense of this wide-ranging field, this book provides a new approach by keeping a tight focus on the essentials with a limited, yet consistent set of examples. Analysis and design methods are explained in terms of theory and practice. The book covers classical, linear feedback controls, and linear approximations are used when needed. In parallel, the book covers time-discrete (digital) control systems and juxtaposes them with their continuous-time counterparts.
Precision of different quantitative ultrasound densitometers
Pocock, N.A.; Harris, N.D.; Griffiths, M.R.
1998-01-01
Full text: Quantitative ultrasound (QUS) of the calcaneus, which measures speed of sound (SOS) and broadband ultrasound attenuation (BUA), is predictive of the risk of osteoporotic fracture. However, the utility of QUS for predicting fracture risk or for monitoring treatment efficacy depends on its precision and reliability. Published results and manufacturers' data vary significantly due to differences in statistical methodology. We have assessed the precision of the current models of the Lunar Achilles and McCue Cuba QUS densitometers, the most commonly used QUS machines in Australia. Twenty-seven subjects had duplicate QUS measurements performed on the same day on both machines. These data were used to calculate the within-pair standard deviation (SD), the coefficient of variation (CV), and the standardised coefficient of variation (sCV), which is corrected for the dynamic range. In addition, the coefficient of reliability (R) was calculated as an index of reliability that is independent of the population mean value and of the dynamic range of the measurements. R ranges from 0 (no reliability) to 1 (perfect reliability). The results indicate that the precision of QUS depends on the dynamic range and the instrument. Furthermore, they suggest that while QUS is a useful predictor of fracture risk, at present it has limited clinical value in monitoring short-term age-related bone loss of 1-2% per year.
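The statistics named in the abstract can be computed directly from same-day duplicate measurements. A minimal sketch under common definitions (the within-pair variance as the mean of d²/2, and R as the share of total variance not attributable to measurement error); the data and exact formulas here are illustrative assumptions, not taken from the study:

```python
import math

def duplicate_precision(pairs):
    """Within-pair SD, CV (%), and a coefficient of reliability R
    from same-day duplicate measurements [(x1, x2), ...]."""
    n = len(pairs)
    # Within-pair (measurement-error) variance: mean of d^2 / 2.
    within_var = sum((a - b) ** 2 for a, b in pairs) / (2 * n)
    sd_within = math.sqrt(within_var)
    grand_mean = sum(a + b for a, b in pairs) / (2 * n)
    cv = 100.0 * sd_within / grand_mean
    # Between-subject variance of the pair means.
    means = [(a + b) / 2 for a, b in pairs]
    between_var = sum((m - grand_mean) ** 2 for m in means) / (n - 1)
    # R: fraction of total variance not due to measurement error.
    r = between_var / (between_var + within_var)
    return sd_within, cv, r

# Hypothetical SOS duplicates (m/s) for four subjects.
pairs = [(1520, 1524), (1538, 1534), (1501, 1505), (1562, 1558)]
sd, cv, r = duplicate_precision(pairs)
```

This illustrates the abstract's point: CV depends on the population mean, while R compares error variance to between-subject spread and so stays meaningful across instruments with different dynamic ranges.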
Sensing technologies for precision irrigation
Ćulibrk, Dubravko; Minic, Vladan; Alonso Fernandez, Marta; Alvarez Osuna, Javier; Crnojevic, Vladimir
2014-01-01
This brief provides an overview of state-of-the-art sensing technologies relevant to the problem of precision irrigation, an emerging field within the domain of precision agriculture. Applications of wireless sensor networks, satellite data and geographic information systems in the domain are covered. This brief presents the basic concepts of the technologies and emphasizes the practical aspects that enable the implementation of intelligent irrigation systems. The authors target a broad audience interested in this theme and organize the content in five chapters, each concerned with a specific technology needed to address the problem of optimal crop irrigation. Professionals and researchers will find the text a thorough survey with practical applications.
Precision measurement with atom interferometry
Wang Jin
2015-01-01
The development of atom interferometry and its applications in precision measurement are reviewed in this paper. The principle, features, and implementation of atom interferometers are introduced. Recent progress in precision measurement with atom interferometry, including determination of the gravitational constant and the fine structure constant; measurement of gravity, gravity gradients, and rotation; tests of the weak equivalence principle; proposals for gravitational wave detection; and measurement of the quadratic Zeeman shift, is reviewed in detail. Determination of the gravitational redshift, the new definition of the kilogram, and measurement of weak forces with atom interferometry are also briefly introduced. (topical review)
ELECTROWEAK PHYSICS AND PRECISION STUDIES
MARCIANO, W.
2005-01-01
The utility of precision electroweak measurements for predicting the Standard Model Higgs mass via quantum loop effects is discussed. Current values of m_W, sin²θ_W(m_Z) in the MS-bar scheme, and m_t imply a relatively light Higgs which is below the direct experimental bound but possibly consistent with Supersymmetry expectations. The existence of Supersymmetry is further suggested by a 2σ discrepancy between experiment and theory for the muon anomalous magnetic moment. Constraints from precision studies on other types of "new physics" are also briefly described.
Universal precision sine bar attachment
Mann, Franklin D. (Inventor)
1989-01-01
This invention relates to an attachment for a sine bar which can be used to perform measurements during lathe operations or other types of machining operations. The attachment can be used for setting precision angles on vises, dividing heads, rotary tables and angle plates. It can also be used in the inspection of machined parts, when close tolerances are required, and in the layout of precision hardware. The novelty of the invention is believed to reside in a specific versatile sine bar attachment for measuring a variety of angles on a number of different types of equipment.
Introduction to precise numerical methods
Aberth, Oliver
2007-01-01
Precise numerical analysis may be defined as the study of computer methods for solving mathematical problems either exactly or to prescribed accuracy. This book explains how precise numerical analysis is constructed, and provides exercises that illustrate points from the text along with references for the methods presented. All disc-based content for this title is now available on the Web. The book offers clearer, simpler descriptions and explanations of the various numerical methods, and two new types of numerical problems: accurately solving partial differential equations with the included software, and computing line integrals in the complex plane.
Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F
2018-03-26
This study outlines two robust regression approaches, least median of squares (LMS) and iteratively re-weighted least squares (IRLS), and investigates their application in instrumental analysis of nutraceuticals (namely, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: ordinary least squares (OLS), LMS and IRLS. Linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but for the non-ideal condition a dramatic change was observed in the linearity intercept. Under both linearity conditions, LOD and LOQ values after robust regression fitting of the data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be extended to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS, were compared for both linearity conditions.
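To make the IRLS idea concrete: the fit is repeated with weights that shrink for points with large residuals, so outliers stop dominating the line. The sketch below is a minimal pure-Python version with 1/|r| weights (which drives the fit toward least absolute deviations); the calibration data are hypothetical, not from the study.

```python
def irls_line_fit(x, y, iters=50, eps=1e-8):
    """Iteratively re-weighted least squares for y ~ a + b*x,
    down-weighting large residuals with w = 1/|r| (an L1-type fit)."""
    n = len(x)
    w = [1.0] * n
    a = b = 0.0
    for _ in range(iters):
        # Weighted least-squares solution for slope b and intercept a.
        sw = sum(w)
        sx = sum(wi * xi for wi, xi in zip(w, x))
        sy = sum(wi * yi for wi, yi in zip(w, y))
        sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
        a = (sy - b * sx) / sw
        # Re-weight: small residuals get large (but bounded) weights.
        w = [1.0 / max(abs(yi - (a + b * xi)), eps) for xi, yi in zip(x, y)]
    return a, b

# Hypothetical calibration points lying near y = 2x, plus one gross outlier.
x = [1, 2, 3, 4, 5]
y = [2.0, 4.1, 5.9, 8.0, 30.0]
a, b = irls_line_fit(x, y)
```

An ordinary least-squares fit of the same data gives a slope near 6 because of the outlier; the re-weighted fit recovers a slope near 2, which is the behavior the abstract exploits to extend usable linearity ranges.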
[Precision medicine : a required approach for the general internist].
Waeber, Gérard; Cornuz, Jacques; Gaspoz, Jean-Michel; Guessous, Idris; Mooser, Vincent; Perrier, Arnaud; Simonet, Martine Louis
2017-01-18
The general internist cannot be a passive bystander of the anticipated medical revolution induced by precision medicine. The latter aims to improve the prediction and/or clinical course of an individual by integrating all biological, genetic, environmental, phenotypic and psychosocial knowledge of a person. In this article, national and international initiatives in the field of precision medicine are discussed, as well as the potential financial and ethical limitations of personalized medicine. The question is not whether precision medicine will be part of everyday life, but rather how to integrate the general internist early into multidisciplinary teams to ensure optimal information and a shared decision process with patients and individuals.
Zender, Charles S.
2016-09-01
Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (saving space) and heuristic (clarifying data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80% and 5-65%, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that
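The alternate shave/set mechanism can be sketched in a few lines of bit manipulation on IEEE single-precision values. This is a simplified illustration of the idea only: the real algorithm chooses how many mantissa bits to keep from a requested count of decimal digits, whereas this sketch takes the mantissa bit count directly.

```python
import struct

def bit_groom(values, keep_bits):
    """Alternately shave (zero) and set (one) the trailing mantissa
    bits of IEEE binary32 floats, keeping `keep_bits` of the 23
    mantissa bits. Alternation avoids the low bias of pure shaving."""
    mask_bits = 23 - keep_bits
    shave_mask = (0xFFFFFFFF << mask_bits) & 0xFFFFFFFF  # zero trailing bits
    set_mask = (1 << mask_bits) - 1                      # one trailing bits
    out = []
    for i, v in enumerate(values):
        (u,) = struct.unpack('<I', struct.pack('<f', v))
        u = (u & shave_mask) if i % 2 == 0 else (u | set_mask)
        (g,) = struct.unpack('<f', struct.pack('<I', u))
        out.append(g)
    return out

vals = [0.1234567, 0.7654321, 3.1415927, 2.7182818]
groomed = bit_groom(vals, keep_bits=12)
```

After grooming, runs of identical trailing bits make the data far more compressible by DEFLATE, which is where the storage saving actually comes from.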
Precision metrology of NSTX surfaces using coherent laser radar ranging
Kugel, H.W.; Loesser, D.; Roquemore, A. L.; Menon, M. M.; Barry, R. E.
2000-01-01
A frequency modulated Coherent Laser Radar ranging diagnostic is being used on the National Spherical Torus Experiment (NSTX) for precision metrology. The distance (range) between the 1.5 μm laser source and the target is measured by the shift in frequency of the linearly modulated beam reflected off the target. The range can be measured to a precision of <100 μm at distances of up to 22 meters. A description is given of the geometry and procedure for measuring NSTX interior and exterior surfaces during open vessel conditions, and the results of measurements are elaborated
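The range measurement above follows the standard FMCW relation: a target at range R delays the return by τ = 2R/c, so the reflected chirp beats against the outgoing one at f_beat = S·τ, where S is the chirp rate. A minimal sketch (the chirp parameters below are hypothetical, not the NSTX diagnostic's):

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_beat(f_beat_hz, chirp_rate_hz_per_s):
    """FMCW ranging: the return is delayed by tau = 2R/c, producing a
    beat at f_beat = S * tau, so R = c * f_beat / (2 * S)."""
    return C * f_beat_hz / (2.0 * chirp_rate_hz_per_s)

# Hypothetical chirp: 100 GHz swept in 1 ms (S = 1e14 Hz/s).
# A target near 10 m delays the light ~66.7 ns, beating at ~6.67 MHz.
r = range_from_beat(6.671e6, 1e14)
```

The relation also shows why precision scales with chirp linearity: any deviation of S from its nominal value maps directly into a range error, which is what the linearization work in the companion abstract addresses.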
A precision synchrotron radiation detector using phosphorescent screens
Jung, C.K.; Lateur, M.; Nash, J.; Tinsman, J.; Butler, J.; Wormser, G.
1990-01-01
A precision detector to measure synchrotron radiation beam positions has been designed and installed as part of beam energy spectrometers at the Stanford Linear Collider (SLC). The distance between pairs of synchrotron radiation beams is measured absolutely to better than 28 μm on a pulse-to-pulse basis. This contributes less than 5 MeV to the error in the measurement of SLC beam energies (approximately 50 GeV). A system of high-resolution video cameras viewing precisely aligned fiducial wire arrays overlaying phosphorescent screens has achieved this accuracy. 3 refs., 5 figs., 1 tab
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
Advanced statistics: linear regression, part II: multiple linear regression.
Marill, Keith A
2004-01-01
The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
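As a concrete sketch of the technique the article reviews: multiple linear regression fits y = b0 + b1·x1 + b2·x2 by solving the normal equations (XᵀX)b = Xᵀy. The pure-Python example below uses hypothetical, noise-free data generated from known coefficients, so ordinary least squares should recover them exactly.

```python
def fit_multiple_linear(X, y):
    """Ordinary least squares for y = b0 + b1*x1 + ... by forming the
    normal equations (X'X) b = X'y and solving with Gauss-Jordan
    elimination (partial pivoting)."""
    rows = [[1.0] + list(r) for r in X]  # prepend an intercept column
    p = len(rows[0])
    # Augmented system: [X'X | X'y].
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)]
         + [sum(r[i] * yi for r, yi in zip(rows, y))] for i in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(p):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [u - f * v for u, v in zip(A[r], A[c])]
    return [A[i][p] / A[i][i] for i in range(p)]

# Hypothetical data from y = 1 + 2*x1 + 3*x2 (no noise).
X = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
y = [1 + 2 * a + 3 * b for a, b in X]
beta = fit_multiple_linear(X, y)
```

The multicollinearity issue the article discusses shows up here as a nearly singular XᵀX matrix: when predictors are close to linearly dependent, the elimination step divides by tiny pivots and the coefficients become unstable.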
Spin and precision electroweak physics
Marciano, W.J. (Brookhaven National Lab., Upton, NY, United States)
1994-12-01
A perspective on fundamental parameters and precision tests of the Standard Model is given. Weak neutral current reactions are discussed with emphasis on those processes involving (polarized) electrons. The role of electroweak radiative corrections in determining the top quark mass and probing for "new physics" is described.
Precision surveying system for PEP
Gunn, J.; Lauritzen, T.; Sah, R.; Pellisier, P.F.
1977-01-01
A semi-automatic precision surveying system is being developed for PEP. Reference elevations for vertical alignment will be provided by a liquid level. The short range surveying will be accomplished using a Laser Surveying System featuring automatic data acquisition and analysis
Precision medicine at the crossroads.
Olson, Maynard V
2017-10-11
There are bioethical, institutional, economic, legal, and cultural obstacles to creating the robust, precompetitive data resource that will be required to advance the vision of "precision medicine," the ability to use molecular data to target therapies to patients for whom they offer the most benefit at the least risk. Creation of such an "information commons" was the central recommendation of the 2011 report Toward Precision Medicine issued by a committee of the National Research Council of the USA (Committee on a Framework for Development of a New Taxonomy of Disease; National Research Council. Toward precision medicine: building a knowledge network for biomedical research and a new taxonomy of disease. 2011). In this commentary, I review the rationale for creating an information commons and the obstacles to doing so; then, I endorse a path forward based on the dynamic consent of research subjects interacting with researchers through trusted mediators. I assert that the advantages of the proposed system overwhelm alternative ways of handling data on the phenotypes, genotypes, and environmental exposures of individual humans; hence, I argue that its creation should be the central policy objective of early efforts to make precision medicine a reality.
Proton gyromagnetic precision measurement system
Zhu Deming
1991-01-01
A computerized control and measurement system used in the proton gyromagnetic precision measurement is described. It adopts CAMAC data acquisition equipment, with on-line control and analysis by HP85 and PDP-11/60 computer systems. It runs under the RSX11M operating system, and the control software is written in FORTRAN.
Peterson, David; Stofleth, Jerome H.; Saul, Venner W.
2017-07-11
Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated v-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.
Classifying Linear Canonical Relations
Lorand, Jonathan
2015-01-01
In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.
Error Analysis on Plane-to-Plane Linear Approximate Coordinate ...
Abstract. In this paper, the error analysis has been done for the linear approximate transformation between two tangent planes in celestial sphere in a simple case. The results demonstrate that the error from the linear transformation does not meet the requirement of high-precision astrometry under some conditions, so the ...
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. It is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
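To illustrate the kind of primitive the library standardizes, here is a pure-Python stand-in for a Level-1 BLAS operation, AXPY (y ← α·x + y); the original routines are Fortran, so this is a sketch of the semantics rather than the actual interface.

```python
def axpy(alpha, x, y):
    """BLAS Level-1 style AXPY: return alpha * x + y elementwise.
    (The Fortran original updates y in place; this sketch returns a
    new list for clarity.)"""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

z = axpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
```

Standardizing such small kernels is what lets higher-level linear algebra code stay portable while vendors supply machine-tuned implementations underneath.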
Precision Orbit Derived Atmospheric Density: Development and Performance
McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.
2012-09-01
Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities and considering ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-2000 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities; in particular, catalog maintenance. Generally, density is the largest error source in satellite drag calculations and current empirical density models such as Jacchia 71 and NRLMSISE-2000 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available using Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer
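The "linear weighted blending" of overlapping 14-hour density solutions can be sketched simply: across the overlap window, the weight ramps linearly from the ending segment to the starting one. The function below is an illustrative assumption about that scheme, not the authors' code.

```python
def blend_overlap(a, b):
    """Linearly weighted blend of two overlapping solution segments:
    the output ramps from all-a at the start of the overlap to all-b
    at the end, giving a continuous stitched series."""
    n = len(a)
    return [((n - 1 - i) * a[i] + i * b[i]) / (n - 1) for i in range(n)]

# Hypothetical overlapping density samples from consecutive solutions.
blended = blend_overlap([1.0, 1.0, 1.0], [3.0, 3.0, 3.0])
```

The point of the ramp is continuity: the stitched series matches segment a exactly where b has not yet started and matches b exactly where a ends, avoiding steps at solution boundaries.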
Precise Orbit Determination of QZS-1
Hugentobler, U.; Steigenberger, P.; Rodriguez-Solano, C.; Hauschild, A.
2011-12-01
QZS-1, the first satellite of the Japanese Quasi Zenith Satellite System (QZSS) was launched in September 2010. Transmission of the standard codes started in December 2010 and the satellite was declared healthy in June 2011. Five stations of the COoperative Network for GIOVE Observation (CONGO) were upgraded to provide QZSS tracking capability. These five stations provide the basis for the precise orbit determination (POD) of the QZS-1 spacecraft. The stability and consistency of different orbital arc lengths is analyzed based on orbit fit residuals, day boundary discontinuities, and Satellite Laser Ranging residuals. As QZS-1 simultaneously transmits navigation signals on three frequencies in the L1, L2, and L5 band, different ionosphere-free linear combinations can be formed. The differences of the orbits computed from these different observables (ionosphere-free linear combination of L1/L2 and L1/L5) as well as the stability of the differential code biases estimated within the POD are studied. Finally, results of the attitude determination based on the navigation signal transmission from two different antennas onboard QZS-1 are presented.
The Precision Problem in Conservation and Restoration.
Hiers, J Kevin; Jackson, Stephen T; Hobbs, Richard J; Bernhardt, Emily S; Valentine, Leonie E
2016-11-01
Within the varied contexts of environmental policy, conservation of imperilled species populations, and restoration of damaged habitats, an emphasis on idealized optimal conditions has led to increasingly specific targets for management. Overly-precise conservation targets can reduce habitat variability at multiple scales, with unintended consequences for future ecological resilience. We describe this dilemma in the context of endangered species management, stream restoration, and climate-change adaptation. Inappropriate application of conservation targets can be expensive, with marginal conservation benefit. Reduced habitat variability can limit options for managers trying to balance competing objectives with limited resources. Conservation policies should embrace habitat variability, expand decision-space appropriately, and support adaptation to local circumstances to increase ecological resilience in a rapidly changing world.
Beam-based alignment technique for the SLC [Stanford Linear Collider] linac
Adolphsen, C.E.; Lavine, T.L.; Atwood, W.B.
1989-03-01
Misalignments of quadrupole magnets and beam position monitors (BPMs) in the linac of the SLAC Linear Collider (SLC) cause the electron and positron beams to be steered off-center in the disk-loaded waveguide accelerator structures. Off-center beams produce wakefields that limit SLC performance at high beam intensities by causing emittance growth. Here, we present a general method for simultaneously determining quadrupole magnet and BPM offsets using beam trajectory measurements. Results from the application of the method to the SLC linac are described. The alignment precision achieved is approximately 100 μm, which is significantly better than that obtained using optical surveying techniques. 2 refs., 4 figs
Precision and accuracy in radiotherapy
Brenner, J.D.
1989-01-01
The required precision due to random errors in the delivery of a fractionated dose regimen is considered. It is argued that suggestions that 1-3% precision is needed may be unnecessarily conservative. It is further suggested that random and systematic errors should not be combined with equal weight to yield an overall target uncertainty in dose delivery, systematic errors being of greater significance. The authors conclude that imprecise dose delivery and inaccurate dose delivery affect patient-cure results differently: whereas, for example, a 10% inaccuracy in dose delivery would be quite catastrophic in the case considered here, a corresponding imprecision would have a much smaller effect on overall success rates. (author). 14 refs.; 2 figs
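The distinction drawn here between random (imprecise) and systematic (inaccurate) dose errors can be illustrated numerically. The sketch below is a toy simulation, not from the paper: the fraction count, dose per fraction, and 10% error magnitudes are illustrative assumptions.

```python
import random

random.seed(1)

FRACTIONS = 30
DOSE_PER_FRACTION = 2.0  # Gy, an assumed conventional fractionation
TARGET = FRACTIONS * DOSE_PER_FRACTION

# Imprecise delivery: a 10% random error on each fraction; over a full
# course these errors largely average out.
imprecise = sum(random.gauss(DOSE_PER_FRACTION, 0.10 * DOSE_PER_FRACTION)
                for _ in range(FRACTIONS))

# Inaccurate delivery: a fixed 10% systematic offset, identical every fraction.
inaccurate = sum(DOSE_PER_FRACTION * 1.10 for _ in range(FRACTIONS))

print(abs(imprecise - TARGET) / TARGET)   # small: random errors partly cancel
print(abs(inaccurate - TARGET) / TARGET)  # exactly 0.10: the offset persists
```

The total-dose error from the random component shrinks roughly as 1/sqrt(number of fractions), while the systematic offset carries through unattenuated, which is the abstract's point about weighting the two differently.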
Precision electroweak physics at LEP
Mannelli, M.
1994-12-01
Copious event statistics, a precise understanding of the LEP energy scale, and a favorable experimental situation at the Z{sup 0} resonance have allowed the LEP experiments both to provide dramatic confirmation of the Standard Model of strong and electroweak interactions and to place substantially improved constraints on the parameters of the model. The author concentrates on those measurements relevant to the electroweak sector. It will be seen that the precision of these measurements sensitively probes the structure of the Standard Model at the one-loop level, where the calculation of the observables measured at LEP is affected by the value chosen for the top quark mass. One finds that the LEP measurements are consistent with the Standard Model, but only if the mass of the top quark is measured to be within a restricted range of about 20 GeV.
Precise object tracking under deformation
Saad, M.H
2010-01-01
Precise object tracking is an essential issue in several serious applications such as robot vision, automated surveillance (civil and military), inspection, biomedical image analysis, video coding, motion segmentation, human-machine interfaces, visualization, medical imaging, traffic systems, satellite imaging, etc. This framework focuses on precise object tracking under deformations such as scaling, rotation, noise, blurring and changes of illumination. This research is a trial to solve these serious problems in visual object tracking, by which the quality of the overall system will be improved. A three-dimensional (3D) geometrical model is developed to determine the current pose of an object and predict its future location based on an FIR model learned by OLS. This framework presents a robust ranging technique to track a visual target instead of the traditional expensive ranging sensors. The presented research work is applied to real video streams and achieves high-precision results.
Fit to Electroweak Precision Data
Erler, Jens
2006-01-01
A brief review of electroweak precision data from LEP, SLC, the Tevatron, and low energies is presented. The global fit to all data including the most recent results on the masses of the top quark and the W boson reinforces the preference for a relatively light Higgs boson. I will also give an outlook on future developments at the Tevatron Run II, CEBAF, the LHC, and the ILC
Precision measurements of electroweak parameters
Savin, Alexander
2017-01-01
A set of selected precise measurements of SM parameters from the LHC experiments is discussed. Results on the W-mass measurement and the forward-backward asymmetry in the production of Drell-Yan events in both the dielectron and dimuon decay channels are presented, together with results on the effective mixing angle measurements. Electroweak production of vector bosons in association with two jets is discussed.
Precision titration mini-calorimeter
Ensor, D.; Kullberg, L.; Choppin, G.
1977-01-01
The design and test of a small volume calorimeter of high precision and simple design is described. The calorimeter operates with solution sample volumes in the range of 3 to 5 ml. The results of experiments on the entropy changes for two standard reactions: (1) reaction of tris(hydroxymethyl)aminomethane with hydrochloric acid and (2) reaction between mercury(II) and bromide ions are reported to confirm the accuracy and overall performance of the calorimeter
Knowledge of Precision Farming Beneficiaries
A.V. Greena
2016-05-01
Full Text Available Precision farming is one of the many advanced farming practices that make production more efficient through better resource management and reduced wastage. TN-IAMWARM is a World Bank-funded project that aims to improve farm productivity and income through better water management. The present study was carried out in the Kambainallur sub-basin of Dharmapuri district with 120 TN-IAMWARM beneficiaries as respondents. The results indicated that more than three-fourths (76.67%) of the respondents had a high level of knowledge of precision farming technologies, which was made possible by the implementation of the TN-IAMWARM project. The study further revealed that educational status, occupational status and exposure to agricultural messages made a positive and significant contribution to the knowledge level of the respondents at the 0.01 level of probability, whereas experience in precision farming and social participation contributed positively and significantly at the 0.05 level of probability.
Non linear system become linear system
Petre Bucur
2007-01-01
Full Text Available The present paper addresses the theory and practice of non-linear systems and their applications. We aim at the integration of these systems in order to elaborate their response as well as to highlight some outstanding features.
Linear motor coil assembly and linear motor
2009-01-01
An ironless linear motor (5) comprising a magnet track (53) and a coil assembly (50) operating in cooperation with said magnet track (53) and having a plurality of concentrated multi-turn coils (31 a-f, 41 a-d, 51 a-k), wherein the end windings (31E) of the coils (31 a-f, 41 a-e) are substantially
Precision polymers and 3D DNA nanostructures: emergent assemblies from new parameter space.
Serpell, Christopher J; Edwardson, Thomas G W; Chidchob, Pongphak; Carneiro, Karina M M; Sleiman, Hanadi F
2014-11-05
Polymer self-assembly and DNA nanotechnology have both proved to be powerful nanoscale techniques. To date, most attempts to merge the fields have been limited to placing linear DNA segments within a polydisperse block copolymer. Here we show that, by using hydrophobic polymers of a precisely predetermined length conjugated to DNA strands, and addressable 3D DNA prisms, we are able to effect the formation of unprecedented monodisperse quantized superstructures. The structure and properties of larger micelles-of-prisms were probed in depth, revealing their ability to participate in controlled release of their constituent nanostructures, and template light-harvesting energy transfer cascades, mediated through both the addressability of DNA and the controlled aggregation of the polymers.
Gunnels, John; Lee, Jon; Margulies, Susan
2010-01-01
We provide a first demonstration of the idea that matrix-based algorithms for nonlinear combinatorial optimization problems can be efficiently implemented. Such algorithms were mainly conceived by theoretical computer scientists for proving efficiency. We are able to demonstrate the practicality of our approach by developing an implementation on a massively parallel architecture, and exploiting scalable and efficient parallel implementations of algorithms for ultra high-precision linear algebra. Additionally, we have delineated and implemented the necessary algorithmic and coding changes required in order to address problems several orders of magnitude larger, dealing with the limits of scalability from memory footprint, computational efficiency, reliability, and interconnect perspectives. © Springer and Mathematical Programming Society 2010.
Precise measurement of the $K^{\\pm} \\to \\pi^{\\pm}e^{+}e^{−}$ decay
Batley, J.R.; Kalmus, G.; Lazzeroni, C.; Munday, D.J.; Slater, M.W.; Wotton, S.A.; Arcidiacono, R.; Bocquet, G.; Cabibbo, N.; Ceccucci, A.; Cundy, D.; Falaleev, V.; Fidecaro, M.; Gatignon, L.; Gonidec, A.; Kubischta, W.; Norton, A.; Maier, A.; Patel, M.; Peters, A.; Balev, S.; Frabetti, P.L.; Goudzovski, E.; Hristov, P.; Kekelidze, V.; Kozhuharov, V.; Litov, L.; Madigozhin, D.; Marinova, E.; Molokanova, N.; Polenkevich, I.; Potrebenikov, Yu.; Stoynev, S.; Zinchenko, A.; Monnier, E.; Swallow, E.; Winston, R.; Rubin, P.; Walker, A.; Baldini, W.; Cotta Ramusino, A.; Dalpiaz, P.; Damiani, C.; Fiorini, M.; Gianoli, A.; Martini, M.; Petrucci, F.; Savrie, M.; Scarpa, M.; Wahl, H.; Bizzeti, A.; Calvetti, M.; Celeghini, E.; Iacopini, E.; Lenti, M.; Martelli, F.; Ruggiero, G.; Veltri, M.; Behler, M.; Eppard, K.; Kleinknecht, K.; Marouelli, P.; Masetti, L.; Moosbrugger, U.; Morales Morales, C.; Renk, B.; Wache, M.; Wanke, R.; Winhart, A.; Coward, D.; Dabrowski, A.; Fonseca Martin, T.; Shieh, M.; Szleper, M.; Velasco, M.; Wood, M.D.; Anzivino, G.; Cenci, P.; Imbergamo, E.; Nappi, A.; Pepe, M.; Petrucci, M.C.; Piccini, M.; Raggi, M.; Valdata-Nappi, M.; Cerri, C.; Fantechi, R.; Collazuol, G.; DiLella, L.; Lamanna, G.; Mannelli, I.; Michetti, A.; Costantini, F.; Doble, N.; Fiorini, L.; Giudici, S.; Pierazzini, G.; Sozzi, M.; Venditti, S.; Bloch-Devaux, B.; Cheshkov, C.; Cheze, J.B.; De Beer, M.; Derre, J.; Marel, G.; Mazzucato, E.; Peyaud, B.; Vallage, B.; Holder, M.; Ziolkowski, M.; Bifani, S.; Biino, C.; Cartiglia, N.; Clemencic, M.; Goy Lopez, S.; Marchetto, F.; Dibon, H.; Jeitler, M.; Markytan, M.; Mikulec, I.; Neuhofer, G.; Widhalm, L.
2009-01-01
A sample of 7253 $K^\\pm\\to\\pi^\\pm e^+e^-(\\gamma)$ decay candidates with 1.0% background contamination has been collected by the NA48/2 experiment at the CERN SPS, allowing a precise measurement of the decay properties. The branching ratio in the full kinematic range was measured to be ${\\rm BR}=(3.11\\pm0.12)\\times 10^{-7}$, where the uncertainty includes also the model dependence. The shape of the form factor $W(z)$, where $z=(M_{ee}/M_K)^2$, was parameterized according to several models, and, in particular, the slope $\\delta$ of the linear form factor $W(z)=W_0(1+\\delta z)$ was determined to be $\\delta=2.32\\pm0.18$. A possible CP violating asymmetry of $K^+$ and $K^-$ decay widths was investigated, and a conservative upper limit of $2.1\\times 10^{-2}$ at 90% CL was established.
On Associative Conformal Algebras of Linear Growth
Retakh, Alexander
2000-01-01
Lie conformal algebras appear in the theory of vertex algebras. Their relation is similar to that of Lie algebras and their universal enveloping algebras. Associative conformal algebras play a role in conformal representation theory. We introduce the notions of conformal identity and unital associative conformal algebras and classify finitely generated simple unital associative conformal algebras of linear growth. These are precisely the complete algebras of conformal endomorphisms of finite ...
Magnetic resonance imaging for precise radiotherapy of small laboratory animals
Frenzel, Thorsten [Universitaetsklinikum Hamburg-Eppendorf, Hamburg (Germany). Bereich Strahlentherapie; Universitaetsklinikum Hamburg-Eppendorf, Hamburg (Germany). Inst. fuer Anatomie und Experimentelle Morphologie; Kaul, Michael Gerhard; Ernst, Thomas Michael; Salamon, Johannes [Universitaetsklinikum Hamburg-Eppendorf, Hamburg (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Jaeckel, Maria [Universitaetsklinikum Hamburg-Eppendorf, Hamburg (Germany). Klinik und Poliklinik fuer Strahlentherapie und Radioonkologie; Schumacher, Udo [Universitaetsklinikum Hamburg-Eppendorf, Hamburg (Germany). Inst. fuer Anatomie und Experimentelle Morphologie; Kruell, Andreas [Universitaetsklinikum Hamburg-Eppendorf, Hamburg (Germany). Bereich Strahlentherapie
2017-05-01
Radiotherapy of small laboratory animals (SLA) is often not applied as precisely as in humans. Here we describe the use of a dedicated SLA magnetic resonance imaging (MRI) scanner for precise tumor volumetry, radiotherapy treatment planning, and diagnostic imaging in order to make the experiments more accurate. Different human cancer cells were injected at the lower trunk of pfp/rag2 and SCID mice to allow for local tumor growth. Data from cross-sectional MRI scans were transferred to a clinical treatment planning system (TPS) for humans. Manual palpation of the tumor size was compared with the tumor size calculated by the TPS and with tumor weight at necropsy. As a feasibility study, MRI-based treatment plans were calculated for a clinical 6 MV linear accelerator using a micro multileaf collimator (μMLC). In addition, diagnostic MRI scans were used to investigate animals that fared clinically poorly during the study. MRI is superior for precise tumor volume definition, whereas manual palpation underestimates tumor size. Cross-sectional MRI allows for treatment planning, so conformal irradiation of mice with a clinical linear accelerator using a μMLC is in principle feasible. Several internal pathologies were detected during the experiment using the dedicated scanner. MRI is a key technology for precise radiotherapy of SLA. The scanning protocols provided are suited for tumor volumetry, treatment planning, and diagnostic imaging.
Wiedemann, H.
1981-11-01
Since no linear colliders have been built yet it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center of mass energy above say 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. There are two kinds of problems: one which is related to the feasibility of the principle and the other kind of problems is associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.
Blyth, T S
2002-01-01
Basic Linear Algebra is a text for first year students leading from concrete examples to abstract theorems, via tutorial-type exercises. More exercises (of the kind a student may expect in examination papers) are grouped at the end of each section. The book covers the most important basics of any first course on linear algebra, explaining the algebra of matrices with applications to analytic geometry, systems of linear equations, difference equations and complex numbers. Linear equations are treated via Hermite normal forms which provides a successful and concrete explanation of the notion of linear independence. Another important highlight is the connection between linear mappings and matrices leading to the change of basis theorem which opens the door to the notion of similarity. This new and revised edition features additional exercises and coverage of Cramer's rule (omitted from the first edition). However, it is the new, extra chapter on computer assistance that will be of particular interest to readers:...
Dongarra, Jack; Ltaief, Hatem; Luszczek, Piotr R.; Weaver, Vincent M.
2012-01-01
We propose to study the impact on the energy footprint of two advanced algorithmic strategies in the context of high performance dense linear algebra libraries: (1) mixed precision algorithms with iterative refinement allow to run at the peak performance of single precision floating-point arithmetic while achieving double precision accuracy and (2) tree reduction technique exposes more parallelism when factorizing tall and skinny matrices for solving over determined systems of linear equations or calculating the singular value decomposition. Integrated within the PLASMA library using tile algorithms, which will eventually supersede the block algorithms from LAPACK, both strategies further excel in performance in the presence of a dynamic task scheduler while targeting multicore architecture. Energy consumption measurements are reported along with parallel performance numbers on a dual-socket quad-core Intel Xeon as well as a quad-socket quad-core Intel Sandy Bridge chip, both providing component-based energy monitoring at all levels of the system, through the Power Pack framework and the Running Average Power Limit model, respectively. © 2012 IEEE.
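The first strategy mentioned, mixed-precision solving with iterative refinement, can be sketched in a few lines. The code below is an illustrative toy, not the PLASMA implementation: "low precision" is mimicked by rounding the solver's output to four significant digits, while the residual is recomputed in full double precision on each pass.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting, in plain double precision."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):  # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def chop(v, sig=4):
    """Keep ~4 significant digits: a stand-in for a low-precision solve."""
    return float(f"{v:.{sig - 1}e}")

def low_precision_solve(A, b):
    return [chop(xi) for xi in solve(A, b)]

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

x = low_precision_solve(A, b)   # cheap, low-accuracy first solve
for _ in range(3):              # refinement: residual in full precision
    r = [bi - sum(aij * xj for aij, xj in zip(row, x))
         for row, bi in zip(A, b)]
    d = low_precision_solve(A, r)
    x = [xi + di for xi, di in zip(x, d)]

print(x)  # converges to the double-precision solution [1/11, 7/11]
```

Each refinement pass shrinks the error by roughly the low-precision rounding factor, which is why running the expensive factorization in fast low precision still yields full double-precision accuracy, the property the abstract exploits for performance and energy savings.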
Comparing linear probability model coefficients across groups
Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt
2015-01-01
This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.
Matrices and linear transformations
Cullen, Charles G
1990-01-01
"Comprehensive . . . an excellent introduction to the subject." - Electronic Engineer's Design Magazine. This introductory textbook, aimed at sophomore- and junior-level undergraduates in mathematics, engineering, and the physical sciences, offers a smooth, in-depth treatment of linear algebra and matrix theory. The major objects of study are matrices over an arbitrary field. Contents include Matrices and Linear Systems; Vector Spaces; Determinants; Linear Transformations; Similarity: Part I and Part II; Polynomials and Polynomial Matrices; Matrix Analysis; and Numerical Methods. The first
Efficient Non Linear Loudspeakers
Petersen, Bo R.; Agerkvist, Finn T.
2006-01-01
Loudspeakers have traditionally been designed to be as linear as possible. However, as techniques for compensating non-linearities are emerging, it becomes possible to use other design criteria. This paper presents and examines a new idea for improving the efficiency of loudspeakers at high levels by changing the voice coil layout. This deliberate non-linear design has the benefit that a smaller amplifier can be used, which reduces system cost as well as power consumption.
Core seismic behaviour: linear and non-linear models
Bernard, M.; Van Dorsselaere, M.; Gauvain, M.; Jenapierre-Gantenbein, M.
1981-08-01
The usual methodology for core seismic behaviour analysis leads to a double, complementary approach: defining a core model to be included in the reactor-block seismic response analysis, simple enough but representative of the basic movements (diagrid or slab), and defining a finer core model, with basic data issued from the first model. This paper presents the history of the different models of both kinds. The inert mass model (IMM) yielded a first rough diagrid movement. The direct linear model (DLM), without shocks and with sodium as an added mass, led to two variants: DLM 1, with independent movements of the fuel and radial blanket subassemblies, and DLM 2, with a combined core movement. The non-linear model (NLM) ''CORALIE'' uses the same basic modelization (finite-element beams) but accounts for shocks. It studies the response of a diameter on flats and takes into account the fluid coupling and the wrapper tube flexibility at the pad level. Damping consists of a modal part of 2% and a part due to shocks. Finally, ''CORALIE'' yields the time history of the displacements and efforts on the supports, but damping (probably greater than 2%) and fluid-structure interaction remain to be determined precisely. The validation experiments were performed on a RAPSODIE core mock-up at scale 1, in 1/3 similitude with respect to SPX 1. The equivalent linear model (ELM) was developed for the SPX 1 reactor-block response analysis and a specified seismic level (SB or SM). It is composed of several oscillators fixed to the diagrid and yields the same maximum displacements and efforts as the NLM. The SPX 1 core seismic analysis, with a diagrid input spectrum corresponding to a 0.1 g group acceleration, has been carried out with these models; some aspects of these calculations are presented here
Faraway, Julian J
2014-01-01
A Hands-On Way to Learning Data Analysis. Part of the core of statistics, linear models are used to make predictions and explain the relationship between the response and the predictors. Understanding linear models is crucial to a broader competence in the practice of statistics. Linear Models with R, Second Edition explains how to use linear models in physical science, engineering, social science, and business applications. The book incorporates several improvements that reflect how the world of R has greatly expanded since the publication of the first edition. New to the Second Edition: Reorganiz
Carr, Joseph
1996-01-01
The linear IC market is large and growing, as is the demand for well trained technicians and engineers who understand how these devices work and how to apply them. Linear Integrated Circuits provides in-depth coverage of the devices and their operation, but not at the expense of practical applications in which linear devices figure prominently. This book is written for a wide readership from FE and first degree students, to hobbyists and professionals.Chapter 1 offers a general introduction that will provide students with the foundations of linear IC technology. From chapter 2 onwa
Fault tolerant linear actuator
Tesar, Delbert
2004-09-14
In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.
Superconducting linear accelerator cryostat
Ben-Zvi, I.; Elkonin, B.V.; Sokolowski, J.S.
1984-01-01
A large vertical cryostat for a superconducting linear accelerator using quarter wave resonators has been developed. The essential technical details, operational experience and performance are described. (author)
Quantum algorithm for linear regression
Wang, Guoming
2017-07-01
We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Differently from previous algorithms which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in the classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly( log2(N ) ,d ,κ ,1 /ɛ ) , where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ɛ is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.
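For contrast with the quantum algorithm, the classical least-squares fit it targets has a simple closed form in one dimension. The following is a generic textbook sketch, not code from the paper; the function name and test data are illustrative.

```python
def least_squares_line(xs, ys):
    """Closed-form least-squares fit of y ~ a + b*x: the classical analogue
    of the fit whose parameters the quantum algorithm outputs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]           # exactly y = 1 + 2x
print(least_squares_line(xs, ys))   # → (1.0, 2.0)
```

The quantum speedup concerns solving this problem for large N and d with a nonsparse design matrix; the normal-equations arithmetic itself is unchanged.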
Khazan A.
2011-01-01
Full Text Available In the earlier study (Khazan A. Upper Limit in Mendeleev's Periodic Table: Element No. 155. 2nd ed., Svenska fysikarkivet, Stockholm, 2010), the author showed how Rhodium can be applied to the hyperbolic law of the Periodic Table of Elements in order to calculate, with high precision, all other elements conceivable in the Table. Here we obtain the same result with the use of fraction linear functions (adjacent hyperbolas).
High precision innovative micropump for artificial pancreas
Chappel, E.; Mefti, S.; Lettieri, G.-L.; Proennecke, S.; Conan, C.
2014-03-01
The concept of an artificial pancreas, which comprises an insulin pump, a continuous glucose meter and a control algorithm, is a major step forward in managing patients with type 1 diabetes mellitus. The stability of the control algorithm relies on a short-term precision micropump to deliver rapid-acting insulin and on specific integrated sensors able to monitor any failure leading to a loss of accuracy. Debiotech's MEMS micropump, based on the membrane pump principle, is made of a stack of 3 silicon wafers. The pumping chamber comprises a pillar check-valve at the inlet, a pumping membrane which is actuated against stop limiters by a piezo cantilever, an anti-free-flow outlet valve and a pressure sensor. The micropump inlet is tightly connected to the insulin reservoir while the outlet is in direct communication with the patient's skin via a cannula. To meet the requirements of a pump dedicated to closed-loop applications for diabetes care, in addition to the well-controlled displacement of the pumping membrane, the high precision of the micropump rests on specific actuation profiles that balance the effect of pump elasticity in a low-consumption push-pull mode.
Patten, B.C.
1983-04-01
Two issues concerning linearity or nonlinearity of natural systems are considered. Each is related to one of the two alternative defining properties of linear systems, superposition and decomposition. Superposition exists when a linear combination of inputs to a system results in the same linear combination of outputs that individually correspond to the original inputs. To demonstrate this property it is necessary that all initial states and inputs of the system which impinge on the output in question be included in the linear combination manipulation. As this is difficult or impossible to do with real systems of any complexity, nature appears nonlinear even though it may be linear. A linear system that displays nonlinear behavior for this reason is termed pseudononlinear. The decomposition property exists when the dynamic response of a system can be partitioned into an input-free portion due to state plus a state-free portion due to input. This is a characteristic of all linear systems, but not of nonlinear systems. Without the decomposition property, it is not possible to distinguish which portions of a system's behavior are due to innate characteristics (self) vs. outside conditions (environment), which is an important class of questions in biology and ecology. Some philosophical aspects of these findings are then considered. It is suggested that those ecologists who hold to the view that organisms and their environments are separate entities are in effect embracing a linear view of nature, even though their belief systems and mathematical models tend to be nonlinear. On the other hand, those who consider that the organism-environment complex forms a single inseparable unit are implicitly involved in nonlinear thought, which may be in conflict with the linear modes and models that some of them use. The need to rectify these ambivalences on the part of both groups is indicated.
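The superposition test described in this abstract can be stated numerically: a system is linear if a weighted combination of inputs yields the same weighted combination of the individual outputs. A minimal sketch with two invented toy systems (not models from the paper), one linear and one with a quadratic term for contrast:

```python
import numpy as np

def linear_sys(x):
    # toy linear system: output is a fixed linear map of the input
    A = np.array([[2.0, -1.0], [0.5, 3.0]])
    return A @ x

def nonlinear_sys(x):
    # toy nonlinear system: the quadratic term breaks superposition
    return x + 0.1 * x**2

def satisfies_superposition(sys, x1, x2, a=2.0, b=-3.0, tol=1e-9):
    """Check sys(a*x1 + b*x2) == a*sys(x1) + b*sys(x2)."""
    lhs = sys(a * x1 + b * x2)
    rhs = a * sys(x1) + b * sys(x2)
    return np.allclose(lhs, rhs, atol=tol)

x1 = np.array([1.0, 2.0])
x2 = np.array([-0.5, 4.0])
print(satisfies_superposition(linear_sys, x1, x2))     # True
print(satisfies_superposition(nonlinear_sys, x1, x2))  # False
```

As the abstract notes, the practical difficulty is not this check itself but enumerating all states and inputs that impinge on the output of a real system.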
Constraining supersymmetry with precision data
Pierce, D.M.; Erler, J.
1997-01-01
We discuss the results of a global fit to precision data in supersymmetric models. We consider both gravity- and gauge-mediated models. As the superpartner spectrum becomes light, the global fit to the data typically results in larger values of χ². We indicate the regions of parameter space which are excluded by the data. We discuss the additional effect of the B(B→X_s γ) measurement. Our analysis excludes chargino masses below M_Z in the simplest gauge-mediated model with μ>0, with stronger constraints for larger values of tan β. copyright 1997 American Institute of Physics
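At its core, a global fit of this kind compares model predictions with measured observables through a χ² built from the pulls. A schematic sketch with invented observables and an invented two-parameter prediction function, not the actual precision-data set used in the paper:

```python
import numpy as np

# hypothetical precision observables: measured central values and 1-sigma errors
measurements = np.array([80.37, 91.19, 0.2315])
sigmas       = np.array([0.02,  0.002, 0.0002])

def predictions(params):
    # stand-in for the model's predicted observables as a function of its
    # parameters; a real analysis would compute these from the SUSY spectrum
    a, b = params
    return np.array([80.36 + 0.01 * a, 91.19 + 0.001 * b, 0.2314 + 0.0001 * a])

def chi2(params):
    # standard chi-square: sum of squared pulls
    pulls = (predictions(params) - measurements) / sigmas
    return np.sum(pulls**2)

print(chi2((1.0, 0.0)))  # parameter point that reproduces the data exactly
print(chi2((0.0, 0.0)))  # a disfavoured point gives a larger chi-square
```

Regions where χ² exceeds the appropriate confidence-level threshold are then quoted as excluded, which is the sense in which a light superpartner spectrum "results in larger values of χ²".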
High precision Standard Model Physics
Magnin, J.
2009-01-01
The main goal of the LHCb experiment, one of the four large experiments at the Large Hadron Collider, is to try to answer the question of why Nature prefers matter over antimatter. This will be done by studying the decays of b quarks and their antimatter partners, b-bar, which will be produced in billions in 14 TeV p-p collisions by the LHC. In addition, as 'beauty' particles mainly decay into charm particles, an interesting program of charm physics will be carried out, allowing quantities such as D⁰-D̄⁰ mixing to be measured with incredible precision.
Electroweak precision measurements in CMS
Dordevic, Milos
2017-01-01
An overview of recent results on electroweak precision measurements from the CMS Collaboration is presented. Studies of the weak boson differential transverse momentum spectra, Z boson angular coefficients, forward-backward asymmetry of Drell-Yan lepton pairs and charge asymmetry of W boson production are made in comparison to the state-of-the-art Monte Carlo generators and theoretical predictions. The results show a good agreement with the Standard Model. As a proof of principle for future W mass measurements, a W-like analysis of the Z boson mass is performed.
Precision proton spectrometers for CMS
Albrow, Michael
2013-01-01
We plan to add high-precision tracking and timing detectors at z = ±240 m to CMS to study exclusive processes p + p → p + X + p at high luminosity. This enables the LHC to be used as a tagged photon-photon collider, with X = l+l- and W+W-, and as a "tagged" gluon-gluon collider (with a spectator gluon) for QCD studies with jets. A second stage at z = ±420 m would allow observations of exclusive Higgs boson production.
Precise Analysis of String Expressions
Christensen, Aske Simon; Møller, Anders; Schwartzbach, Michael Ignatieff
2003-01-01
We perform static analysis of Java programs to answer a simple question: which values may occur as results of string expressions? The answers are summarized for each expression by a regular language that is guaranteed to contain all possible values. We present several applications of this analysis, including statically checking the syntax of dynamically generated expressions, such as SQL queries. Our analysis constructs flow graphs from class files and generates a context-free grammar with a nonterminal for each string expression. The language of this grammar is then widened into a regular language … are automatically produced. We present extensive benchmarks demonstrating that the analysis is efficient and produces results of useful precision.
Andy Clark
2013-05-01
An appreciation of the many roles of ‘precision-weighting’ (upping the gain on select populations of prediction error units) opens the door to better accounts of planning and ‘offline simulation’, makes suggestive contact with large bodies of work on embodied and situated cognition, and offers new perspectives on the ‘active brain’. Combined with the complex affordances of language and culture, and operating against the essential backdrop of a variety of more biologically basic ploys and stratagems, the result is a maximally context-sensitive, restless, constantly self-reconfiguring architecture.
Thin films for precision optics
Araujo, J.F.; Maurici, N.; Castro, J.C. de
1983-01-01
The technology of producing dielectric and/or metallic thin films for high-precision optical components is discussed. Computer programs were developed in order to calculate and register, graphically, reflectance and transmittance spectra of multi-layer films. The technology of vacuum evaporation of several materials was implemented in our thin-films laboratory; various films for optics were then developed. The possibility of first calculating film characteristics and then producing the film is of great advantage, since it reduces the time required to produce a new type of film and also reduces the cost of the project. (C.L.B.) [pt
Linear colliders - prospects 1985
Rees, J.
1985-06-01
We discuss the scaling laws of linear colliders and their consequences for accelerator design. We then report on the SLAC Linear Collider project and comment on experience gained on that project and its application to future colliders. 9 refs., 2 figs
Richter, B.
1985-01-01
A report is given on the goals and progress of the SLAC Linear Collider. The author discusses the status of the machine and the detectors and gives an overview of the physics which can be done at this new facility. He also gives some ideas on how (and why) large linear colliders of the future should be built
Rogner, H.H.
1989-01-01
The submitted sections on linear programming are extracted from 'Theorie und Technik der Planung' (1978) by W. Blaas and P. Henseler and reformulated for presentation at the Workshop. They provide a brief introduction to the theory of linear programming and to some essential aspects of the SIMPLEX solution algorithm for the purposes of economic planning processes. 1 fig
Rowe, C.H.; Wilton, M.S. de.
1979-01-01
An improved recirculating electron beam linear accelerator of the racetrack type is described. The system comprises a beam path of four straight legs with four Pretzel bending magnets, one at the end of each leg, to direct the beam into the next leg of the beam path. At least one of the beam path legs includes a linear accelerator. (UK)
Precision and reproducibility in AMS radiocarbon measurements.
Hotchkis, M A; Fink, D; Hua, Q; Jacobsen, G E; Lawson, E M; Smith, A M; Tuniz, C [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)
1997-12-31
Accelerator Mass Spectrometry (AMS) is a technique by which rare radioisotopes such as ¹⁴C can be measured at environmental levels with high efficiency. Instead of detecting radioactivity, which is very weak for long-lived environmental radioisotopes, atoms are counted directly. The sample is placed in an ion source, from which a negative ion beam of the atoms of interest is extracted, mass analysed, and injected into a tandem accelerator. After stripping to positive charge states in the accelerator HV terminal, the ions are further accelerated, analysed with magnetic and electrostatic devices and counted in a detector. An isotopic ratio is derived from the number of radioisotope atoms counted in a given time and the beam current of a stable isotope of the same element, measured after the accelerator. For radiocarbon, ¹⁴C/¹³C ratios are usually measured, and the ratio of an unknown sample is compared to that of a standard. The achievable precision for such ratio measurements is limited primarily by ¹⁴C counting statistics and also by a variety of factors related to accelerator and ion source stability. At the ANTARES AMS facility at Lucas Heights Research Laboratories we are currently able to measure ¹⁴C with 0.5% precision. In the two years since becoming operational, more than 1000 ¹⁴C samples have been measured. Recent improvements in precision for ¹⁴C have been achieved with the commissioning of a 59-sample ion source. The measurement system, from sample changing to data acquisition, is under common computer control. These developments have allowed a new regime of automated multi-sample processing which has impacted both on the system throughput and the measurement precision. We have developed data evaluation methods at ANTARES which cross-check the self-consistency of the statistical analysis of our data. Rigorous data evaluation is invaluable in assessing the true reproducibility of the measurement system and aids in
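The statement that precision is limited primarily by counting statistics can be made concrete: for Poisson counting, the relative precision of a count of N atoms scales as 1/√N. A small illustrative calculation (the 0.5% figure comes from the abstract; treating it as purely statistical is a simplifying assumption):

```python
import math

def relative_precision(counts):
    # Poisson counting statistics: sigma_N = sqrt(N), so sigma_N / N = 1 / sqrt(N)
    return 1.0 / math.sqrt(counts)

def counts_needed(target_rel_precision):
    # invert the scaling law: N = 1 / p^2
    return math.ceil(1.0 / target_rel_precision**2)

# radioisotope counts needed to reach the quoted 0.5% precision
print(counts_needed(0.005))       # 40000
print(relative_precision(40000))  # 0.005
```

In practice the accelerator and ion-source stability terms mentioned in the abstract add to this statistical floor, so the true uncertainty is somewhat larger than 1/√N alone.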
Loescher, D.H. [Sandia National Labs., Albuquerque, NM (United States). Systems Surety Assessment Dept.; Noren, K. [Univ. of Idaho, Moscow, ID (United States). Dept. of Electrical Engineering
1996-09-01
The current that flows between the electrical test equipment and the nuclear explosive must be limited to safe levels during electrical tests conducted on nuclear explosives at the DOE Pantex facility. The safest way to limit the current is to use batteries that can provide only acceptably low current into a short circuit; unfortunately this is not always possible. When it is not possible, current limiters, along with other design features, are used to limit the current. Three types of current limiter, the fuse blower, the resistor limiter, and the MOSFET-pass-transistor limiter, are used extensively in Pantex test equipment. Detailed failure mode and effects analyses were conducted on these limiters. Two other types of limiter were also analyzed. It was found that there is no single best type of limiter that should be used in all applications. The fuse blower has advantages when many circuits must be monitored, a low insertion voltage drop is important, and size and weight must be kept low. However, this limiter has many failure modes that can lead to the loss of overcurrent protection. The resistor limiter is simple and inexpensive, but is normally usable only on circuits for which the nominal current is less than a few tens of milliamperes. The MOSFET limiter can be used on high-current circuits, but it has a number of single-point failure modes that can lead to a loss of protective action. Because bad component placement or poor wire routing can defeat any limiter, placement and routing must be designed carefully and documented thoroughly.
Micropropulsion Systems for Precision Controlled Space Flight
Larsen, Jack
Space science is subject to a constantly increasing demand for larger coherence lengths or apertures of the space observation systems, which in turn translates into a demand for increased dimensions and subsequently cost and complexity of the systems. When this increasing demand reaches the practical limitations of increasing the physical dimensions of the spacecraft, the observation platforms will have to be distributed on more spacecraft flying in very accurate formations. Consequently, the observation platform becomes much more sensitive to disturbances from the space environment … This project is thus concentrating on developing a method by which an entire, efficient control system, compensating for the disturbances from the space environment and thereby enabling precision formation flight, can be realized. The space environment is initially studied and the knowledge gained is used …
Precision Continuum Receivers for Astrophysical Applications
Wollack, Edward J.
2011-01-01
Cryogenically cooled HEMT (High Electron Mobility Transistor) amplifiers find widespread use in radioastronomy receivers. In recent years, these devices have also been commonly employed in broadband receivers for precision measurements of the Cosmic Microwave Background (CMB) radiation. In this setting, the combination of ultra-low-noise and low-spectral-resolution observations reinforces the importance of achieving suitable control over the device environment in order to reach fundamentally limited receiver performance. The influence of the intrinsic amplifier stability at low frequencies on data quality (e.g., achievable noise and residual temporal correlations), on observational and calibration strategies, and on architectural mitigation approaches in this setting will be discussed. The implications for system performance of the device-level 1/f fluctuations reported in the literature will be reviewed.
Semidefinite linear complementarity problems
Eckhardt, U.
1978-04-01
Semidefinite linear complementarity problems arise by discretization of variational inequalities describing e.g. elastic contact problems, free boundary value problems etc. In the present paper linear complementarity problems are introduced and the theory as well as the numerical treatment of them are described. In the special case of semidefinite linear complementarity problems a numerical method is presented which combines the advantages of elimination and iteration methods without suffering from their drawbacks. This new method has very attractive properties since it has a high degree of invariance with respect to the representation of the set of all feasible solutions of a linear complementarity problem by linear inequalities. By means of some practical applications the properties of the new method are demonstrated. (orig.) [de
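A standard iterative scheme for linear complementarity problems of the kind described (find z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0) is projected Gauss-Seidel. A minimal sketch on a small positive-definite example; this is a generic textbook iteration, not the hybrid elimination/iteration method the abstract proposes:

```python
import numpy as np

def projected_gauss_seidel(M, q, iters=200):
    """Solve the LCP: z >= 0, w = M z + q >= 0, z^T w = 0,
    for symmetric positive-definite M, by sweeping over each
    component and projecting onto the nonnegative orthant."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # residual of row i, excluding the diagonal contribution
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, 2.0])
z = projected_gauss_seidel(M, q)
w = M @ z + q
print(z)      # componentwise nonnegative
print(w)      # componentwise nonnegative
print(z @ w)  # complementarity: approximately zero
```

In the contact-problem discretizations mentioned above, z would collect the contact forces and w the gaps, with complementarity expressing that a force acts only where the gap closes.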
Axler, Sheldon
2015-01-01
This best-selling textbook for a second course in linear algebra is aimed at undergrad math majors and graduate students. The novel approach taken here banishes determinants to the end of the book. The text focuses on the central goal of linear algebra: understanding the structure of linear operators on finite-dimensional vector spaces. The author has taken unusual care to motivate concepts and to simplify proofs. A variety of interesting exercises in each chapter helps students understand and manipulate the objects of linear algebra. The third edition contains major improvements and revisions throughout the book. More than 300 new exercises have been added since the previous edition. Many new examples have been added to illustrate the key ideas of linear algebra. New topics covered in the book include product spaces, quotient spaces, and dual spaces. Beautiful new formatting creates pages with an unusually pleasant appearance in both print and electronic versions. No prerequisites are assumed other than the ...
Gaseous tracking at linear hadron collider: Pushing the limits
… measures the energy of the particles unmeasured by the previous layer. The size of a … electrons. The drifting positive ions produce the major component of the signal. … Consequences of aging are worsening of the energy resolution, loss of gain …
Linearization Technologies for Broadband Radio-Over-Fiber Transmission Systems
Xiupu Zhang
2014-11-01
Linearization technologies that can be used for linearizing RoF transmission are reviewed. Three main linearization methods, i.e. electrical analog linearization, optical linearization, and electrical digital linearization, are presented and compared. Analog linearization can be achieved using analog predistortion circuits and can be used for suppression of odd-order nonlinear distortion components, such as third and fifth order. Optical linearization includes mixed-polarization, dual-wavelength, optical channelization and other methods, implemented in the optical domain, to suppress both even- and odd-order nonlinear distortion components, such as second and third order. Digital predistortion has been a widely used linearization method for RF power amplifiers. However, digital linearization, which requires an analog-to-digital converter, is severely limited to hundreds of MHz of bandwidth. Instead, analog and optical linearization provide broadband linearization up to tens of GHz. Therefore, for broadband radio-over-fiber transmission that can be used for future broadband cloud radio access networks, analog and optical linearization are more appropriate than digital linearization. Generally speaking, both analog and optical linearization are able to improve the spur-free dynamic range by more than 10 dB over tens of GHz. In order for current digital linearization to be used for broadband radio-over-fiber transmission, reduced linearization complexity and increased linearization bandwidth are required. Moreover, some digital linearization methods in which the complexity can be reduced, such as the Hammerstein type, may be more promising and require further investigation.
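The predistortion principle behind these linearizers can be illustrated numerically: if a memoryless link compresses as y = x - a·x³, applying the leading-order inverse x + a·x³ first cancels the third-order term, leaving only a smaller O(a²) residual. A toy sketch with an invented coefficient, not any of the circuits the review covers:

```python
import numpy as np

A3 = 0.05  # hypothetical third-order coefficient of the nonlinear link

def link(x):
    # memoryless third-order compressive nonlinearity
    return x - A3 * x**3

def predistort(x):
    # leading-order inverse: pre-expand by +A3*x^3 so the link's
    # -A3*x^3 term cancels; the residual error is O(A3^2)
    return x + A3 * x**3

x = np.linspace(-1.0, 1.0, 201)
raw_error = np.max(np.abs(link(x) - x))             # distortion without predistortion
lin_error = np.max(np.abs(link(predistort(x)) - x)) # distortion after predistortion
print(raw_error)  # dominated by the third-order term
print(lin_error)  # substantially smaller residual
```

The digital variant applies the same inverse in sampled form, which is why its usable bandwidth is bounded by the analog-to-digital converter, as noted above.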
Precision luminosity measurements at LHCb
Aaij, Roel; Adinolfi, Marco; Affolder, Anthony; Ajaltouni, Ziad; Akar, Simon; Albrecht, Johannes; Alessio, Federico; Alexander, Michael; Ali, Suvayu; Alkhazov, Georgy; Alvarez Cartelle, Paula; Alves Jr, Antonio Augusto; Amato, Sandra; Amerio, Silvia; Amhis, Yasmine; An, Liupan; Anderlini, Lucio; Anderson, Jonathan; Andreassen, Rolf; Andreotti, Mirco; Andrews, Jason; Appleby, Robert; Aquines Gutierrez, Osvaldo; Archilli, Flavio; Artamonov, Alexander; Artuso, Marina; Aslanides, Elie; Auriemma, Giulio; Baalouch, Marouen; Bachmann, Sebastian; Back, John; Badalov, Alexey; Baesso, Clarissa; Baldini, Wander; Barlow, Roger; Barschel, Colin; Barsuk, Sergey; Barter, William; Batozskaya, Varvara; Battista, Vincenzo; Bay, Aurelio; Beaucourt, Leo; Beddow, John; Bedeschi, Franco; Bediaga, Ignacio; Belogurov, Sergey; Belous, Konstantin; Belyaev, Ivan; Ben-Haim, Eli; Bencivenni, Giovanni; Benson, Sean; Benton, Jack; Berezhnoy, Alexander; Bernet, Roland; Bettler, Marc-Olivier; van Beuzekom, Martinus; Bien, Alexander; Bifani, Simone; Bird, Thomas; Bizzeti, Andrea; Bjørnstad, Pål Marius; Blake, Thomas; Blanc, Frédéric; Blouw, Johan; Blusk, Steven; Bocci, Valerio; Bondar, Alexander; Bondar, Nikolay; Bonivento, Walter; Borghi, Silvia; Borgia, Alessandra; Borsato, Martino; Bowcock, Themistocles; Bowen, Espen Eie; Bozzi, Concezio; Brambach, Tobias; Bressieux, Joël; Brett, David; Britsch, Markward; Britton, Thomas; Brodzicka, Jolanta; Brook, Nicholas; Brown, Henry; Bursche, Albert; Buytaert, Jan; Cadeddu, Sandro; Calabrese, Roberto; Calvi, Marta; Calvo Gomez, Miriam; Campana, Pierluigi; Campora Perez, Daniel; Carbone, Angelo; Carboni, Giovanni; Cardinale, Roberta; Cardini, Alessandro; Carson, Laurence; Carvalho Akiba, Kazuyoshi; Casse, Gianluigi; Cassina, Lorenzo; Castillo Garcia, Lucia; Cattaneo, Marco; Cauet, Christophe; Cenci, Riccardo; Charles, Matthew; Charpentier, Philippe; Chefdeville, Maximilien; Chen, Shanzhen; Cheung, Shu-Faye; Chiapolini, Nicola; Chrzaszcz, Marcin; Ciba, 
Krzystof; Cid Vidal, Xabier; Ciezarek, Gregory; Clarke, Peter; Clemencic, Marco; Cliff, Harry; Closier, Joel; Coco, Victor; Cogan, Julien; Cogneras, Eric; Cojocariu, Lucian; Collazuol, Gianmaria; Collins, Paula; Comerma-Montells, Albert; Contu, Andrea; Cook, Andrew; Coombes, Matthew; Coquereau, Samuel; Corti, Gloria; Corvo, Marco; Counts, Ian; Couturier, Benjamin; Cowan, Greig; Craik, Daniel Charles; Cruz Torres, Melissa Maria; Cunliffe, Samuel; Currie, Robert; D'Ambrosio, Carmelo; Dalseno, Jeremy; David, Pascal; David, Pieter; Davis, Adam; De Bruyn, Kristof; De Capua, Stefano; De Cian, Michel; De Miranda, Jussara; De Paula, Leandro; De Silva, Weeraddana; De Simone, Patrizia; Dean, Cameron Thomas; Decamp, Daniel; Deckenhoff, Mirko; Del Buono, Luigi; Déléage, Nicolas; Derkach, Denis; Deschamps, Olivier; Dettori, Francesco; Di Canto, Angelo; Dijkstra, Hans; Donleavy, Stephanie; Dordei, Francesca; Dorigo, Mirco; Dosil Suárez, Alvaro; Dossett, David; Dovbnya, Anatoliy; Dreimanis, Karlis; Dujany, Giulio; Dupertuis, Frederic; Durante, Paolo; Dzhelyadin, Rustem; Dziurda, Agnieszka; Dzyuba, Alexey; Easo, Sajan; Egede, Ulrik; Egorychev, Victor; Eidelman, Semen; Eisenhardt, Stephan; Eitschberger, Ulrich; Ekelhof, Robert; Eklund, Lars; El Rifai, Ibrahim; Elsasser, Christian; Ely, Scott; Esen, Sevda; Evans, Hannah Mary; Evans, Timothy; Falabella, Antonio; Färber, Christian; Farinelli, Chiara; Farley, Nathanael; Farry, Stephen; Fay, Robert; Ferguson, Dianne; Fernandez Albor, Victor; Ferreira Rodrigues, Fernando; Ferro-Luzzi, Massimiliano; Filippov, Sergey; Fiore, Marco; Fiorini, Massimiliano; Firlej, Miroslaw; Fitzpatrick, Conor; Fiutowski, Tomasz; Fol, Philip; Fontana, Marianna; Fontanelli, Flavio; Forty, Roger; Francisco, Oscar; Frank, Markus; Frei, Christoph; Frosini, Maddalena; Fu, Jinlin; Furfaro, Emiliano; Gallas Torreira, Abraham; Galli, Domenico; Gallorini, Stefano; Gambetta, Silvia; Gandelman, Miriam; Gandini, Paolo; Gao, Yuanning; García Pardiñas, Julián; Garofoli, 
Justin; Garra Tico, Jordi; Garrido, Lluis; Gascon, David; Gaspar, Clara; Gauld, Rhorry; Gavardi, Laura; Geraci, Angelo; Gersabeck, Evelina; Gersabeck, Marco; Gershon, Timothy; Ghez, Philippe; Gianelle, Alessio; Gianì, Sebastiana; Gibson, Valerie; Giubega, Lavinia-Helena; Gligorov, V.V.; Göbel, Carla; Golubkov, Dmitry; Golutvin, Andrey; Gomes, Alvaro; Gotti, Claudio; Grabalosa Gándara, Marc; Graciani Diaz, Ricardo; Granado Cardoso, Luis Alberto; Graugés, Eugeni; Graziani, Giacomo; Grecu, Alexandru; Greening, Edward; Gregson, Sam; Griffith, Peter; Grillo, Lucia; Grünberg, Oliver; Gui, Bin; Gushchin, Evgeny; Guz, Yury; Gys, Thierry; Hadjivasiliou, Christos; Haefeli, Guido; Haen, Christophe; Haines, Susan; Hall, Samuel; Hamilton, Brian; Hampson, Thomas; Han, Xiaoxue; Hansmann-Menzemer, Stephanie; Harnew, Neville; Harnew, Samuel; Harrison, Jonathan; He, Jibo; Head, Timothy; Heijne, Veerle; Hennessy, Karol; Henrard, Pierre; Henry, Louis; Hernando Morata, Jose Angel; van Herwijnen, Eric; Heß, Miriam; Hicheur, Adlène; Hill, Donal; Hoballah, Mostafa; Hombach, Christoph; Hulsbergen, Wouter; Hunt, Philip; Hussain, Nazim; Hutchcroft, David; Hynds, Daniel; Idzik, Marek; Ilten, Philip; Jacobsson, Richard; Jaeger, Andreas; Jalocha, Pawel; Jans, Eddy; Jaton, Pierre; Jawahery, Abolhassan; Jing, Fanfan; John, Malcolm; Johnson, Daniel; Jones, Christopher; Joram, Christian; Jost, Beat; Jurik, Nathan; Kandybei, Sergii; Kanso, Walaa; Karacson, Matthias; Karbach, Moritz; Karodia, Sarah; Kelsey, Matthew; Kenyon, Ian; Ketel, Tjeerd; Khanji, Basem; Khurewathanakul, Chitsanu; Klaver, Suzanne; Klimaszewski, Konrad; Kochebina, Olga; Kolpin, Michael; Komarov, Ilya; Koopman, Rose; Koppenburg, Patrick; Korolev, Mikhail; Kozlinskiy, Alexandr; Kravchuk, Leonid; Kreplin, Katharina; Kreps, Michal; Krocker, Georg; Krokovny, Pavel; Kruse, Florian; Kucewicz, Wojciech; Kucharczyk, Marcin; Kudryavtsev, Vasily; Kurek, Krzysztof; Kvaratskheliya, Tengiz; La Thi, Viet Nga; Lacarrere, Daniel; Lafferty, George; 
Lai, Adriano; Lambert, Dean; Lambert, Robert W; Lanfranchi, Gaia; Langenbruch, Christoph; Langhans, Benedikt; Latham, Thomas; Lazzeroni, Cristina; Le Gac, Renaud; van Leerdam, Jeroen; Lees, Jean-Pierre; Lefèvre, Regis; Leflat, Alexander; Lefrançois, Jacques; Leo, Sabato; Leroy, Olivier; Lesiak, Tadeusz; Leverington, Blake; Li, Yiming; Likhomanenko, Tatiana; Liles, Myfanwy; Lindner, Rolf; Linn, Christian; Lionetto, Federica; Liu, Bo; Lohn, Stefan; Longstaff, Iain; Lopes, Jose; Lopez-March, Neus; Lowdon, Peter; Lu, Haiting; Lucchesi, Donatella; Luo, Haofei; Lupato, Anna; Luppi, Eleonora; Lupton, Oliver; Machefert, Frederic; Machikhiliyan, Irina V; Maciuc, Florin; Maev, Oleg; Malde, Sneha; Malinin, Alexander; Manca, Giulia; Mancinelli, Giampiero; Mapelli, Alessandro; Maratas, Jan; Marchand, Jean François; Marconi, Umberto; Marin Benito, Carla; Marino, Pietro; Märki, Raphael; Marks, Jörg; Martellotti, Giuseppe; Martens, Aurelien; Martín Sánchez, Alexandra; Martinelli, Maurizio; Martinez Santos, Diego; Martinez Vidal, Fernando; Martins Tostes, Danielle; Massafferri, André; Matev, Rosen; Mathe, Zoltan; Matteuzzi, Clara; Maurin, Brice; Mazurov, Alexander; McCann, Michael; McCarthy, James; McNab, Andrew; McNulty, Ronan; McSkelly, Ben; Meadows, Brian; Meier, Frank; Meissner, Marco; Merk, Marcel; Milanes, Diego Alejandro; Minard, Marie-Noelle; Moggi, Niccolò; Molina Rodriguez, Josue; Monteil, Stephane; Morandin, Mauro; Morawski, Piotr; Mordà, Alessandro; Morello, Michael Joseph; Moron, Jakub; Morris, Adam Benjamin; Mountain, Raymond; Muheim, Franz; Müller, Katharina; Mussini, Manuel; Muster, Bastien; Naik, Paras; Nakada, Tatsuya; Nandakumar, Raja; Nasteva, Irina; Needham, Matthew; Neri, Nicola; Neubert, Sebastian; Neufeld, Niko; Neuner, Max; Nguyen, Anh Duc; Nguyen, Thi-Dung; Nguyen-Mau, Chung; Nicol, Michelle; Niess, Valentin; Niet, Ramon; Nikitin, Nikolay; Nikodem, Thomas; Novoselov, Alexey; O'Hanlon, Daniel Patrick; Oblakowska-Mucha, Agnieszka; Obraztsov, Vladimir; 
Oggero, Serena; Ogilvy, Stephen; Okhrimenko, Oleksandr; Oldeman, Rudolf; Onderwater, Gerco; Orlandea, Marius; Otalora Goicochea, Juan Martin; Owen, Patrick; Oyanguren, Maria Arantza; Pal, Bilas Kanti; Palano, Antimo; Palombo, Fernando; Palutan, Matteo; Panman, Jacob; Papanestis, Antonios; Pappagallo, Marco; Pappalardo, Luciano; Parkes, Christopher; Parkinson, Christopher John; Passaleva, Giovanni; Patel, Girish; Patel, Mitesh; Patrignani, Claudia; Pearce, Alex; Pellegrino, Antonio; Pepe Altarelli, Monica; Perazzini, Stefano; Perret, Pascal; Perrin-Terrin, Mathieu; Pescatore, Luca; Pesen, Erhan; Pessina, Gianluigi; Petridis, Konstantin; Petrolini, Alessandro; Picatoste Olloqui, Eduardo; Pietrzyk, Boleslaw; Pilař, Tomas; Pinci, Davide; Pistone, Alessandro; Playfer, Stephen; Plo Casasus, Maximo; Polci, Francesco; Poluektov, Anton; Polycarpo, Erica; Popov, Alexander; Popov, Dmitry; Popovici, Bogdan; Potterat, Cédric; Price, Eugenia; Price, Joseph David; Prisciandaro, Jessica; Pritchard, Adrian; Prouve, Claire; Pugatch, Valery; Puig Navarro, Albert; Punzi, Giovanni; Qian, Wenbin; Rachwal, Bartolomiej; Rademacker, Jonas; Rakotomiaramanana, Barinjaka; Rama, Matteo; Rangel, Murilo; Raniuk, Iurii; Rauschmayr, Nathalie; Raven, Gerhard; Redi, Federico; Reichert, Stefanie; Reid, Matthew; dos Reis, Alberto; Ricciardi, Stefania; Richards, Sophie; Rihl, Mariana; Rinnert, Kurt; Rives Molina, Vincente; Robbe, Patrick; Rodrigues, Ana Barbara; Rodrigues, Eduardo; Rodriguez Perez, Pablo; Roiser, Stefan; Romanovsky, Vladimir; Romero Vidal, Antonio; Rotondo, Marcello; Rouvinet, Julien; Ruf, Thomas; Ruiz, Hugo; Ruiz Valls, Pablo; Saborido Silva, Juan Jose; Sagidova, Naylya; Sail, Paul; Saitta, Biagio; Salustino Guimaraes, Valdir; Sanchez Mayordomo, Carlos; Sanmartin Sedes, Brais; Santacesaria, Roberta; Santamarina Rios, Cibran; Santovetti, Emanuele; Sarti, Alessio; Satriano, Celestina; Satta, Alessia; Saunders, Daniel Martin; Savrina, Darya; Schiller, Manuel; Schindler, Heinrich; 
Schlupp, Maximilian; Schmelling, Michael; Schmidt, Burkhard; Schneider, Olivier; Schopper, Andreas; Schubiger, Maxime; Schune, Marie Helene; Schwemmer, Rainer; Sciascia, Barbara; Sciubba, Adalberto; Semennikov, Alexander; Sepp, Indrek; Serra, Nicola; Serrano, Justine; Sestini, Lorenzo; Seyfert, Paul; Shapkin, Mikhail; Shapoval, Illya; Shcheglov, Yury; Shears, Tara; Shekhtman, Lev; Shevchenko, Vladimir; Shires, Alexander; Silva Coutinho, Rafael; Simi, Gabriele; Sirendi, Marek; Skidmore, Nicola; Skwarnicki, Tomasz; Smith, Anthony; Smith, Edmund; Smith, Eluned; Smith, Jackson; Smith, Mark; Snoek, Hella; Sokoloff, Michael; Soler, Paul; Soomro, Fatima; Souza, Daniel; Souza De Paula, Bruno; Spaan, Bernhard; Sparkes, Ailsa; Spradlin, Patrick; Sridharan, Srikanth; Stagni, Federico; Stahl, Marian; Stahl, Sascha; Steinkamp, Olaf; Stenyakin, Oleg; Stevenson, Scott; Stoica, Sabin; Stone, Sheldon; Storaci, Barbara; Stracka, Simone; Straticiuc, Mihai; Straumann, Ulrich; Stroili, Roberto; Subbiah, Vijay Kartik; Sun, Liang; Sutcliffe, William; Swientek, Krzysztof; Swientek, Stefan; Syropoulos, Vasileios; Szczekowski, Marek; Szczypka, Paul; Szumlak, Tomasz; T'Jampens, Stephane; Teklishyn, Maksym; Tellarini, Giulia; Teubert, Frederic; Thomas, Christopher; Thomas, Eric; van Tilburg, Jeroen; Tisserand, Vincent; Tobin, Mark; Tolk, Siim; Tomassetti, Luca; Tonelli, Diego; Topp-Joergensen, Stig; Torr, Nicholas; Tournefier, Edwige; Tourneur, Stephane; Tran, Minh Tâm; Tresch, Marco; Trisovic, Ana; Tsaregorodtsev, Andrei; Tsopelas, Panagiotis; Tuning, Niels; Ubeda Garcia, Mario; Ukleja, Artur; Ustyuzhanin, Andrey; Uwer, Ulrich; Vacca, Claudia; Vagnoni, Vincenzo; Valenti, Giovanni; Vallier, Alexis; Vazquez Gomez, Ricardo; Vazquez Regueiro, Pablo; Vázquez Sierra, Carlos; Vecchi, Stefania; Velthuis, Jaap; Veltri, Michele; Veneziano, Giovanni; Vesterinen, Mika; Viaud, Benoit; Vieira, Daniel; Vieites Diaz, Maria; Vilasis-Cardona, Xavier; Vollhardt, Achim; Volyanskyy, Dmytro; Voong, David; 
Vorobyev, Alexey; Vorobyev, Vitaly; Voß, Christian; de Vries, Jacco; Waldi, Roland; Wallace, Charlotte; Wallace, Ronan; Walsh, John; Wandernoth, Sebastian; Wang, Jianchun; Ward, David; Watson, Nigel; Websdale, David; Whitehead, Mark; Wicht, Jean; Wiedner, Dirk; Wilkinson, Guy; Williams, Matthew; Williams, Mike; Wilschut, Hans; Wilson, Fergus; Wimberley, Jack; Wishahi, Julian; Wislicki, Wojciech; Witek, Mariusz; Wormser, Guy; Wotton, Stephen; Wright, Simon; Wyllie, Kenneth; Xie, Yuehong; Xing, Zhou; Xu, Zhirui; Yang, Zhenwei; Yuan, Xuhao; Yushchenko, Oleg; Zangoli, Maria; Zavertyaev, Mikhail; Zhang, Liming; Zhang, Wen Chao; Zhang, Yanxi; Zhelezov, Alexey; Zhokhov, Anatoly; Zhong, Liang; Zvyagin, Alexander
2014-12-05
Measuring cross-sections at the LHC requires the luminosity to be determined accurately at each centre-of-mass energy $\sqrt{s}$. In this paper results are reported from the luminosity calibrations carried out at the LHC interaction point 8 with the LHCb detector for $\sqrt{s}$ = 2.76, 7 and 8 TeV (proton-proton collisions) and for $\sqrt{s_{NN}}$ = 5 TeV (proton-lead collisions). Both the "van der Meer scan" and "beam-gas imaging" luminosity calibration methods were employed. It is observed that the beam density profile cannot always be described by a function that is factorizable in the two transverse coordinates. The introduction of a two-dimensional description of the beams significantly improves the consistency of the results. For proton-proton interactions at $\sqrt{s}$ = 8 TeV a relative precision of the luminosity calibration of 1.47% is obtained using van der Meer scans and 1.43% using beam-gas imaging, resulting in a combined precision of 1.12%. Applying the calibration to the full data set determin...
Laser precision microfabrication in Japan
Miyamoto, Isamu; Ooie, Toshihiko; Takeno, Shozui
2000-11-01
Electronic devices such as mobile phones and microcomputers have rapidly expanded their market in recent years owing to enhanced performance, downsizing and cost reduction. This has been realized by innovation in the precision microfabrication of semiconductors and printed wiring boards (PWB), where laser technologies such as lithography, drilling, trimming, welding and soldering play an important role. In photolithography, for instance, KrF excimer lasers with a resolution of 0.18 micrometers have been used in production instead of mercury lamps. Laser drilling of PWBs has reached rates of over 1000 holes per second; approximately 800 laser drilling systems for PWBs are expected to be delivered to the world market this year, and most of these laser processing systems are manufactured in Japan. Trends in laser microfabrication in Japanese industry are described, along with recent R&D topics, government-supported projects and future tasks of industrial laser precision microfabrication, on the basis of a survey conducted by the Japan Laser Processing Society.
Antihydrogen production and precision experiments
Nieto, M.M.; Goldman, T.; Holzscheiter, M.H.
1996-01-01
The study of CPT invariance with the highest achievable precision in all particle sectors is of fundamental importance for physics. Equally important is the question of the gravitational acceleration of antimatter. In recent years, impressive progress has been achieved in capturing antiprotons in specially designed Penning traps, in cooling them to energies of a few milli-electron volts, and in storing them for hours in a small volume of space. Positrons have been accumulated in large numbers in similar traps, and low energy positron or positronium beams have been generated. Finally, steady progress has been made in trapping and cooling neutral atoms. Thus the ingredients to form antihydrogen at rest are at hand. Once antihydrogen atoms have been captured at low energy, spectroscopic methods can be applied to interrogate their atomic structure with extremely high precision and compare it to its normal matter counterpart, the hydrogen atom. In particular the 1S-2S transition, with an excited-state lifetime of 122 ms and hence a natural linewidth of 5 parts in 10^16, offers in principle the possibility of directly comparing matter and antimatter properties at a level of 1 part in 10^16.
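The quoted linewidth follows directly from the 122 ms lifetime; a quick sanity check (the 1S-2S transition frequency is a textbook value, not taken from this abstract):

```python
import math

tau = 0.122        # 2S excited-state lifetime in s (from the abstract)
f_1s2s = 2.466e15  # hydrogen 1S-2S transition frequency in Hz (textbook value)

linewidth = 1.0 / (2.0 * math.pi * tau)  # natural linewidth in Hz
relative = linewidth / f_1s2s            # fractional linewidth

print(f"natural linewidth: {linewidth:.2f} Hz")  # ~1.30 Hz
print(f"relative linewidth: {relative:.1e}")     # ~5e-16, i.e. 5 parts in 10^16
```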
Laser fusion and precision engineering
Nakai, Sadao
1989-01-01
The development of laser fusion energy, aimed at attaining energy self-sufficiency for Japan and establishing the nation's future perspective, rests on wide fields of advanced science and technology. Its promotion is therefore expected to act as a powerful driver for the creative science and technology that Japan particularly needs. Research on laser fusion is advancing steadily in elucidating the physics of pellet implosion, its basic concept, and the parameters of the compressed plasma. A neutron yield of 10^13 was achieved in September 1986, and compression to 600 times solid density in October 1988. Based on these results, laser fusion research is now in a position to pursue the ignition condition and the realization of break-even. The optical components, high-power laser technology, fuel pellet production, high-resolution measurement, simulation of implosion on supercomputers and related areas are closely tied to precision engineering. In this report, the mechanism of laser fusion, the present status of its research, and the underlying technologies and precision engineering are described. (K.I.)
Spectral theories for linear differential equations
Sell, G.R.
1976-01-01
The use of spectral analysis in the study of linear differential equations with constant coefficients is not only a fundamental technique but also leads to far-reaching consequences in describing the qualitative behaviour of the solutions. The spectral analysis, via the Jordan canonical form, not only leads to a representation theorem for a basis of solutions, but also gives a rather precise statement of the (exponential) growth rates of various solutions. Various attempts have been made to extend this analysis to linear differential equations with time-varying coefficients. The most complete such extension is the Floquet theory for equations with periodic coefficients. For time-varying linear differential equations with aperiodic coefficients several authors have attempted to "extend" the Floquet theory. The precise meaning of such an extension is itself a problem, and we present here several attempts in this direction that are related to the general problem of extending the spectral analysis of equations with constant coefficients. The main purpose of this paper is to introduce some problems of current research. The primary problem we shall examine occurs in the context of linear differential equations with almost periodic coefficients. We call it "the Floquet problem". (author)
CLIC e+e- Linear Collider Studies
Dannheim, Dominik; Linssen, Lucie; Schulte, Daniel; Simon, Frank; Stapnes, Steinar; Toge, Nobukazu; Weerts, Harry; Wells, James
2012-01-01
This document provides input from the CLIC e+e- linear collider studies to the update process of the European Strategy for Particle Physics. It is submitted on behalf of the CLIC/CTF3 collaboration and the CLIC physics and detector study. It describes the exploration of fundamental questions in particle physics at the energy frontier with a future TeV-scale e+e- linear collider based on the Compact Linear Collider (CLIC) two-beam acceleration technique. A high-luminosity high-energy e+e- collider allows for the exploration of Standard Model physics, such as precise measurements of the Higgs, top and gauge sectors, as well as for a multitude of searches for New Physics, either through direct discovery or indirectly, via high-precision observables. Given the current state of knowledge, following the observation of a ~125 GeV Higgs-like particle at the LHC, and pending further LHC results at 8 TeV and 14 TeV, a linear e+e- collider built and operated in centre-of-mass energy stages from a few-hundred GeV up t...
Handbook on linear motor application
1988-10-01
This book guides the application of linear motors. It covers the classification and characteristics of linear motors; the terminology of the linear induction motor; the operating principle; single-sided and double-sided linear induction motors; the linear DC motor, including moving-coil, permanent-magnet moving and non-utility-supply types; the linear pulse motor, including variable and permanent-magnet types; the linear vibration actuator, including the moving-coil type; the linear synchronous motor; the linear electromagnetic motor; the linear electromagnetic solenoid; technical organization; and magnetic levitation, linear motors and sensors.
Precision tests of the standard model at LEP
Mele, Barbara; Universita La Sapienza, Rome
1994-01-01
Recent LEP results on electroweak precision measurements are reviewed. Line-shape and asymmetry analyses at the Z^0 peak are described. Then the consistency of the Standard Model predictions with experimental data, and the consequent limits on the top mass, are discussed. Finally, the possibility of extracting information and constraints on new theoretical models from present data is examined. (author). 20 refs., 5 tabs
Precision Measurement and Improvement of e+, e- Storage Rings
Yan, Y.T.; Cai, Y.; Colocho, W.; Decker, F-J.; Seeman, J.; Sullivan, M.; Turner, J.; Wienands, U.; Woodley, M.; Yocky, G.
2006-01-01
Through horizontal and vertical excitations, we have been able to make a precision measurement of linear geometric optics parameters with Model-Independent Analysis (MIA). We have also been able to build a computer model that matches the real accelerator in linear geometric optics with an SVD-enhanced least-squares fitting process. Recently, with the addition of longitudinal excitation, we are able to build a computer virtual machine that matches the real accelerator in linear optics, including dispersion, without additional fitting variables. With this optics-matched virtual machine, we are able to find solutions that change selected normal and skew quadrupoles for machine optics improvement. This has made major contributions to improving PEP-II optics and luminosity. Examples from application to the PEP-II machines will be presented
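The abstract does not spell out the "SVD-enhanced least-square fitting"; a generic truncated-SVD least-squares solver of the kind such optics fitting typically relies on might look like this (the function name, truncation rule and synthetic data are illustrative assumptions, not the authors' code):

```python
import numpy as np

def svd_lstsq(A, b, rcond=1e-10):
    # Truncated-SVD least squares: discard singular values below
    # rcond * s_max to stabilize the fit against near-degenerate
    # directions, in the spirit of SVD-enhanced fitting.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rcond * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

# Synthetic check: recover known parameters from noisy observations.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
x_true = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
b = A @ x_true + 1e-6 * rng.normal(size=100)
x_fit = svd_lstsq(A, b)
print(np.allclose(x_fit, x_true, atol=1e-3))  # True
```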
Linear regression in astronomy. II
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
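As an illustration of class (1) above, a minimal bootstrap estimate of the slope error of an unweighted regression line (synthetic data and all parameter choices are invented for illustration, not the authors' implementation):

```python
import numpy as np

def bootstrap_slope(x, y, n_boot=2000, seed=0):
    # Bootstrap standard error of an OLS slope: resample (x, y) pairs
    # with replacement and refit the line on each resample.
    rng = np.random.default_rng(seed)
    n = len(x)
    slopes = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)          # resample indices with replacement
        slopes[i] = np.polyfit(x[idx], y[idx], 1)[0]
    return slopes.mean(), slopes.std(ddof=1)

# Synthetic data: true slope 2, unit Gaussian scatter.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, 50)
mean_slope, se_slope = bootstrap_slope(x, y)
```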
Optical surfacing via linear ion source
Wu, Lixiang; Wei, Chaoyang; Shao, Jianda
2017-01-01
We present a concept of surface decomposition extended from double Fourier series to nonnegative sinusoidal wave surfaces, on the basis of which linear ion sources apply to the ultra-precision fabrication of complex surfaces and diffractive optics. The modified Fourier series, or sinusoidal wave surfaces, build a relationship between the fabrication process of optical surfaces and the surface characterization based on power spectral density (PSD) analysis. Also, we demonstrate that the one-dimensional scanning of linear ion source is applicable to the removal of mid-spatial frequency (MSF) errors caused by small-tool polishing in raster scan mode as well as the fabrication of beam sampling grating of high diffractive uniformity without a post-processing procedure. The simulation results show that optical fabrication with linear ion source is feasible and even of higher output efficiency compared with the conventional approach.
Linear ubiquitination in immunity.
Shimizu, Yutaka; Taraborrelli, Lucia; Walczak, Henning
2015-07-01
Linear ubiquitination is a post-translational protein modification recently discovered to be crucial for innate and adaptive immune signaling. The function of linear ubiquitin chains is regulated at multiple levels: generation, recognition, and removal. These chains are generated by the linear ubiquitin chain assembly complex (LUBAC), the only known ubiquitin E3 capable of forming the linear ubiquitin linkage de novo. LUBAC is not only relevant for activation of nuclear factor-κB (NF-κB) and mitogen-activated protein kinases (MAPKs) in various signaling pathways, but importantly, it also regulates cell death downstream of immune receptors capable of inducing this response. Recognition of the linear ubiquitin linkage is specifically mediated by certain ubiquitin receptors, which is crucial for translation into the intended signaling outputs. LUBAC deficiency results in attenuated gene activation and increased cell death, causing pathologic conditions in both mice and humans. Removal of ubiquitin chains is mediated by deubiquitinases (DUBs). Two of them, OTULIN and CYLD, are constitutively associated with LUBAC. Here, we review the current knowledge on linear ubiquitination in immune signaling pathways and the biochemical mechanisms as to how linear polyubiquitin exerts its functions distinctly from those of other ubiquitin linkage types. © 2015 The Authors. Immunological Reviews Published by John Wiley & Sons Ltd.
Doorn, J; Storteboom, T T R; Mulder, A M; de Jong, W H A; Rottier, B L; Kema, I P
2015-07-01
Measurement of chloride in sweat is an essential part of the diagnostic algorithm for cystic fibrosis. The lack of sensitivity and reproducibility of current methods led us to develop an ion chromatography/high-performance liquid chromatography (IC/HPLC) method suitable for the analysis of both chloride and sodium in small volumes of sweat. Precision, linearity and limit of detection of the in-house developed IC/HPLC method were established. A method comparison between the newly developed IC/HPLC method and the traditional Chlorocounter was performed, and trueness was determined using Passing-Bablok method comparison with external quality assurance material (Royal College of Pathologists of Australasia). Precision and linearity fulfill the criteria established by UK guidelines and are comparable with inductively coupled plasma mass spectrometry methods. Passing-Bablok analysis demonstrated excellent correlation between IC/HPLC measurements and external quality assessment target values, for both chloride and sodium. With a limit of quantitation of 0.95 mmol/L, our method is suitable for the analysis of small amounts of sweat and can thus be used in combination with the Macroduct collection system. Although a chromatographic application results in a somewhat more expensive test compared to a Chlorocounter test, more accurate measurements are achieved. In addition, simultaneous measurement of sodium concentrations will result in better detection of false positives, less test repetition and thus faster, more accurate and more effective diagnosis. The described IC/HPLC method, therefore, provides a precise, relatively cheap and easy-to-handle application for the analysis of both chloride and sodium in sweat. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
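Passing-Bablok regression, used above for trueness, is a robust fit built around the median of pairwise slopes; the closely related Theil-Sen estimator captures the core idea (a sketch with made-up method-comparison values; the full Passing-Bablok procedure adds an offset in the slope ranking that is omitted here):

```python
import itertools
import statistics

def theil_sen(x, y):
    # Median of all pairwise slopes, then median residual as intercept.
    # This is the Theil-Sen estimator, the core idea behind robust
    # method-comparison fits such as Passing-Bablok.
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in itertools.combinations(range(len(x)), 2)
              if x[j] != x[i]]
    slope = statistics.median(slopes)
    intercept = statistics.median(yi - slope * xi for xi, yi in zip(x, y))
    return slope, intercept

# Hypothetical paired measurements (e.g. reference vs. new method, mmol/L).
x = [10, 20, 30, 40, 60, 80, 100]
y = [11, 19, 31, 41, 59, 81, 99]
slope, intercept = theil_sen(x, y)  # near-identity fit: slope ~1, intercept small
```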
Sapinski, M.
2012-01-01
With thirteen beam-induced quenches and numerous Machine Development tests, current knowledge of LHC magnet quench limits still contains many unknowns. Various approaches to determining the quench limits are reviewed and results of the tests are presented. An attempt is made to reconstruct a coherent picture emerging from these results. The available methods of computing the quench levels are presented, together with the dedicated particle shower simulations that are necessary to understand the tests. Future experiments needed to reach a better understanding of quench limits, as well as limits for machine operation, are investigated. The possible strategies for setting BLM (Beam Loss Monitor) thresholds are discussed. (author)
Sharpe, Michael B.; Moseley, Douglas J.; Purdie, Thomas G.
2006-01-01
The geometric accuracy and precision of an image-guided treatment system were assessed. Image guidance is performed using an x-ray volume imaging (XVI) system integrated with a linear accelerator and treatment planning system. Using an amorphous silicon detector and x-ray tube, volumetric computed tomography images are reconstructed from kilovoltage radiographs by filtered backprojection. Image fusion and assessment of geometric targeting are supported by the treatment planning system. To assess the limiting accuracy and precision of image-guided treatment delivery, a rigid spherical target embedded in an opaque phantom was subjected to 21 treatment sessions over a three-month period. For each session, a volumetric data set was acquired and loaded directly into an active treatment planning session. Image fusion was used to ascertain the couch correction required to position the target at the prescribed iso-center. Corrections were validated independently using megavoltage electronic portal imaging to record the target position with respect to symmetric treatment beam apertures. An initial calibration cycle followed by repeated image-guidance sessions demonstrated the XVI system could be used to relocate an unambiguous object to within less than 1 mm of the prescribed location. Treatment could then proceed within the mechanical accuracy and precision of the delivery system. The calibration procedure maintained excellent spatial resolution and delivery precision over the duration of this study, while the linear accelerator was in routine clinical use. Based on these results, the mechanical accuracy and precision of the system are ideal for supporting high-precision localization and treatment of soft-tissue targets
Computer-determined assay time based on preset precision
Foster, L.A.; Hagan, R.; Martin, E.R.; Wachter, J.R.; Bonner, C.A.; Malcom, J.E.
1994-01-01
Most current assay systems for special nuclear materials (SNM) operate on the principle of a fixed assay time, which provides acceptable measurement precision without sacrificing the required throughput of the instrument. Waste items to be assayed for SNM content can contain a wide range of nuclear material. Counting all items for the same preset assay time results in a wide range of measurement precision and wastes time at the upper end of the calibration range. A short time sample taken at the beginning of the assay can optimize the analysis time on the basis of the required measurement precision. To illustrate the technique of automatically determining the assay time, measurements were made with a segmented gamma scanner at the Plutonium Facility of Los Alamos National Laboratory, with the assay time for each segment determined by counting statistics in that segment. Segments with very little SNM were quickly determined to be below the lower limit of the measurement range and the measurement was stopped. Segments with significant SNM were optimally assayed to the preset precision. With this method the total assay time for each item is determined by the desired preset precision. This report describes the precision-based algorithm and presents the results of measurements made to test its validity
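A sketch of such a precision-based stopping rule, using the Poisson relative uncertainty sqrt(N)/N of the accumulated counts (rates, interval length and thresholds are invented for illustration; the report's actual algorithm also terminates early on segments below the measurement range):

```python
import numpy as np

def assay_until_precise(rate_cps, target_rel_sd=0.01, dt=1.0,
                        max_time=600.0, seed=0):
    # Count in short intervals and stop once the relative counting
    # uncertainty 1/sqrt(N) reaches the preset precision, or the
    # time cap is hit (segment too weak to reach the target).
    rng = np.random.default_rng(seed)
    counts, t = 0, 0.0
    while t < max_time:
        counts += rng.poisson(rate_cps * dt)  # counts in this interval
        t += dt
        if counts > 0 and 1.0 / counts ** 0.5 <= target_rel_sd:
            break  # preset precision reached
    return counts, t

# A hot segment reaches 1% precision in tens of seconds; a near-empty
# segment runs to the time cap, flagging it for early termination.
n_hot, t_hot = assay_until_precise(rate_cps=500.0)
n_low, t_low = assay_until_precise(rate_cps=0.5)
```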
Validating precision--how many measurements do we need?
Åsberg, Arne; Solem, Kristine Bodal; Mikkelsen, Gustav
2015-10-01
A quantitative analytical method should be sufficiently precise, i.e. the imprecision measured as a standard deviation should be less than the numerical definition of the acceptable standard deviation. We propose that the entire 90% confidence interval for the true standard deviation shall lie below the numerical definition of the acceptable standard deviation in order to assure that the analytical method is sufficiently precise. We also present power function curves to ease the decision on the number of measurements to make. Computer simulation was used to calculate the probability that the upper limit of the 90% confidence interval for the true standard deviation was equal to or exceeded the acceptable standard deviation. Power function curves were constructed for different scenarios. The probability of failure to assure that the method is sufficiently precise increases with decreasing number of measurements and with increasing standard deviation when the true standard deviation is well below the acceptable standard deviation. For instance, the probability of failure is 42% for a precision experiment of 40 repeated measurements in one analytical run and 7% for 100 repeated measurements, when the true standard deviation is 80% of the acceptable standard deviation. Compared to the CLSI guidelines, validating precision according to the proposed principle is more reliable, but demands considerably more measurements. Using power function curves may help when planning studies to validate precision.
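The quoted failure probabilities can be reproduced with a small Monte-Carlo in the spirit of the paper's simulation (a sketch: the chi-square-based 90% confidence interval for a standard deviation is a standard choice, and estimating its quantile by simulation is an implementation shortcut to avoid a SciPy dependency, not part of the original study). With the true SD at 80% of the acceptable SD, n = 40 and n = 100 give roughly the 42% and 7% quoted above:

```python
import numpy as np

def prob_fail(n, true_sd, acc_sd=1.0, n_sim=20000, seed=0):
    # Probability that the upper limit of the 90% CI for the true SD,
    # U = s * sqrt((n-1) / chi2_{0.05, n-1}), reaches the acceptable SD,
    # i.e. the precision claim cannot be validated.
    rng = np.random.default_rng(seed)
    df = n - 1
    # 5th percentile of chi-square(df), estimated by simulation.
    q05 = np.quantile(rng.chisquare(df, size=200000), 0.05)
    # Sampling distribution of the observed variance s^2.
    s2 = true_sd ** 2 * rng.chisquare(df, size=n_sim) / df
    upper = np.sqrt(s2 * df / q05)
    return np.mean(upper >= acc_sd)

p40 = prob_fail(n=40, true_sd=0.8)    # abstract reports ~42%
p100 = prob_fail(n=100, true_sd=0.8)  # abstract reports ~7%
```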
(No) Eternal inflation and precision Higgs physics
Arkani-Hamed, Nima; Dubovsky, Sergei; Senatore, Leonardo; Villadoro, Giovanni
2008-01-01
Even if nothing but a light Higgs is observed at the LHC, suggesting that the Standard Model is unmodified up to scales far above the weak scale, Higgs physics can yield surprises of fundamental significance for cosmology. As has long been known, the Standard Model vacuum may be metastable for low enough Higgs mass, but a specific value of the decay rate holds special significance: for a very narrow window of parameters, our Universe has not yet decayed but the current inflationary period cannot be future-eternal. Determining whether we are in this window requires exquisite but achievable experimental precision, with a measurement of the Higgs mass to 0.1 GeV at the LHC, the top mass to 60 MeV at a linear collider, as well as an improved determination of α_s by an order of magnitude on the lattice. If the parameters are observed to lie in this special range, particle physics will establish that the future of our Universe is a global big crunch, without harboring pockets of eternal inflation, strongly suggesting that eternal inflation is censored by the fundamental theory. This conclusion could be drawn even more sharply if metastability with the appropriate decay rate is found in the MSSM, where the physics governing the instability can be directly probed at the TeV scale
The Precision Field Lysimeter Concept
Fank, J.
2009-04-01
The understanding and interpretation of leaching processes have improved significantly during the past decades. Unlike laboratory experiments, which are mostly performed under very controlled conditions (e.g. homogeneous, uniform packing of pre-treated test material, saturated steady-state flow conditions, and controlled uniform hydraulic conditions), lysimeter experiments generally simulate actual field conditions. Lysimeters may be classified according to different criteria such as the type of soil block used (monolithic or reconstructed), drainage (by gravity or vacuum, or a water table may be maintained), or weighing versus non-weighing lysimeters. In 2004 experimental investigations were set up to assess the impact of different farming systems on groundwater quality of the shallow floodplain aquifer of the river Mur in Wagna (Styria, Austria). The sediment is characterized by a thin layer (30 - 100 cm) of sandy Dystric Cambisol and underlying gravel and sand. Three precisely weighing equilibrium tension block lysimeters have been installed in agricultural test fields to compare water flow and solute transport under (i) organic farming, (ii) conventional low input farming and (iii) extensification by mulching grass. Specific monitoring equipment is used to reduce the well-known shortcomings of lysimeter investigations: The lysimeter core is excavated as an undisturbed monolithic block (circular, 1 m² surface area, 2 m depth) to prevent destruction of the natural soil structure and pore system. Tracer experiments have been performed to investigate the occurrence of artificial preferential flow and transport along the walls of the lysimeters. The results show that such effects can be neglected. Precisely weighing load cells are used to constantly determine the weight loss of the lysimeter due to evaporation and transpiration and to measure different forms of precipitation. The accuracy of the weighing apparatus is 0.05 kg, or 0.05 mm water equivalent.
Krivonos, S.O.; Sorin, A.S.
1994-06-01
We show that the Zamolodchikov and Polyakov-Bershadsky nonlinear algebras W_3 and W_3^(2) can be embedded as subalgebras into linear algebras with a finite set of currents. Using these linear algebras we find new field realizations of W_3^(2) and W_3 which could be a starting point for constructing new versions of W-string theories. We also reveal a number of hidden relationships between W_3 and W_3^(2). We conjecture that similar linear algebras can exist for other W-algebras as well. (author). 10 refs
Schneider, Hans
1989-01-01
Linear algebra is one of the central disciplines in mathematics. A student of pure mathematics must know linear algebra if he is to continue with modern algebra or functional analysis. Much of the mathematics now taught to engineers and physicists requires it. This well-known and highly regarded text makes the subject accessible to undergraduates with little mathematical experience. Written mainly for students in physics, engineering, economics, and other fields outside mathematics, the book gives the theory of matrices and applications to systems of linear equations, as well as many related t
Linearity in Process Languages
Nygaard, Mikkel; Winskel, Glynn
2002-01-01
The meaning and mathematical consequences of linearity (managing without a presumed ability to copy) are studied for a path-based model of processes which is also a model of affine-linear logic. This connection yields an affine-linear language for processes, automatically respecting open-map bisimulation, in which a range of process operations can be expressed. An operational semantics is provided for the tensor fragment of the language. Different ways to make assemblies of processes lead to different choices of exponential, some of which respect bisimulation.
Amir-Moez, A R; Sneddon, I N
1962-01-01
Elements of Linear Space is a detailed treatment of the elements of linear spaces, including real spaces with no more than three dimensions and complex n-dimensional spaces. The geometry of conic sections and quadric surfaces is considered, along with algebraic structures, especially vector spaces and transformations. Problems drawn from various branches of geometry are given.Comprised of 12 chapters, this volume begins with an introduction to real Euclidean space, followed by a discussion on linear transformations and matrices. The addition and multiplication of transformations and matrices a
Weisberg, Sanford
2013-01-01
Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." - International Statistical Institute. The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus
Precision cosmology and the landscape
Bousso, Raphael; Bousso, Raphael
2006-01-01
After reviewing the cosmological constant problem--why is Lambda not huge?--I outline the two basic approaches that had emerged by the late 1980s, and note that each made a clear prediction. Precision cosmological experiments now indicate that the cosmological constant is nonzero. This result strongly favors the environmental approach, in which vacuum energy can vary discretely among widely separated regions in the universe. The need to explain this variation from first principles constitutes an observational constraint on fundamental theory. I review arguments that string theory satisfies this constraint, as it contains a dense discretuum of metastable vacua. The enormous landscape of vacua calls for novel, statistical methods of deriving predictions, and it prompts us to reexamine our description of spacetime on the largest scales. I discuss the effects of cosmological dynamics, and I speculate that weighting vacua by their entropy production may allow for prior-free predictions that do not resort to explicitly anthropic arguments
Beetham, C G
1999-01-01
For the past decade, the Global Positioning System (GPS) has been used to provide precise time, frequency and position co-ordinates world-wide. Recently, equipment has become available specialising in providing extremely accurate timing information, referenced to Coordinated Universal Time (UTC). This feature has been used at CERN to provide time-of-day information for systems installed in the Proton Synchrotron (PS), Super Proton Synchrotron (SPS) and Large Electron Positron (LEP) machines. The different systems are described, as well as the planned developments, particularly with respect to optical transmission and the Inter-Range Instrumentation Group IRIG-B standard, for future use in the Large Hadron Collider (LHC).
Collisional damping of Langmuir waves in the collisionless limit
Auerbach, S.P.
1977-01-01
Linear Langmuir wave damping by collisions is studied in the limit of collision frequency ν approaching zero. In this limit, collisions are negligible, except in a region in velocity space, the boundary layer, centered about the phase velocity. If κ, the ratio of the collisional equilibration time in the boundary layer to the Landau damping time, is small, the boundary layer width scales as ν^(1/3), and the perturbed distribution function scales as ν^(-1/3). The damping rate is thus independent of ν, although essentially all the damping occurs in the collision-dominated boundary layer. Solution of the Fokker-Planck equation shows that the damping rate is precisely the Landau (collisionless) rate. The damping rate is independent of κ, although the boundary layer thickness is not.
A Precision Measurement of the Spin Structure Function g₂(p)
Benmouna, N
2004-01-05
The spin structure function g₂(x,Q²) and the virtual photon asymmetry A₂(x,Q²) were measured for the proton using deep inelastic scattering. The experiment was conducted at the Stanford Linear Accelerator Center (SLAC), where longitudinally polarized electrons at 29.1 and 32.3 GeV were scattered from a transversely polarized NH₃ target. Large data sets were accumulated using three independent spectrometers covering a kinematic range 0.02 ≤ x ≤ 0.8 and 1 ≤ Q² ≤ 20 (GeV/c)². These are the first data precise enough to distinguish between current models for the proton. The structure function g₂^p was found to be reasonably consistent with the twist-2 Wandzura-Wilczek calculation. The Q² dependence of g₂ approximately follows that of g₂^WW, although the data are not precise enough to rule out no Q² dependence. The absolute value of A₂^p was found to be significantly smaller than the Soffer limit over the measured range. The virtual photon asymmetry A₂ was also found to be inconsistent with zero over much of the measured range.
Fitoussi, L.
1987-12-01
The dose limit is defined to be the level of harmfulness which must not be exceeded, so that an activity can be exercised in a regular manner without running a risk unacceptable to man and the society. The paper examines the effects of radiation categorised into stochastic and non-stochastic. Dose limits for workers and the public are discussed
Callier, Frank M.; Desoer, Charles A.
1991-01-01
The aim of this book is to provide a systematic and rigorous access to the main topics of linear state-space system theory in both the continuous-time case and the discrete-time case; and the I/O description of linear systems. The main thrusts of the work are the analysis of system descriptions and derivations of their properties, LQ-optimal control, state feedback and state estimation, and MIMO unity-feedback systems.
Cosmological perturbations beyond linear order
CERN. Geneva
2013-01-01
Cosmological perturbation theory is the standard tool to understand the formation of the large-scale structure in the Universe. However, its degree of applicability is limited by the growth of the amplitude of the matter perturbations with time. This problem can be tackled by using N-body simulations or analytical techniques that go beyond the linear calculation. In my talk, I'll summarise some recent efforts in the latter direction that ameliorate the bad convergence of the standard perturbative expansion. The new techniques allow better analytical control of observables (such as the matter power spectrum) over scales very relevant to understanding the expansion history and formation of structure in the Universe.
Development of sensor guided precision sprayers
Nieuwenhuizen, A.T.; Zande, van de J.C.
2013-01-01
Sensor guided precision sprayers were developed to automate the spray process with a focus on emission reduction and identical or increased efficacy, with the precision agriculture concept in mind. Within the project “Innovations2” sensor guided precision sprayers were introduced to leek,
Precision production: enabling deterministic throughput for precision aspheres with MRF
Maloney, Chris; Entezarian, Navid; Dumas, Paul
2017-10-01
Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer being used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.
Updating Linear Schedules with Lowest Cost: a Linear Programming Model
Biruk, Sławomir; Jaśkowski, Piotr; Czarnigowska, Agata
2017-10-01
Many civil engineering projects involve sets of tasks repeated in a predefined sequence in a number of work areas along a particular route. A useful graphical representation of schedules of such projects is time-distance diagrams that clearly show what process is conducted at a particular point in time and at a particular location. With repetitive tasks, the quality of project performance is conditioned by the ability of the planner to optimize workflow by synchronizing the works and resources, which usually means that resources are planned to be continuously utilized. However, construction processes are prone to risks, and a fully synchronized schedule may expire if a disturbance (bad weather, machine failure etc.) affects even one task. In such cases, works need to be rescheduled, and another optimal schedule should be built for the changed circumstances. This typically means that, to meet the fixed completion date, durations of operations have to be reduced. A number of measures are possible to achieve such reduction: working overtime, employing more resources or relocating resources from less to more critical tasks, but they all come at a considerable cost and affect the whole project. The paper investigates the problem of selecting the measures that reduce durations of tasks of a linear project so that the cost of these measures is kept to the minimum and proposes an algorithm that could be applied to find optimal solutions as the need to reschedule arises. Considering that civil engineering projects, such as road building, usually involve fewer process types than construction projects, the complexity of scheduling problems is lower, and precise optimization algorithms can be applied. Therefore, the authors put forward a linear programming model of the problem and illustrate its principle of operation with an example.
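The abstract does not reproduce the authors' linear programming model. Purely as an illustration of the underlying idea, the sketch below picks duration reductions ("crashing") at minimum total cost so that a fixed completion date is met, using scipy.optimize.linprog; the task durations, crash limits, unit crashing costs, and deadline are all invented, not taken from the paper.

```python
# Illustrative schedule-crashing LP (all numbers are hypothetical).
import numpy as np
from scipy.optimize import linprog

durations = np.array([10.0, 8.0, 12.0, 6.0])        # current task durations (days)
crash_cost = np.array([300.0, 500.0, 200.0, 400.0])  # cost per day saved, per task
max_crash = np.array([3.0, 2.0, 4.0, 1.0])           # feasible reduction per task (days)
deadline = 30.0                                      # fixed completion date (days)

# Decision variables x_i = days cut from task i.
# Minimize total crashing cost subject to:
#   sum(durations - x) <= deadline   <=>   -sum(x) <= deadline - sum(durations)
res = linprog(
    c=crash_cost,
    A_ub=[-np.ones(len(durations))],
    b_ub=[deadline - durations.sum()],
    bounds=list(zip(np.zeros(len(durations)), max_crash)),
)
print(res.x, res.fun)
```

With these numbers the solver recovers the required six days by cutting the cheapest tasks first (four days from the 200-per-day task, two from the 300-per-day task); a real model would add precedence and continuity constraints between repetitive tasks.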
Morphologies of precise polyethylene-based acid copolymers and ionomers
Buitrago, C. Francisco
Acid copolymers and ionomers are polymers that contain a small fraction of covalently bound acidic or ionic groups, respectively. For the specific case of polyethylene (PE), acid and ionic pendants enhance many of the physical properties such as toughness, adhesion and rheological properties. These improved properties result from microphase separated aggregates of the polar pendants in the non-polar PE matrix. Despite the widespread industrial use of these materials, rigorous chemical structure-morphology-property relationships remain elusive due to the inevitable structural heterogeneities in the historically-available acid copolymers and ionomers. Recently, precise acid copolymers and ionomers were successfully synthesized by acyclic diene metathesis (ADMET) polymerization. These precise materials are linear, high molecular weight PEs with pendant acid or ionic functional groups separated by a precisely controlled number of carbon atoms. The morphologies of nine precise acid copolymers and eleven precise ionomers were investigated by X-ray scattering, solid-state 13C nuclear magnetic resonance (NMR) and differential scanning calorimetry (DSC). For comparison, the morphologies of linear PEs with pseudo-random placement of the pendant groups were also studied. Previous studies of precise copolymers with acrylic acid (AA) found that the microstructural precision produces a new morphology in which PE crystals drive the acid aggregates into layers perpendicular to the chain axes and presumably at the interface between crystalline and amorphous phases. In this dissertation, a second new morphology for acid copolymers is identified in which the aggregates arrange on cubic lattices. A cubic morphology was first observed at room and elevated temperatures for a copolymer functionalized with two phosphonic acid (PA) groups on every 21st carbon atom. The cubic lattice has been identified as face-centered cubic (FCC). Overall, three morphology types have been
Precision medicine in pediatric oncology: Lessons learned and next steps
Mody, Rajen J.; Prensner, John R.; Everett, Jessica; Parsons, D. Williams; Chinnaiyan, Arul M.
2017-01-01
The maturation of genomic technologies has enabled new discoveries in disease pathogenesis as well as new approaches to patient care. In pediatric oncology, patients may now receive individualized genomic analysis to identify molecular aberrations of relevance for diagnosis and/or treatment. In this context, several recent clinical studies have begun to explore the feasibility and utility of genomics-driven precision medicine. Here, we review the major developments in this field, discuss current limitations, and explore aspects of the clinical implementation of precision medicine, which lack consensus. Lastly, we discuss ongoing scientific efforts in this arena, which may yield future clinical applications. PMID:27748023
Physics with e+e- Linear Colliders
Barklow, Timothy L
2003-01-01
We describe the physics potential of e+e- linear colliders in this report. These machines are planned to operate in the first phase at a center-of-mass energy of 500 GeV, before being scaled up to about 1 TeV. In the second phase of the operation, a final energy of about 2 TeV is expected. The machines will allow us to perform precision tests of the heavy particles in the Standard Model, the top quark and the electroweak bosons. They are ideal facilities for exploring the properties of Higgs particles, in particular in the intermediate mass range. New vector bosons and novel matter particles in extended gauge theories can be searched for and studied thoroughly. The machines provide unique opportunities for the discovery of particles in supersymmetric extensions of the Standard Model, the spectrum of Higgs particles, the supersymmetric partners of the electroweak gauge and Higgs bosons, and of the matter particles. High precision analyses of their properties and interactions will allow for extrapolations to energy scales close to the Planck scale where gravity becomes significant. In alternative scenarios, like compositeness models, novel matter particles and interactions can be discovered and investigated in the energy range above the existing colliders up to the TeV scale. Whatever scenario is realized in Nature, the discovery potential of e+e- linear colliders and the high precision with which the properties of particles and their interactions can be analyzed define an exciting physics programme complementary to hadron machines.
Ingram, WT
2012-01-01
Inverse limits provide a powerful tool for constructing complicated spaces from simple ones. They also turn the study of a dynamical system consisting of a space and a self-map into a study of a (likely more complicated) space and a self-homeomorphism. In four chapters along with an appendix containing background material the authors develop the theory of inverse limits. The book begins with an introduction through inverse limits on [0,1] before moving to a general treatment of the subject. Special topics in continuum theory complete the book. Although it is not a book on dynamics, the influen
Toward precision holography with supersymmetric Wilson loops
Faraggi, Alberto [Instituto de Física, Pontificia Universidad Católica de Chile,Casilla 306, Santiago (Chile); Zayas, Leopoldo A. Pando [The Abdus Salam International Centre for Theoretical Physics,Strada Costiera 11, 34014 Trieste (Italy); Michigan Center for Theoretical Physics, Department of Physics,University of Michigan, Ann Arbor, MI 48109 (United States); Silva, Guillermo A. [Instituto de Física de La Plata - CONICET & Departamento de Física - UNLP,C.C. 67, 1900 La Plata (Argentina); Trancanelli, Diego [Institute of Physics, University of São Paulo,05314-970 São Paulo (Brazil)
2016-04-11
We consider certain 1/4 BPS Wilson loop operators in SU(N) N=4 supersymmetric Yang-Mills theory, whose expectation value can be computed exactly via supersymmetric localization. Holographically, these operators are mapped to fundamental strings in AdS₅×S⁵. The string on-shell action reproduces the large N and large coupling limit of the gauge theory expectation value and, according to the AdS/CFT correspondence, there should also be a precise match between subleading corrections to these limits. We perform a test of such a match at next-to-leading order in string theory, by deriving the spectrum of quantum fluctuations around the classical string solution and by computing the corresponding 1-loop effective action. We discuss in detail the supermultiplet structure of the fluctuations. To remove a possible source of ambiguity in the ghost zero mode measure, we compare the 1/4 BPS configuration with the 1/2 BPS one, dual to a circular Wilson loop. We find a discrepancy between the string theory result and the gauge theory prediction, confirming a previous result in the literature. We are able to track the modes from which this discrepancy originates, as well as the modes that by themselves would give the expected result.
Precise Point Positioning Using Triple GNSS Constellations in Various Modes
Akram Afifi
2016-05-01
This paper introduces a new dual-frequency precise point positioning (PPP) model, which combines the observations from three different global navigation satellite system (GNSS) constellations, namely GPS, Galileo, and BeiDou. Combining measurements from different GNSS systems introduces additional biases, including inter-system bias and hardware delays, which require rigorous modelling. Our model is based on the un-differenced and between-satellite single-difference (BSSD) linear combinations. The BSSD linear combination cancels out some receiver-related biases, including receiver clock error and the non-zero initial phase bias of the receiver oscillator. Forming the BSSD linear combination requires a reference satellite, which can be selected from any of the GPS, Galileo, and BeiDou systems. In this paper three BSSD scenarios are tested; each considers a reference satellite from a different GNSS constellation. Natural Resources Canada’s GPSPace PPP software is modified to enable a combined GPS, Galileo, and BeiDou PPP solution and to handle the newly introduced biases. A total of four data sets collected at four different IGS stations are processed to verify the developed PPP model. Precise satellite orbit and clock products from the International GNSS Service Multi-GNSS Experiment (IGS-MGEX) network are used to correct the GPS, Galileo, and BeiDou measurements in the post-processing PPP mode. A real-time PPP solution is also obtained, which is referred to as RT-PPP in the sequel, through the use of the IGS real-time service (RTS) for satellite orbit and clock corrections. However, only GPS and Galileo observations are used for the RT-PPP solution, as the RTS-IGS satellite products are not presently available for the BeiDou system. All post-processed and real-time PPP solutions are compared with the traditional un-differenced GPS-only counterparts. It is shown that combining the GPS, Galileo, and BeiDou observations in the post-processing mode improves the
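The cancellation that motivates the BSSD combination can be demonstrated in a few lines: differencing two satellites' observations taken at the same receiver removes the common receiver clock term. This is a simplified sketch with synthetic ranges and clock offsets; a real PPP model also carries tropospheric, ionospheric, and hardware-delay terms that are omitted here.

```python
# Between-satellite single difference (BSSD): receiver clock cancellation.
# All numeric values below are synthetic, chosen only for illustration.
c = 299_792_458.0  # speed of light, m/s

def pseudorange(geom_range_m, recv_clock_s, sat_clock_s):
    """Simplified pseudorange: geometry + c*(receiver clock - satellite clock)."""
    return geom_range_m + c * (recv_clock_s - sat_clock_s)

recv_dt = 1.5e-6  # receiver clock offset (s), common to both observations

# Reference satellite and a second satellite (hypothetical geometry/clocks)
p_ref = pseudorange(21_000_000.0, recv_dt, 2.0e-7)
p_sat = pseudorange(23_500_000.0, recv_dt, 5.0e-7)

# BSSD observable: the c*recv_dt term appears in both and cancels exactly,
# leaving only the geometry difference and the satellite clock difference.
bssd = p_sat - p_ref
print(bssd)
```

Because the receiver clock term drops out, the same BSSD value is obtained no matter what the receiver clock offset is, which is why the combination needs no receiver clock parameter in the estimation.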
Precision measurements with atom interferometry
Schubert, Christian; Abend, Sven; Schlippert, Dennis; Ertmer, Wolfgang; Rasel, Ernst M.
2017-04-01
Interferometry with matter waves enables precise measurements of rotations, accelerations, and differential accelerations [1-5]. This is exploited for determining fundamental constants [2], in fundamental science as e.g. testing the universality of free fall [3], and is applied for gravimetry [4], and gravity gradiometry [2,5]. At the Institut für Quantenoptik in Hannover, different approaches are pursued. A large scale device is designed and currently being set up to investigate the gain in precision for gravimetry, gradiometry, and fundamental tests on large baselines [6]. For field applications, a compact and transportable device is being developed. Its key feature is an atom chip source providing a collimated high flux of atoms which is expected to mitigate systematic uncertainties [7,8]. The atom chip technology and miniaturization benefits from microgravity experiments in the drop tower in Bremen and sounding rocket experiments [8,9] which act as pathfinders for space borne operation [10]. This contribution will report about our recent results. The presented work is supported by the CRC 1227 DQ-mat, the CRC 1128 geo-Q, the RTG 1729, the QUEST-LFS, and by the German Space Agency (DLR) with funds provided by the Federal Ministry of Economic Affairs and Energy (BMWi) due to an enactment of the German Bundestag under Grant No. DLR 50WM1552-1557. [1] P. Berg et al., Phys. Rev. Lett., 114, 063002, 2015; I. Dutta et al., Phys. Rev. Lett., 116, 183003, 2016. [2] J. B. Fixler et al., Science 315, 74 (2007); G. Rosi et al., Nature 510, 518, 2014. [3] D. Schlippert et al., Phys. Rev. Lett., 112, 203002, 2014. [4] A. Peters et al., Nature 400, 849, 1999; A. Louchet-Chauvet et al., New J. Phys. 13, 065026, 2011; C. Freier et al., J. of Phys.: Conf. Series 723, 012050, 2016. [5] J. M. McGuirk et al., Phys. Rev. A 65, 033608, 2002; P. Asenbaum et al., arXiv:1610.03832. [6] J. Hartwig et al., New J. Phys. 17, 035011, 2015. [7] H. Ahlers et al., Phys. Rev. Lett. 116, 173601
High precision neutron polarization for PERC
Klauser, C.
2013-01-01
The decay of the free neutron into a proton, an electron and an anti-electron neutrino offers a simple system to study the semi-leptonic weak decay. High precision measurements of angular correlation coefficients of this decay provide the opportunity to test the standard model on the low energy frontier. The Proton Electron Radiation Channel PERC is part of a new generation of experiments pushing the accuracy of such an angular correlation coefficient measurement towards 10⁻⁴. Past experiments have been limited to an accuracy of 10⁻³, with uncertainties on the neutron polarization as one of the leading systematic errors. This thesis focuses on the development of a stable, highly precise neutron polarization for a large, divergent cold neutron beam. A diagnostic tool that provides polarization higher than 99.99% and analyzes with an accuracy of 10⁻⁴, the Opaque Test Bench, is presented and validated. It consists of two highly opaque polarized helium cells. The Opaque Test Bench reveals depolarizing effects in polarizing supermirrors commonly used for polarization in neutron decay experiments. These effects are investigated in detail. They are due to imperfect lateral magnetization in supermirror layers and can be minimized by significantly increased magnetizing fields and low incidence angle and supermirror factor m. A subsequent test in the crossed (X-SM) geometry demonstrated polarizations up to 99.97% from supermirrors only, improving neutron polarization with supermirrors by an order of magnitude. The thesis also discusses other neutron optical components of the PERC beamline: Monte-Carlo simulations of the beamline under consideration of the primary guide are carried out. In addition, calculation shows that PERC would statistically profit from an installation at the European Spallation Source. Furthermore, beamline components were tested. A radio-frequency spin flipper was confirmed to work with an efficiency higher than 0.9999.
Polarized electron sources for linear colliders
Clendenin, J.E.; Ecklund, S.D.; Miller, R.H.; Schultz, D.C.; Sheppard, J.C.
1992-07-01
Linear colliders require high peak current beams with low duty factors. Several methods to produce polarized e- beams for accelerators have been developed. The SLC, the first linear collider, utilizes a photocathode gun with a GaAs cathode. Although photocathode sources are probably the only practical alternative for the next generation of linear colliders, several problems remain to be solved, including high voltage breakdown which poisons the cathode, charge limitations that are associated with the condition of the semiconductor cathode, and a relatively low polarization of ≤50%. Methods to solve or at least greatly reduce the impact of each of these problems are at hand.
Correlated Levy Noise in Linear Dynamical Systems
Srokowski, T.
2011-01-01
Linear dynamical systems, driven by a non-white noise which has the Levy distribution, are analysed. Noise is modelled by a specific stochastic process which is defined by the Langevin equation with a linear force and the Levy distributed symmetric white noise. Correlation properties of the process are discussed. The Fokker-Planck equation driven by that noise is solved. Distributions have the Levy shape and their width, for a given time, is smaller than for processes in the white noise limit. Applicability of the adiabatic approximation in the case of the linear force is discussed. (author)
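As a rough numerical companion to this description, a linear Langevin equation driven by symmetric Lévy-stable white noise can be integrated with an Euler-Maruyama scheme. All parameter values below are assumed for illustration, not taken from the paper; note that stable increments scale as dt^(1/alpha) rather than the Gaussian sqrt(dt).

```python
# Euler-Maruyama sketch of dx = -gamma * x * dt + dL_alpha,
# with dL_alpha symmetric (beta = 0) alpha-stable increments.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha, gamma = 1.5, 1.0          # stability index of the noise, linear-force strength
dt, n_steps, n_paths = 0.01, 200, 100

# Pre-draw all stable increments; their scale grows as dt**(1/alpha).
dL = levy_stable.rvs(alpha, 0.0, scale=dt ** (1.0 / alpha),
                     size=(n_steps, n_paths), random_state=rng)

x = np.zeros(n_paths)
for step in range(n_steps):
    # linear restoring force plus heavy-tailed kick
    x += -gamma * x * dt + dL[step]

print(np.median(x))  # symmetric noise: the ensemble stays centered near zero
```

The stationary ensemble keeps the heavy-tailed Lévy shape, consistent with the abstract's statement that the Fokker-Planck solutions for this noise remain Lévy-distributed.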
The Age of Precision Cosmology
Chuss, David T.
2012-01-01
In the past two decades, our understanding of the evolution and fate of the universe has increased dramatically. This "Age of Precision Cosmology" has been ushered in by measurements that have both elucidated the details of the Big Bang cosmology and set the direction for future lines of inquiry. Our universe appears to consist of 5% baryonic matter; 23% of the universe's energy content is dark matter, which is responsible for the observed structure in the universe; and 72% of the energy density is so-called "dark energy" that is currently accelerating the expansion of the universe. In addition, our universe has been measured to be geometrically flat to 1%. These observations and related details of the Big Bang paradigm have hinted that the universe underwent an epoch of accelerated expansion known as "inflation" early in its history. In this talk, I will review the highlights of modern cosmology, focusing on the contributions made by measurements of the cosmic microwave background, the faint afterglow of the Big Bang. I will also describe new instruments designed to measure the polarization of the cosmic microwave background in order to search for evidence of cosmic inflation.
High precision redundant robotic manipulator
Young, K.K.D.
1998-01-01
A high precision redundant robotic manipulator for overcoming constraints imposed by obstacles or by a highly congested work space is disclosed. One embodiment of the manipulator has four degrees of freedom and another embodiment has seven degrees of freedom. Each of the embodiments utilizes a first selective compliant assembly robot arm (SCARA) configuration to provide high stiffness in the vertical plane and a second SCARA configuration to provide high stiffness in the horizontal plane. The seven degree of freedom embodiment also utilizes kinematic redundancy to provide the capability of avoiding obstacles that lie between the base of the manipulator and the end effector or link of the manipulator. These additional three degrees of freedom are added at the wrist link of the manipulator to provide pitch, yaw and roll. The seven degrees of freedom embodiment uses one revolute joint per degree of freedom. For each of the revolute joints, a harmonic gear coupled to an electric motor is introduced, and together with properly designed servo controllers provides an end point repeatability of less than 10 microns.
Studying antimatter with laser precision
Katarina Anthony
2012-01-01
The next generation of antihydrogen trapping devices, ALPHA-2, is moving into CERN’s Antiproton Decelerator (AD) hall. This brand-new experiment will allow the ALPHA collaboration to conduct studies of antimatter with greater precision. ALPHA spokesperson Jeffrey Hangst was recently awarded a grant by the Carlsberg Foundation, which will be used to purchase equipment for the new experiment. A 3-D view of the new magnet (in blue) and cryostat. The red lines show the paths of laser beams. LHC-type current leads for the superconducting magnets are visible on the top-right of the image. The ALPHA collaboration has been working to trap and study antihydrogen since 2006. Using antiprotons provided by CERN’s Antiproton Decelerator (AD), ALPHA was the first experiment to trap antihydrogen and to hold it long enough to study its properties. “The new ALPHA-2 experiment will use integrated lasers to probe the trapped antihydrogen,” explains Jeffrey Hangst, ALP...
Blyth, T S
2002-01-01
Most of the introductory courses on linear algebra develop the basic theory of finite dimensional vector spaces, and in so doing relate the notion of a linear mapping to that of a matrix. Generally speaking, such courses culminate in the diagonalisation of certain matrices and the application of this process to various situations. Such is the case, for example, in our previous SUMS volume Basic Linear Algebra. The present text is a continuation of that volume, and has the objective of introducing the reader to more advanced properties of vector spaces and linear mappings, and consequently of matrices. For readers who are not familiar with the contents of Basic Linear Algebra we provide an introductory chapter that consists of a compact summary of the prerequisites for the present volume. In order to consolidate the student's understanding we have included a large number of illustrative and worked examples, as well as many exercises that are strategically placed throughout the text. Solutions to the ex...
Precisely Tailored DNA Nanostructures and their Theranostic Applications.
Zhu, Bing; Wang, Lihua; Li, Jiang; Fan, Chunhai
2017-12-01
A critical challenge in nanotechnology is the limited precision and controllability of the structural parameters, which brings about concerns in uniformity, reproducibility and performance. Self-assembled DNA nanostructures, as a newly emerged type of nano-biomaterials, possess low-nanometer precision, excellent programmability and addressability. They can precisely arrange various molecules and materials to form spatially ordered complexes, resulting in unambiguous physical or chemical properties. Because of these properties, DNA nanostructures have shown great promise in numerous biomedical theranostic applications. In this account, we briefly review the history of and advances in the construction of DNA nanoarchitectures and superstructures with accurate structural parameters. We focus on recent progress in exploiting these DNA nanostructures as platforms for quantitative biosensing, intracellular diagnosis, imaging, and smart drug delivery. We also discuss key challenges in practical applications. © 2017 The Chemical Society of Japan & Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Advancing Precision Nuclear Medicine and Molecular Imaging for Lymphoma.
Wright, Chadwick L; Maly, Joseph J; Zhang, Jun; Knopp, Michael V
2017-01-01
PET with fluorodeoxyglucose F-18 (¹⁸F FDG-PET) is a meaningful biomarker for the detection, targeted biopsy, and treatment of lymphoma. This article reviews the evolution of ¹⁸F FDG-PET as a putative biomarker for lymphoma and addresses the current capabilities, challenges, and opportunities to enable precision medicine practices for lymphoma. Precision nuclear medicine is driven by new imaging technologies and methodologies to more accurately detect malignant disease. Although quantitative assessment of response is limited, such technologies will enable a more precise metabolic mapping with much higher definition image detail and thus may make it a robust and valid quantitative response assessment methodology. Copyright © 2016 Elsevier Inc. All rights reserved.
Progress on $e^{+}e^{-}$ linear colliders
CERN. Geneva. Audiovisual Unit; Siemann, Peter
2002-01-01
Physics issues. The physics program will be reviewed for e+e- linear colliders in the TeV energy range. At these prospective facilities central issues of particle physics can be addressed: the problem of mass, unification and the structure of space-time. In this context the two lectures will focus on analyses of the Higgs mechanism, supersymmetry and extra space dimensions. Moreover, high-precision studies of the top quark and the gauge boson sector will be discussed. Combined with LHC results, a comprehensive picture can be developed of physics at the electroweak scale and beyond. Designs and technologies (R. Siemann - 29, 30, 31 May) The physics and technologies of high energy linear colliders will be reviewed. Fundamental concepts of linear colliders will be introduced. They will be discussed both in the context of the Stanford Linear Collider, where many ideas changed and new ones were developed in response to operational experience, and in light of the requirements for future linear colliders. The different approaches for reac...
Fractional Diffusion Limit for Collisional Kinetic Equations
Mellet, Antoine; Mischler, Stéphane; Mouhot, Clément
2010-01-01
This paper is devoted to diffusion limits of linear Boltzmann equations. When the equilibrium distribution function is a Maxwellian distribution, it is well known that for an appropriate time scale, the small mean free path limit gives rise to a
Strategy for Realizing High-Precision VUV Spectro-Polarimeter
Ishikawa, R.; Narukage, N.; Kubo, M.; Ishikawa, S.; Kano, R.; Tsuneta, S.
2014-12-01
Spectro-polarimetric observations in the vacuum ultraviolet (VUV) range are currently the only means to measure magnetic fields in the upper chromosphere and transition region of the solar atmosphere. The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) aims to measure linear polarization at the hydrogen Lyman-α line (121.6 nm). This measurement requires a polarization sensitivity better than 0.1%, which is unprecedented in the VUV range. We here present a strategy with which to realize such high-precision spectro-polarimetry. This involves the optimization of instrument design, testing of optical components, extensive analyses of polarization errors, polarization calibration of the instrument, and calibration with onboard data. We expect that this strategy will aid the development of other advanced high-precision polarimeters in the UV as well as in other wavelength ranges.
Mamyrin, B.A.; Shmikk, D.V.
1979-01-01
A description and operating principle of a linear mass reflectron with a V-form ion trajectory - a new non-magnetic time-of-flight mass spectrometer with high resolution - are presented. The ion-optical system of the device consists of an ion source with electron-impact ionization, accelerating gaps, reflector gaps, a drift space and an ion detector. Ions move in the linear mass reflectron along trajectories parallel to the axis of the analyzer chamber. The results of investigations of the experimental device are given. With an ion drift length of 0.6 m the device resolution is 1200 with respect to the peak width at half-height. Small-sized mass spectrometric transducers with high resolution and sensitivity may be designed on the basis of the linear mass reflectron principle.
Olver, Peter J
2018-01-01
This textbook develops the essential tools of linear algebra, with the goal of imparting technique alongside contextual understanding. Applications go hand-in-hand with theory, each reinforcing and explaining the other. This approach encourages students to develop not only the technical proficiency needed to go on to further study, but an appreciation for when, why, and how the tools of linear algebra can be used across modern applied mathematics. Providing an extensive treatment of essential topics such as Gaussian elimination, inner products and norms, and eigenvalues and singular values, this text can be used for an in-depth first course, or an application-driven second course in linear algebra. In this second edition, applications have been updated and expanded to include numerical methods, dynamical systems, data analysis, and signal processing, while the pedagogical flow of the core material has been improved. Throughout, the text emphasizes the conceptual connections between each application and the un...
Banach, S
1987-01-01
This classic work by the late Stefan Banach has been translated into English so as to reach a yet wider audience. It contains the basics of the algebra of operators, concentrating on the study of linear operators, which corresponds to that of the linear forms a1x1 + a2x2 + ... + anxn of algebra. The book gathers results concerning linear operators defined in general spaces of a certain kind, principally in Banach spaces, examples of which are: the space of continuous functions, that of the pth-power-summable functions, Hilbert space, etc. The general theorems are interpreted in various mathematical areas, such as group theory, differential equations, integral equations, equations with infinitely many unknowns, functions of a real variable, summation methods and orthogonal series. A new fifty-page section ("Some Aspects of the Present Theory of Banach Spaces") complements this important monograph.
Høskuldsson, Agnar
1996-01-01
Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. First the basic problems in determining the dimension of linear models are discussed. Then each of the eight measures is treated. The results are illustrated by examples.
Linear programming using Matlab
Ploskas, Nikolaos
2017-01-01
This book offers a theoretical and computational presentation of a variety of linear programming algorithms and methods with an emphasis on the revised simplex method and its components. A theoretical background and mathematical formulation is included for each algorithm as well as comprehensive numerical examples and corresponding MATLAB® code. The MATLAB® implementations presented in this book are sophisticated and allow users to find solutions to large-scale benchmark linear programs. Each algorithm is followed by a computational study on benchmark problems that analyze the computational behavior of the presented algorithms. As a solid companion to existing algorithmic-specific literature, this book will be useful to researchers, scientists, mathematical programmers, and students with a basic knowledge of linear algebra and calculus. The clear presentation enables the reader to understand and utilize all components of simplex-type methods, such as presolve techniques, scaling techniques, pivoting ru...
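The simplex-type methods the book develops rest on a basic geometric fact worth seeing concretely: the optimum of a linear program is attained at a vertex of the feasible polytope. The book's implementations are in MATLAB®; the sketch below is an illustrative Python stand-in that brute-forces the vertices of a small textbook-style problem (the problem data are assumptions for the example, not taken from the book, and real solvers pivot between vertices rather than enumerating them).

```python
import itertools
import numpy as np

# Toy LP (illustrative data): maximize c @ x subject to A @ x <= b, x >= 0.
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# Fold the nonnegativity constraints into the inequality system,
# then intersect every pair of constraint lines to get candidate vertices.
A_all = np.vstack([A, -np.eye(2)])
b_all = np.concatenate([b, np.zeros(2)])

best_x, best_val = None, -np.inf
for i, j in itertools.combinations(range(len(b_all)), 2):
    M = A_all[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue  # parallel constraints: no intersection point
    x = np.linalg.solve(M, b_all[[i, j]])
    if np.all(A_all @ x <= b_all + 1e-9):  # keep only feasible vertices
        val = c @ x
        if val > best_val:
            best_x, best_val = x, val

print(best_x, best_val)  # optimum at the vertex x = (2, 6), value 36
```

Enumerating all constraint pairs costs O(m^2) solves and is only viable for toy problems; the revised simplex method instead walks from vertex to adjacent vertex, updating a factorized basis, which is what makes large-scale benchmarks tractable.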
Anon.
1994-01-01
The aim of the TESLA (TeV Superconducting Linear Accelerator) collaboration (at present 19 institutions from seven countries) is to establish the technology for a high energy electron-positron linear collider using superconducting radiofrequency cavities to accelerate its beams. Another basic goal is to demonstrate that such a collider can meet its performance goals in a cost effective manner. For this the TESLA collaboration is preparing a 500 MeV superconducting linear test accelerator at the DESY Laboratory in Hamburg. This TTF (TESLA Test Facility) consists of four cryomodules, each approximately 12 m long and containing eight 9-cell solid niobium cavities operating at a frequency of 1.3 GHz
Lexan Linear Shaped Charge Holder with Magnets and Backing Plate
Maples, Matthew W.; Dutton, Maureen L.; Hacker, Scott C.; Dean, Richard J.; Kidd, Nicholas; Long, Chris; Hicks, Robert C.
2013-01-01
A method was developed for cutting a fabric structural member in an inflatable module, without damaging the internal structure of the module, using linear shaped charge. Lexan and magnets are used in a charge holder to precisely position the linear shaped charge over the desired cut area. Two types of charge holders have been designed, each with its own backing plate. One holder cuts fabric straps in the vertical configuration, and the other charge holder cuts fabric straps in the horizontal configuration.
Evaluation of measurement precision errors at different bone density values
Wilson, M.; Wong, J.; Bartlett, M.; Lee, N.
2002-01-01
Full text: The precision error commonly used in serial monitoring of BMD values using Dual Energy X-ray Absorptiometry (DEXA) is 0.01-0.015 g/cm² for both the L2-L4 lumbar spine and total femur. However, this limit is based on normal individuals with bone densities similar to the population mean. The purpose of this study was to systematically evaluate precision errors over the range of bone density values encountered in clinical practice. In 96 patients a BMD scan of the spine and femur was immediately repeated by the same technologist, with the patient taken off the bed and repositioned between scans. Nine technologists participated. Values were obtained for the total femur and spine. Each value was classified as low range (0.75-1.05 g/cm²) or medium range (1.05-1.35 g/cm²) for the spine, and low range (0.55-0.85 g/cm²) or medium range (0.85-1.15 g/cm²) for the total femur. Results show that the precision error was significantly lower in the medium range for total femur results, with the medium range value at 0.015 g/cm² and the low range at 0.025 g/cm² (p<0.01). No significant difference was found for the spine results. We also analysed precision errors between three technologists and found that a significant difference (p=0.05) occurred between only two technologists, and this was seen in the spine data only. We conclude that there is some evidence that the precision error increases at the outer limits of the normal bone density range. Also, the results show that having multiple trained operators does not greatly increase the BMD precision error. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc
Linearly Adjustable International Portfolios
Fonseca, R. J.; Kuhn, D.; Rustem, B.
2010-09-01
We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.
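The tractability gain from restricting recourse decisions to linear rules can be seen in a toy variable count: a scenario-tree formulation stores one decision per scenario, while a linear rule stores only an intercept and a coefficient matrix. The sketch below is a hedged illustration with invented dimensions, not the paper's portfolio model.

```python
import numpy as np

# Illustrative dimensions (assumptions for this sketch only).
rng = np.random.default_rng(0)
n_assets, n_factors, n_scenarios = 5, 3, 10_000
xi = rng.normal(size=(n_scenarios, n_factors))  # sampled risk factors

# Scenario-tree policy: one recourse decision vector per scenario.
tree_vars = n_scenarios * n_assets

# Linear decision rule x(xi) = a + B @ xi: the decision variables are
# just the intercept a and the matrix B, independent of scenario count.
a = rng.normal(size=n_assets)
B = rng.normal(size=(n_assets, n_factors))
rule_vars = a.size + B.size

# The rule evaluates over all scenarios in one vectorized pass.
decisions = a + xi @ B.T  # shape (n_scenarios, n_assets)

print(tree_vars, rule_vars, decisions.shape)  # 50000 vs 20 variables
```

The linear rule is conservative (it cannot represent every feasible policy), which is exactly the trade-off the abstract describes: a tractable approximation in exchange for restricting the policy space.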
Barkman, W.E.; Adams, W.Q.; Berrier, B.R.
1978-01-01
A linear induction motor has been operated on a test bed with a feedback pulse resolution of 5 nm (0.2 μin). Slewing tests with this slide drive have shown positioning errors less than or equal to 33 nm (1.3 μin) at feedrates between 0 and 25.4 mm/min (0-1 ipm). A 0.86-m (34-in)-stroke linear motor is being investigated, using the SPACO machine as a test bed. Initial results were encouraging, and work is continuing to optimize the servosystem compensation
Hogben, Leslie
2013-01-01
With a substantial amount of new material, the Handbook of Linear Algebra, Second Edition provides comprehensive coverage of linear algebra concepts, applications, and computational software packages in an easy-to-use format. It guides you from the very elementary aspects of the subject to the frontiers of current research. Along with revisions and updates throughout, the second edition of this bestseller includes 20 new chapters. New to the second edition are separate chapters on Schur complements, additional types of canonical forms, tensors, matrix polynomials, matrix equations, and special types of
Linear Algebra Thoroughly Explained
Vujičić, Milan
2008-01-01
Linear Algebra Thoroughly Explained provides a comprehensive introduction to the subject suitable for adoption as a self-contained text for courses at undergraduate and postgraduate level. The clear and comprehensive presentation of the basic theory is illustrated throughout with an abundance of worked examples. The book is written for teachers and students of linear algebra at all levels and across mathematics and the applied sciences, particularly physics and engineering. It will also be an invaluable addition to research libraries as a comprehensive resource book for the subject.
High precision anatomy for MEG.
Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth
2014-02-01
Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within session and between session head movements. Systematic errors in matching to MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1mm. Estimates of relative co-registration error were <1.5mm between sessions. We corroborated these scalp based estimates by looking at the MEG data recorded over a 6month period. We found that the between session sensor variability of the subject's evoked response was of the order of the within session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. © 2013. Published by Elsevier Inc. All rights reserved.
Southworth, B.
1985-01-01
The peak of the construction phase of the Stanford Linear Collider, SLC, to achieve 50 GeV electron-positron collisions has now been passed. The work remains on schedule to attempt colliding beams, initially at comparatively low luminosity, early in 1987. (orig./HSI).
Mafra Neto, F.
1992-01-01
The dose of gamma radiation from a linear source of cesium-137 is obtained, which presents two difficulties: oblique filtration of the radiation as it crosses the platinum wall in different directions, and dose correction due to scattering by the material medium of propagation. (C.G.C.)
Resistors Improve Ramp Linearity
Kleinberg, L. L.
1982-01-01
Simple modification to bootstrap ramp generator gives more linear output over longer sweep times. New circuit adds just two resistors, one of which is adjustable. Modification cancels nonlinearities due to variations in load on charging capacitor and due to changes in charging current as the voltage across capacitor increases.
LINEAR COLLIDERS: 1992 workshop
Settles, Ron; Coignet, Guy
1992-01-01
As work on designs for future electron-positron linear colliders pushes ahead at major Laboratories throughout the world in a major international collaboration framework, the LC92 workshop held in Garmisch Partenkirchen this summer, attended by 200 machine and particle physicists, provided a timely focus
Brameier, Markus
2007-01-01
Presents a variant of Genetic Programming that evolves imperative computer programs as linear sequences of instructions, in contrast to the more traditional functional expressions or syntax trees. This book serves as a reference for researchers, but also contains sufficient introduction for students and those who are new to the field
Takeda, Seishi
1992-01-01
The status of R&D of future e+e- linear colliders proposed by institutions throughout the world is described, including the JLC, NLC, VLEPP, CLIC, DESY/THD and TESLA projects. The parameters and RF sources are discussed. (G.P.) 36 refs.; 1 tab
Precision fiducialization of transport components
Fischer, G.E.; Bressler, V.E.; Cobb, J.K.; Jensen, D.R.; Ruland, R.E.; Walz, H.V.; Williams, S.H.
1992-03-01
The Final Focus Test Beam (FFTB) is a transport line designed to test both concepts and advanced technology for application to future linear colliders. It is currently under construction at SLAC in the central beam line. Most of the quadrupoles of the FFTB have ab initio alignment tolerances of less than 30 microns, if the planned beam-based alignment tuning procedure is to converge. For such placement tolerances to have any meaning, the coordinates of the effective centers seen by the beam particles must be transferred, to comparable or better values, to tooling that can be reached by mechanical or optical alignment methods and is located on the outside of the components. We have constructed an apparatus that simultaneously locates, to micron tolerances, the effective magnetic center of focusing lenses as well as the electrical center of beam position monitors (BPMs) embedded therein and, once they are located, transfers these coordinates to specially mounted tooling frames that support the external retroreflectors used in a laser-tracker-based alignment of the beam line. Details of construction as well as experimental results from the method are presented
Precision Medicine, Cardiovascular Disease and Hunting Elephants.
Joyner, Michael J
2016-01-01
Precision medicine postulates improved prediction, prevention, diagnosis and treatment of disease based on patient specific factors especially DNA sequence (i.e., gene) variants. Ideas related to precision medicine stem from the much anticipated "genetic revolution in medicine" arising seamlessly from the human genome project (HGP). In this essay I deconstruct the concept of precision medicine and raise questions about the validity of the paradigm in general and its application to cardiovascular disease. Thus far precision medicine has underperformed based on the vision promulgated by enthusiasts. While niche successes for precision medicine are likely, the promises of broad based transformation should be viewed with skepticism. Open discussion and debate related to precision medicine are urgently needed to avoid misapplication of resources, hype, iatrogenic interventions, and distraction from established approaches with ongoing utility. Failure to engage in such debate will lead to negative unintended consequences from a revolution that might never come. Copyright © 2016 Elsevier Inc. All rights reserved.
Yang, Yanchao
2013-05-01
We present a method to determine the precise shape of a dynamic object from video. This problem is fundamental to computer vision, and has a number of applications, for example, 3D video/cinema post-production, activity recognition and augmented reality. Current tracking algorithms that determine precise shape can be roughly divided into two categories: 1) Global statistics partitioning methods, where the shape of the object is determined by discriminating global image statistics, and 2) Joint shape and appearance matching methods, where a template of the object from the previous frame is matched to the next image. The former is limited in cases of complex object appearance and cluttered background, where global statistics cannot distinguish between the object and background. The latter is able to cope with complex appearance and a cluttered background, but is limited in cases of camera viewpoint change and object articulation, which induce self-occlusions and self-disocclusions of the object of interest. The purpose of this thesis is to model self-occlusion/disocclusion phenomena in a joint shape and appearance tracking framework. We derive a non-linear dynamic model of the object shape and appearance taking into account occlusion phenomena, which is then used to infer self-occlusions/disocclusions, shape and appearance of the object in a variational optimization framework. To ensure robustness to other unmodeled phenomena that are present in real-video sequences, the Kalman filter is used for appearance updating. Experiments show that our method, which incorporates the modeling of self-occlusion/disocclusion, increases the accuracy of shape estimation in situations of viewpoint change and articulation, and out-performs current state-of-the-art methods for shape tracking.
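The appearance-updating step above relies on the standard Kalman filter. As a hedged illustration only (a generic scalar filter with invented noise parameters, not the thesis's appearance model), one predict/update cycle looks like:

```python
import numpy as np

def kalman_step(x, P, z, q=1e-3, r=0.1):
    """One predict/update cycle for a scalar random-walk state.
    x: state estimate, P: its variance, z: new measurement,
    q: process noise variance, r: measurement noise variance."""
    # Predict: the state is modeled as constant, so only uncertainty grows.
    x_pred, P_pred = x, P + q
    # Update: the Kalman gain blends prediction and measurement.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Track a constant "appearance" value of 5.0 through noisy observations.
rng = np.random.default_rng(1)
truth = 5.0
x, P = 0.0, 1.0  # deliberately wrong initial guess
for _ in range(200):
    z = truth + rng.normal(scale=0.3)
    x, P = kalman_step(x, P, z)

print(x, P)  # estimate converges toward 5.0 with small residual variance
```

In the tracking setting, the same blend of predicted and observed values is what damps unmodeled phenomena in real video: a noisy frame nudges, rather than overwrites, the maintained appearance template.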
Shuffle motor: a high force, high precision linear electrostatic stepper motor
Tas, Niels Roelof; Wissink, Jeroen; Sander, A.F.M.; Sander, Louis; Lammerink, Theodorus S.J.; Elwenspoek, Michael Curt
1997-01-01
The shuffle motor is an electrostatic stepper motor that employs a mechanical transformation to obtain high forces and small steps. A model has been made to calculate the driving voltage, step size, and maximum pull load, as well as the optimal geometry. Test results are an effective step size of
Superior intraparietal sulcus controls the variability of visual working memory precision
Galeano Weber, E.M.; Peters, B.; Hahn, T.; Bledowski, C.; Fiebach, C.J.
2016-01-01
Limitations of working memory (WM) capacity depend strongly on the cognitive resources that are available for maintaining WM contents in an activated state. Increasing the number of items to be maintained in WM was shown to reduce the precision of WM and to increase the variability of WM precision
Far-Field Superresolution of Thermal Electromagnetic Sources at the Quantum Limit.
Nair, Ranjith; Tsang, Mankei
2016-11-04
We obtain the ultimate quantum limit for estimating the transverse separation of two thermal point sources using a given imaging system with limited spatial bandwidth. We show via the quantum Cramér-Rao bound that, contrary to the Rayleigh limit in conventional direct imaging, quantum mechanics does not mandate any loss of precision in estimating even deep sub-Rayleigh separations. We propose two coherent measurement techniques, easily implementable using current linear-optics technology, that approach the quantum limit over an arbitrarily large range of separations. Our bound is valid for arbitrary source strengths, all regions of the electromagnetic spectrum, and for any imaging system with an inversion-symmetric point-spread function. The measurement schemes can be applied to microscopy, optical sensing, and astrometry at all wavelengths.
The Development of Precise Engineering Surveying Technology
LI Guangyun
2017-10-01
Full text available. With the construction of big science projects in China, precise engineering surveying technology has developed rapidly in the 21st century. First, the paper summarizes the current state of development of precise engineering surveying instruments and theory. Then three typical cases of precise engineering surveying practice are presented: accelerator alignment, industrial measurement, and high-speed railway surveying technology.
Modeling and control of precision actuators
Kiong, Tan Kok
2013-01-01
Introduction; Growing Interest in Precise Actuators; Types of Precise Actuators; Applications of Precise Actuators; Nonlinear Dynamics and Modeling; Hysteresis; Creep; Friction; Force Ripples; Identification and Compensation of Preisach Hysteresis in Piezoelectric Actuators; SVD-Based Identification and Compensation of Preisach Hysteresis; High-Bandwidth Identification and Compensation of Hysteretic Dynamics in Piezoelectric Actuators; Concluding Remarks; Identification and Compensation of Frict
THK: CLB Crossed Linear Bearing Seismic Isolators
Toniolo, Roberto
2008-01-01
This text highlights the new seismic isolation technology called CLB (Crossed Linear Bearing), which is made of linear guides with recirculating steel ball technology. It describes specifications and building characteristics, provides examples of seismic isolation and application functionalities and shows experimental data. Since 1994, the constant commitment by Japan to develop diversified anti-seismic systems based on the precise needs of the structures to protect and the areas where they were built has led to the creation of important synergy between the research institutions of leading Japanese companies and THK's Centre for Research and Development. Their goal has been to develop new technology and solutions to allow seismic isolation to be effective in the following cases:
Induction of the Tn10 Precise Excision in E. coli Cells after Accelerated Heavy Ions Irradiation
Zhuravel, D V
2003-01-01
The influence of different kinds of irradiation on the induction of structural mutations in the bacterium Escherichia coli is considered. The regularities of Tn10 precise excision after irradiation with accelerated ⁴He and ¹²C ions of different linear energy transfer (LET) were investigated. Dose dependences of survival and of the relative frequency of Tn10 precise excision were obtained. It was shown that the relative frequency of Tn10 precise excision is an exponential function of the irradiation dose. The relative biological efficiency (RBE) and relative genetic efficiency (RGE) were calculated and treated as functions of the LET.
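An exponential dose dependence of this kind is typically quantified by a log-linear fit. The sketch below uses purely synthetic numbers (an assumed baseline frequency and rate constant, not the paper's data) to illustrate recovering the rate constant k from f(D) = f0 · exp(k·D).

```python
import numpy as np

# Assumed parameters for the synthetic example (not measured values).
f0_true, k_true = 1e-6, 0.8
D = np.linspace(0.0, 5.0, 11)         # irradiation doses, arbitrary units
f = f0_true * np.exp(k_true * D)      # noise-free synthetic frequencies

# Taking logs turns the exponential into a straight line:
# log f = log f0 + k * D, so an ordinary least-squares fit recovers k.
k_fit, log_f0_fit = np.polyfit(D, np.log(f), 1)

print(k_fit, np.exp(log_f0_fit))  # recovers k = 0.8 and f0 = 1e-6
```

With real count data one would weight the fit (low-frequency points carry larger relative error), but the log-linear transform is the standard first step.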
Precision of quantization of the hall conductivity in a finite-size sample: Power law
Greshnov, A. A.; Kolesnikova, E. N.; Zegrya, G. G.
2006-01-01
A microscopic calculation of the conductivity in the integer quantum Hall effect (IQHE) mode is carried out. The precision of quantization is analyzed for finite-size samples. The precision of quantization shows a power-law dependence on the sample size. A new scaling parameter describing this dependence is introduced. It is also demonstrated that the precision of quantization linearly depends on the ratio between the amplitude of the disorder potential and the cyclotron energy. The data obtained are compared with the results of magnetotransport measurements in mesoscopic samples
Téllez, Helena; Druce, John; Hong, Jong-Eun; Ishihara, Tatsumi; Kilner, John A
2015-03-03
The accuracy and precision of isotopic analysis in Time-of-Flight secondary ion mass spectrometry (ToF-SIMS) relies on the appropriate reduction of the dead-time and detector saturation effects, especially when analyzing species with high ion yields or present in high concentrations. Conventional approaches to avoid these problems are based on Poisson dead-time correction and/or an overall decrease of the total secondary ion intensity by reducing the target current. This ultimately leads to poor detection limits for the minor isotopes and high uncertainties of the measured isotopic ratios. An alternative strategy consists of the attenuation of those specific secondary ions that saturate the detector, providing an effective extension of the linear dynamic range. In this work, the selective attenuation of secondary ion signals (SASI) approach is applied to the study of oxygen transport properties in electroceramic materials by isotopic labeling with stable (18)O tracer and ToF-SIMS depth profiling. The better analytical performance in terms of accuracy and precision allowed a more reliable determination of the oxygen surface exchange and diffusion coefficients while maintaining good mass resolution and limits of detection for other minor secondary ion species. This improvement is especially relevant to understand the ionic transport mechanisms and properties of solid materials, such as the parallel diffusion pathways (e.g., oxygen diffusion through bulk, grain boundary, or dislocations) in electroceramic materials with relevant applications in energy storage and conversion devices.
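For context, the conventional Poisson dead-time correction the abstract contrasts with SASI can be sketched as follows. This is the textbook form for a detector that registers at most one count per species per shot, with simulated numbers rather than the paper's measurements: the detected fraction p = k/n over n shots underestimates the true mean rate, which Poisson statistics recovers as λ = −ln(1 − p).

```python
import numpy as np

def poisson_deadtime_correct(k, n):
    """True mean counts per shot, from k detections over n shots,
    assuming at most one detectable count per shot (Poisson model)."""
    p = k / n
    return -np.log1p(-p)  # -ln(1 - p), numerically stable for small p

# Simulate a saturating detector: true rate 0.5 ions/shot,
# but at most 1 count is registered per shot.
rng = np.random.default_rng(2)
n_shots = 1_000_000
true_rate = 0.5
detected = np.minimum(rng.poisson(true_rate, n_shots), 1).sum()

naive = detected / n_shots                           # biased low
corrected = poisson_deadtime_correct(detected, n_shots)
print(naive, corrected)  # naive ~0.39, corrected ~0.50
```

The correction blows up as p approaches 1, which is why intense signals still force a lower target current; signal attenuation schemes such as the SASI approach described above sidestep this by keeping the saturating species within the detector's linear range.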
Quantum mechanics and precision measurements
Ramsey, N.F.
1995-01-01
The accuracies of measurements of almost all fundamental physical constants have increased by factors of about 10000 during the past 60 years. Although some of the improvements are due to greater care, most are due to new techniques based on quantum mechanics. Although the Heisenberg Uncertainty Principle often limits measurement accuracies, in many cases the validity of quantum mechanics makes possible the vastly improved measurement accuracies. Seven quantum features that have a profound influence on the science of measurements are: 1) Existence of discrete quantum states of energy. 2) Energy conservation in transitions between two states. 3) Electromagnetic radiation of frequency ν is quantized with energy hν per quantum. 4) The identity principle. 5) The Heisenberg Uncertainty Principle. 6) Addition of probability amplitudes (not probabilities). 7) Wave and coherent phase phenomena. Of these seven quantum features, only the Heisenberg Uncertainty Principle limits the accuracy of measurements, and its effect is often negligibly small. The other six features make possible much more accurate measurements of quantum systems than with almost all classical systems. These effects are discussed and illustrated
Precision Electrophile Tagging in Caenorhabditis elegans.
Long, Marcus J C; Urul, Daniel A; Chawla, Shivansh; Lin, Hong-Yu; Zhao, Yi; Haegele, Joseph A; Wang, Yiran; Aye, Yimon
2018-01-16
Adduction of an electrophile to privileged sensor proteins and the resulting phenotypically dominant responses are increasingly appreciated as being essential for metazoan health. Functional similarities between the biological electrophiles and electrophilic pharmacophores commonly found in covalent drugs further fortify the translational relevance of these small-molecule signals. Genetically encodable or small-molecule-based fluorescent reporters and redox proteomics have revolutionized the observation and profiling of cellular redox states and electrophile-sensor proteins, respectively. However, precision mapping between specific redox-modified targets and specific responses has only recently begun to be addressed, and systems tractable to both genetic manipulation and on-target redox signaling in vivo remain largely limited. Here we engineer transgenic Caenorhabditis elegans expressing functional HaloTagged fusion proteins and use this system to develop a generalizable light-controlled approach to tagging a prototypical electrophile-sensor protein with native electrophiles in vivo. The method circumvents issues associated with low uptake/distribution and toxicity/promiscuity. Given the validated success of C. elegans in aging studies, this optimized platform offers a new lens with which to scrutinize how on-target electrophile signaling influences redox-dependent life span regulation.
Design of Janus Nanoparticles with Atomic Precision
Sun, Qiang; Wang, Qian; Jena, Puru; Kawazoe, Yoshi
2008-03-01
Janus nanoparticles, characterized by their anisotropic structure and interactions have added a new dimension to nanoscience because of their potential applications in biomedicine, sensors, catalysis and assembled materials. The technological applications of these nanoparticles, however, have been limited as the current chemical, physical, and biosynthetic methods lack sufficient size and shape selectivity. We report a technique where gold clusters doped with tungsten can serve as a seed that facilitates the natural growth of anisotropic nanostructures whose size and shape can be controlled with atomic precision. Using ab initio simulated annealing and molecular dynamics calculations on AunW (n>12) clusters, we discovered that the W@Au12 cage cluster forms a very stable core with the remaining Au atoms forming patchy structures on its surface. The anisotropic geometry gives rise to anisotropies in vibrational spectra, charge distributions, electronic structures, and reactivity, thus making it useful to have dual functionalities. In particular, the core-patch structure is shown to possess a hydrophilic head and a hydrophobic tail. The W@Au12 clusters can also be used as building blocks of a nano-ring with novel properties.
Precision for B-meson matrix elements
Guazzini, D.; Sommer, R.; Tantalo, N.
2007-10-01
We demonstrate how HQET and the Step Scaling Method for B-physics, pioneered by the Tor Vergata group, can be combined to reach a further improved precision. The observables considered are the mass of the b-quark and the B_s-meson decay constant. The demonstration is carried out in quenched lattice QCD. We start from a small volume, where one can use a standard O(a)-improved relativistic action for the b-quark, and compute two step scaling functions which relate the observables to the large volume ones. In all steps we extrapolate to the continuum limit, separately in HQET and in QCD for masses below m_b. The physical point m_b is then reached by an interpolation of the continuum results in 1/m. The essential, expected and verified, feature is that the step scaling functions have a weak mass-dependence, resulting in an easy interpolation to the physical point. With r_0 = 0.5 fm and the experimental B_s and K masses as input, we find F_{B_s} = 191(6) MeV and the renormalization group invariant mass M_b = 6.88(10) GeV, translating into m̄_b(m̄_b) = 4.42(6) GeV in the MS scheme. This approach seems very promising for full QCD. (orig.)
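The interpolation to the physical point described above can be illustrated with a small numerical sketch. All numbers below are invented for illustration and are not the paper's data: the continuum step scaling function is evaluated at a few masses below m_b, fitted at low order in 1/m (which the weak mass dependence justifies), and evaluated at 1/m_b.

```python
import numpy as np

# Toy sketch of the interpolation in 1/m described above; all values are
# invented for illustration and are not the paper's data.
inv_m = np.array([0.30, 0.25, 0.20])   # 1/m at the simulated masses (toy units)
sigma = np.array([1.12, 1.10, 1.09])   # continuum step scaling function (toy values)

# The weak mass dependence is what makes a low-order fit in 1/m reliable.
coeffs = np.polyfit(inv_m, sigma, deg=1)
inv_mb = 0.145                          # physical point 1/m_b (toy value)
sigma_phys = np.polyval(coeffs, inv_mb)

# The large-volume observable is the small-volume result times the
# interpolated step scaling factor.
F_small_volume = 170.0                  # toy small-volume value
F_large_volume = F_small_volume * sigma_phys
```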
Finite-dimensional linear algebra
Gockenbach, Mark S
2010-01-01
Some Problems Posed on Vector Spaces: linear equations; best approximation; diagonalization; summary. Fields and Vector Spaces: fields; vector spaces; subspaces; linear combinations and spanning sets; linear independence; basis and dimension; properties of bases; polynomial interpolation and the Lagrange basis; continuous piecewise polynomial functions. Linear Operators: linear operators; more properties of linear operators; isomorphic vector spaces; linear operator equations; existence and uniqueness of solutions; the fundamental theorem; inverse operators. Gaussian elimination; Newton's method; linear ordinary differential equations.
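As a small illustration of one topic listed in the contents (polynomial interpolation and the Lagrange basis), here is a minimal self-contained sketch; the function names are ours, not the book's.

```python
def lagrange_basis(xs, j, x):
    """Evaluate the j-th Lagrange basis polynomial for nodes xs at point x."""
    out = 1.0
    for k, xk in enumerate(xs):
        if k != j:
            out *= (x - xk) / (xs[j] - xk)
    return out

def lagrange_interpolate(xs, ys, x):
    """Evaluate the unique interpolating polynomial through (xs[i], ys[i]) at x."""
    return sum(ys[j] * lagrange_basis(xs, j, x) for j in range(len(xs)))

# With three nodes the interpolant reproduces any quadratic exactly:
xs = [0.0, 1.0, 2.0]
ys = [t * t for t in xs]   # samples of f(t) = t^2
```

`lagrange_interpolate(xs, ys, 1.5)` then returns 2.25, the exact value of t² at t = 1.5, since the interpolant of a quadratic through three distinct nodes is the quadratic itself.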
A non-linear algorithm for current signal filtering and peak detection in SiPM
Putignano, M; Intermite, A; Welsch, C P
2012-01-01
Read-out of Silicon Photomultipliers is commonly achieved by means of charge integration, a method particularly susceptible to after-pulsing noise and not efficient for low-level light signals. Current signal monitoring, characterized by easier electronic implementation and intrinsically faster than charge integration, is also more suitable for low-level light signals and can potentially result in much reduced after-pulsing noise effects. However, its use is to date limited by the need to develop a suitable read-out algorithm for signal analysis and filtering, able to achieve current peak detection and measurement with the needed precision and accuracy. In this paper we present an original algorithm, based on a piecewise linear-fitting approach, to filter the noise of the current signal and hence efficiently identify and measure current peaks. The proposed algorithm is then compared with the optimal linear filtering algorithm for time-encoded peak detection, based on a moving average routine, and assessed in terms of accuracy, precision, and peak detection efficiency, demonstrating improvements of 1-2 orders of magnitude in all these quality factors.
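The paper's exact algorithm is not reproduced in the abstract, so the following is only a simplified stand-in: it assumes a sliding-window least-squares line fit as the smoothing step and a plain local-maximum rule for peak detection. All names, the window size, and the threshold are our choices, not the authors'.

```python
import numpy as np

def piecewise_linear_filter(signal, window=5):
    """Smooth a noisy current trace by fitting a short least-squares line
    segment over a sliding window and evaluating it at the current sample.
    A simplified stand-in for the piecewise linear-fitting approach."""
    n = len(signal)
    out = np.empty(n)
    half = window // 2
    for i in range(n):
        lo = max(0, i - half)
        hi = min(n, lo + window)
        lo = max(0, hi - window)          # keep a full window near the edges
        seg = signal[lo:hi]
        slope, intercept = np.polyfit(np.arange(len(seg)), seg, 1)
        out[i] = slope * (i - lo) + intercept
    return out

def find_peaks(signal, threshold):
    """Indices of local maxima strictly above threshold."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] >= signal[i - 1]
            and signal[i] > signal[i + 1]]

# Noiseless test pulse: a Gaussian centred at sample 30.
t = np.arange(60)
pulse = np.exp(-0.5 * ((t - 30) / 4.0) ** 2)
filtered = piecewise_linear_filter(pulse, window=5)
peaks = find_peaks(filtered, threshold=0.5)
```

On real SiPM traces one would of course add noise handling and calibrate the window and threshold; the sketch only shows the structure of a filter-then-detect pipeline.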
On precision of optimization in the case of incomplete information
Volf, Petr
2012-01-01
Vol. 19, No. 30 (2012), pp. 170-184 ISSN 1212-074X R&D Projects: GA ČR GAP402/10/0956 Institutional support: RVO:67985556 Keywords: stochastic optimization * censored data * Fisher information * product-limit estimator Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/SI/volf-on precision of optimization in the case of incomplete information.pdf
Conn, R.W.
1984-05-01
Recent experiments with a scoop limiter without active internal pumping have been carried out in the PDX tokamak with up to 6 MW of auxiliary neutral beam heating. Experiments have also been done with a rotating head pump limiter in the PLT tokamak in conjunction with RF plasma heating. Extensive experiments have been done in the ISX-B tokamak and first experiments have been completed with the ALT-I limiter in TEXTOR. The pump limiter modules in these latter two machines have internal getter pumping. Experiments in ISX-B are with ohmic and auxiliary neutral beam heating. The results in ISX-B and TEXTOR show that active density control and particle removal are achieved with pump limiters. In ISX-B, the boundary layer (or scrape-off layer) plasma partially screens the core plasma from gas injection. In both ISX-B and TEXTOR, the pressure internal to the module scales linearly with plasma density, but in ISX-B, with neutral beam injection, a nonlinear increase is observed at the highest densities studied. Plasma plugging is the suspected cause. Results from PDX suggest that a regime may exist in which core plasma energy confinement improves using a pump limiter during neutral beam injection. Asymmetric radial profiles and an increased edge electron temperature are observed in discharges with improved confinement. The injection of small amounts of neon into ISX-B has more clearly shown an improved electron core energy confinement during neutral beam injection. While carried out with a regular limiter, this Z-mode of operation is ideal for use with pump limiters and should be a way to achieve energy confinement times similar to values for H-mode tokamak plasmas. The implication of all these results for the design of a reactor pump limiter is described.
Conn, R.W.; California Univ., Los Angeles
1984-01-01
Recent experiments with a scoop limiter without active internal pumping have been carried out in the PDX tokamak with up to 6 MW of auxiliary neutral beam heating. Experiments have also been performed with a rotating head pump limiter in the PLT tokamak in conjunction with RF plasma heating. Extensive experiments have been done in the ISX-B tokamak and first experiments have been completed with the ALT-I limiter in TEXTOR. The pump limiter modules in these latter two machines have internal getter pumping. Experiments in ISX-B are with ohmic and auxiliary neutral beam heating. The results in ISX-B and TEXTOR show that active density control and particle removal are achieved with pump limiters. In ISX-B, the boundary layer (or scrape-off layer) plasma partially screens the core plasma from gas injection. In both ISX-B and TEXTOR, the pressure internal to the module scales linearly with plasma density, but in ISX-B, with neutral beam injection, a nonlinear increase is observed at the highest densities studied. Plasma plugging is the suspected cause. Results from PDX suggest that a regime may exist in which core plasma energy confinement improves using a pump limiter during neutral beam injection. Asymmetric radial profiles and an increased edge electron temperature are observed in discharges with improved confinement. The injection of small amounts of neon into ISX-B has more clearly shown an improved electron core energy confinement during neutral beam injection. While carried out with a regular limiter, this 'Z-mode' of operation is ideal for use with pump limiters and should be a way to achieve energy confinement times similar to values for H-mode tokamak plasmas. The implication of all these results for the design of a reactor pump limiter is described. (orig.)
Finding Traps in Non-linear Spin Arrays
Wiesniak, Marcin; Markiewicz, Marcin
2009-01-01
Precise knowledge of the Hamiltonian of a system is a key to many of its applications. Tasks such as state transfer or quantum computation have been well studied for a linear chain, but hardly for systems that do not possess a linear structure. While this difference does not disturb the end-to-end dynamics of a single excitation, the evolution is significantly changed in other subspaces. Here we quantify the difference between a linear chain and a pseudo-chain, which have more than one spin ...
Linearity and Non-linearity of Photorefractive effect in Materials ...
In this paper we have studied the linearity and non-linearity of the photorefractive effect in materials using the band transport model. For low light beam intensities the change in the refractive index is proportional to the electric field for linear optics, while for non-linear optics the change in refractive index is directly proportional ...
Antares alignment gimbal positioner linear bearing tests
Day, R.D.; McKay, M.D.; Pierce, D.D.; Lujan, R.E.
1981-01-01
The data indicate that of the six configurations tested, the solid circular rails with either the wet or dry lubricant are superior to the other configurations. Therefore, these two will undergo additional tests. These tests will consist of (1) modifying the testing procedure to obtain a better estimation of the limits of precision; and (2) subjecting the bearings to moments more closely approximating the actual conditions they will undergo on the AGP
Kuznetsov, N.; Maz'ya, V.; Vainberg, B.
2002-08-01
This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of behavior of floating bodies (ships, submarines, tension-leg platforms etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to constructing examples of non-uniqueness, usually referred to as 'trapped modes.'
Høskuldsson, Agnar
1996-01-01
Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria are widely used ones, while the remaining four are derived from the H-principle of mathematical modeling. Many examples from practice show that the criteria derived from the H-principle function better than the known and popular criteria for the number of components. We shall briefly review the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples.
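The eight criteria themselves are not spelled out in the abstract, so the snippet below shows only one generic, commonly used dimension criterion (cumulative explained variance) as a hypothetical illustration of what "choosing the number of components" means; it is not one of the paper's criteria.

```python
import numpy as np

def choose_dimension(X, tol=0.95):
    """Smallest number of components whose cumulative share of the variance
    of the column-centered data matrix X reaches tol. A generic criterion,
    shown only for illustration; not one of the paper's eight."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    frac = s**2 / np.sum(s**2)
    return int(np.searchsorted(np.cumsum(frac), tol) + 1)

# A matrix built from exactly two zero-mean, orthogonal directions
# should be assigned dimension 2:
t = np.arange(100)
u1 = np.cos(2 * np.pi * t / 100)
u2 = np.sin(2 * np.pi * t / 100)
X = 3 * np.outer(u1, [1.0, 0.0, 0.0]) + 2 * np.outer(u2, [0.0, 1.0, 0.0])
```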
Henneaux, Marc; Teitelboim, Claudio
2005-01-01
We show that duality transformations of linearized gravity in four dimensions, i.e., rotations of the linearized Riemann tensor and its dual into each other, can be extended to the dynamical fields of the theory so as to be symmetries of the action and not just symmetries of the equations of motion. Our approach relies on the introduction of two superpotentials, one for the spatial components of the spin-2 field and the other for their canonically conjugate momenta. These superpotentials are two-index, symmetric tensors. They can be taken to be the basic dynamical fields and appear locally in the action. They are simply rotated into each other under duality. In terms of the superpotentials, the canonical generator of duality rotations is found to have a Chern-Simons-like structure, as in the Maxwell case
Phinney, N.
1992-01-01
The SLAC Linear Collider has begun a new era of operation with the SLD detector. During 1991 there was a first engineering run for the SLD in parallel with machine improvements to increase luminosity and reliability. For the 1992 run, a polarized electron source was added and more than 10,000 Zs with an average of 23% polarization have been logged by the SLD. This paper discusses the performance of the SLC in 1991 and 1992 and the technical advances that have produced higher luminosity. Emphasis will be placed on issues relevant to future linear colliders such as producing and maintaining high current, low emittance beams and focusing the beams to the micron scale for collisions. (Author) tab., 2 figs., 18 refs
Linear waves and instabilities
Bers, A.
1975-01-01
The electrodynamic equations for small-amplitude waves and their dispersion relation in a homogeneous plasma are outlined. For such waves, energy and momentum, and their flow and transformation, are described. Perturbation theory of waves is treated and applied to linear coupling of waves, and the resulting instabilities from such interactions between active and passive waves. Linear stability analysis in time and space is described where the time-asymptotic, time-space Green's function for an arbitrary dispersion relation is developed. The perturbation theory of waves is applied to nonlinear coupling, with particular emphasis on pump-driven interactions of waves. Details of the time--space evolution of instabilities due to coupling are given. (U.S.)
Extended linear chain compounds
Linear chain substances span a large cross section of contemporary chemistry ranging from covalent polymers, to organic charge transfer complexes to nonstoichiometric transition metal coordination complexes. Their commonality, which coalesced intense interest in the theoretical and experimental solid state physics/chemistry communities, was based on the observation that these inorganic and organic polymeric substrates exhibit striking metal-like electrical and optical properties. Exploitation and extension of these systems has led to the systematic study of both the chemistry and physics of highly and poorly conducting linear chain substances. To gain a salient understanding of these complex materials rich in anomalous anisotropic electrical, optical, magnetic, and mechanical properties, the convergence of diverse skills and talents was required. The constructive blending of traditionally segregated disciplines such as synthetic and physical organic, inorganic, and polymer chemistry, crystallography...
Linear independence of localized magnon states
Schmidt, Heinz-Juergen; Richter, Johannes; Moessner, Roderich
2006-01-01
At the magnetic saturation field, certain frustrated lattices have a class of states known as 'localized multi-magnon states' as exact ground states. The number of these states scales exponentially with the number N of spins and hence they have a finite entropy also in the thermodynamic limit N → ∞ provided they are sufficiently linearly independent. In this paper, we present rigorous results concerning the linear dependence or independence of localized magnon states and investigate special examples. For large classes of spin lattices, including what we call the orthogonal type and the isolated type, as well as the kagome, the checkerboard and the star lattice, we have proven linear independence of all localized multi-magnon states. On the other hand, the pyrochlore lattice provides an example of a spin lattice having localized multi-magnon states with considerable linear dependence
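For explicit state vectors written out in a spin basis, checking how many of a finite set of such states are linearly independent reduces to a matrix rank computation. A minimal sketch with toy vectors (not actual magnon states):

```python
import numpy as np

def num_independent(states):
    """Dimension of the span of the given state vectors, i.e. the number
    of linearly independent states among them."""
    return int(np.linalg.matrix_rank(np.column_stack(states)))

# Toy example: the third vector is the sum of the first two, so only
# two of the three are independent.
states = [np.array([1.0, 0.0, 0.0]),
          np.array([0.0, 1.0, 0.0]),
          np.array([1.0, 1.0, 0.0])]
```

For lattices of physical size the state space grows exponentially, so rigorous arguments of the kind proved in the paper, rather than direct rank computations, are what settle independence in the thermodynamic limit.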
Diamond, Jared M.
1966-01-01
1. The relation between osmotic gradient and rate of osmotic water flow has been measured in rabbit gall-bladder by a gravimetric procedure and by a rapid method based on streaming potentials. Streaming potentials were directly proportional to gravimetrically measured water fluxes. 2. As in many other tissues, water flow was found to vary with gradient in a markedly non-linear fashion. There was no consistent relation between the water permeability and either the direction or the rate of water flow. 3. Water flow in response to a given gradient decreased at higher osmolarities. The resistance to water flow increased linearly with osmolarity over the range 186-825 m-osM. 4. The resistance to water flow was the same when the gall-bladder separated any two bathing solutions with the same average osmolarity, regardless of the magnitude of the gradient. In other words, the rate of water flow is given by the expression (Om - Os)/[Ro′ + ½k′(Om + Os)], where Ro′ and k′ are constants and Om and Os are the bathing solution osmolarities. 5. Of the theories advanced to explain non-linear osmosis in other tissues, flow-induced membrane deformations, unstirred layers, asymmetrical series-membrane effects, and non-osmotic effects of solutes could not explain the results. However, experimental measurements of water permeability as a function of osmolarity permitted quantitative reconstruction of the observed water flow—osmotic gradient curves. Hence non-linear osmosis in rabbit gall-bladder is due to a decrease in water permeability with increasing osmolarity. 6. The results suggest that aqueous channels in the cell membrane behave as osmometers, shrinking in concentrated solutions of impermeant molecules and thereby increasing membrane resistance to water flow. A mathematical formulation of such a membrane structure is offered. PMID:5945254
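The empirical flow law in point 4 translates directly into code; the symbols follow the abstract (Om and Os are the bathing-solution osmolarities, Ro′ and k′ are constants), with illustrative parameter values of our choosing.

```python
def osmotic_flow(o_m, o_s, r0, k):
    """Water flow (Om - Os) / [Ro' + 0.5*k'*(Om + Os)]: the driving gradient
    divided by a resistance that grows linearly with the mean osmolarity."""
    return (o_m - o_s) / (r0 + 0.5 * k * (o_m + o_s))

# No gradient, no flow; and equal mean osmolarity implies equal resistance
# regardless of the magnitude of the gradient, as stated in point 4.
```

With the arbitrary values Ro′ = 1 and k′ = 0.01, `osmotic_flow(400, 200, 1.0, 0.01)` gives 200/4 = 50, while the same 200-unit gradient at a higher mean osmolarity yields a smaller flow, matching the observation in point 3.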
Fundamentals of linear algebra
Dash, Rajani Ballav
2008-01-01
FUNDAMENTALS OF LINEAR ALGEBRA is a comprehensive Text Book, which can be used by students and teachers of All Indian Universities. The Text has easy, understandable form and covers all topics of UGC Curriculum. There are lots of worked out examples which helps the students in solving the problems without anybody's help. The Problem sets have been designed keeping in view of the questions asked in different examinations.
Sander, K F
1964-01-01
Linear Network Theory covers the significant algebraic aspect of network theory, with minimal reference to practical circuits. The book begins the presentation of network analysis with the exposition of networks containing resistances only, and follows it up with a discussion of networks involving inductance and capacity by way of the differential equations. Classification and description of certain networks, equivalent networks, filter circuits, and network functions are also covered. Electrical engineers, technicians, electronics engineers, electricians, and students learning the intricacies
Non linear viscoelastic models
Agerkvist, Finn T.
2011-01-01
Viscoelastic effects are often present in loudspeaker suspensions; this can be seen in the displacement transfer function, which often shows a frequency-dependent value below the resonance frequency. In this paper nonlinear versions of the standard linear solid model (SLS) are investigated. The simulations show that the nonlinear version of the Maxwell SLS model can result in a time-dependent small-signal stiffness while the Kelvin-Voigt version does not.
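For reference, the linear SLS baseline against which such nonlinear variants are compared has a simple closed-form dynamic stiffness. The sketch below implements the Maxwell form (a spring k0 in parallel with a spring-dashpot branch), with parameter names of our choosing; it is not code from the paper.

```python
def sls_stiffness(omega, k0, k1, tau):
    """Complex dynamic stiffness of the linear standard linear solid
    (Maxwell form): spring k0 in parallel with a Maxwell branch made of
    a spring k1 in series with a dashpot of relaxation time tau."""
    iwt = 1j * omega * tau
    return k0 + k1 * iwt / (1.0 + iwt)
```

At low frequency the Maxwell branch relaxes away and the stiffness tends to k0; at high frequency it tends to k0 + k1. This frequency dependence of the effective stiffness is exactly the below-resonance behaviour of the displacement transfer function mentioned in the abstract.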
Superconducting linear colliders
Anon.
1990-01-01
The advantages of superconducting radiofrequency (SRF) for particle accelerators have been demonstrated by successful operation of systems in the TRISTAN and LEP electron-positron collider rings respectively at the Japanese KEK Laboratory and at CERN. If performance continues to improve and costs can be lowered, this would open an attractive option for a high luminosity TeV (1000 GeV) linear collider